<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: IoT Builders</title>
    <description>The latest articles on DEV Community by IoT Builders (@iotbuilders).</description>
    <link>https://dev.to/iotbuilders</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F6216%2Fddd49f15-6f30-4dce-a7bc-dcc4b7001e3f.png</url>
      <title>DEV Community: IoT Builders</title>
      <link>https://dev.to/iotbuilders</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iotbuilders"/>
    <language>en</language>
    <item>
      <title>Queues don't make things faster (except when they do)</title>
      <dc:creator>Michael Dombrowski</dc:creator>
      <pubDate>Thu, 28 Sep 2023 18:04:34 +0000</pubDate>
      <link>https://dev.to/iotbuilders/queues-dont-make-things-faster-except-when-they-do-4mm1</link>
      <guid>https://dev.to/iotbuilders/queues-dont-make-things-faster-except-when-they-do-4mm1</guid>
      <description>&lt;h2&gt;
  
  
  From cloud to edge
&lt;/h2&gt;

&lt;p&gt;You may be familiar with various queue-like products including AWS SQS, Apache Kafka, and Redis. These technologies are at home in the datacenter where they're used to reliably and quickly hold and send events for processing. In the datacenter, the consumers of the queue are often able to scale based on the queue size to process a backlog of events more quickly. AWS Lambda, for example, will spawn new instances of the lambda to handle the events to avoid the queue getting too big.&lt;/p&gt;

&lt;p&gt;The world outside the datacenter is quite different though. Queue consumers cannot simply autoscale to handle increased load because the physical hardware is limited.&lt;/p&gt;

&lt;p&gt;Making a queue bigger does not increase your system's transaction rate unless you can scale the processing resources based on the queue size. When processing resources are not scalable, such as within a single physical device, then increasing any queue size will not help that device process transactions any more quickly than with a smaller queue.&lt;/p&gt;
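&lt;p&gt;A quick simulation (with made-up numbers, not tied to any particular AWS service) makes this concrete: under sustained overload, a 10x larger queue barely reduces the number of dropped events.&lt;/p&gt;

```python
from collections import deque

def simulate(capacity, tps_in, tps_out, seconds):
    """Simulate a bounded queue under a steady load; return how many events were dropped."""
    queue, dropped = deque(), 0
    for _ in range(seconds):
        for _ in range(tps_in):  # producers enqueue tps_in events per second
            if len(queue) >= capacity:
                dropped += 1     # queue full: the event is lost
            else:
                queue.append(object())
        for _ in range(min(tps_out, len(queue))):  # the consumer drains tps_out per second
            queue.popleft()
    return dropped

print(simulate(capacity=100, tps_in=10, tps_out=1, seconds=600))   # prints 5301
print(simulate(capacity=1000, tps_in=10, tps_out=1, seconds=600))  # prints 4401
```

&lt;p&gt;Over a 10-minute run, the extra 900 slots of capacity save exactly 900 drops; every event beyond that is lost either way.&lt;/p&gt;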

&lt;h2&gt;
  
  
  Where queues don't help
&lt;/h2&gt;

&lt;p&gt;Something I've seen several times when working with customers using AWS IoT Greengrass is that they'll see a log that says something to the effect of "queue is full, dropping this input" and their first instinct is to make the queue bigger. Making the queue bigger may avoid the error, but only for so long if the underlying cause of the queue filling is not addressed. If your system has a relatively constant transaction rate (measured in transactions per second (TPS)), then the queue will always fill up and overflow if the TPS going into the queue is higher than the TPS going out of the queue. If the queue capacity is enormous then the overflow may take quite a long time to be reached, but ultimately it will overflow because &lt;code&gt;TPS in &amp;gt; TPS out&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's now make this more concrete. If we have a lambda function running on an AWS IoT Greengrass device, then that lambda will pick up events from a queue and process them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o9XWZEYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxfpz0a6mn8w1taczi8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o9XWZEYg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hxfpz0a6mn8w1taczi8k.png" alt="Happy queue" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's say that the lambda can complete work at a rate of 1 TPS. If new events are added to this lambda's queue at less than or equal to 1 TPS then everything will be fine. If work comes in at 10 TPS though, then the queue is going to overflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x60AJ5jz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82d54lhm6wofhrtzkkfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x60AJ5jz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82d54lhm6wofhrtzkkfm.png" alt="Overflowing queue" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assume that the lambda has a queue capacity of 100 events. Events are added to the queue at 10 TPS, which means the queue will fill up and start overflowing in about 11 seconds &lt;code&gt;(100 capacity / (10 TPS in - 1 TPS out) = 11.1 s)&lt;/code&gt;. We can make the capacity bigger, but that only extends the time to overflow; it does not prevent the overflow from happening. Fundamentally, the lambda is unable to keep up with the work because 1 TPS is less than 10 TPS.&lt;/p&gt;
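&lt;p&gt;The arithmetic above generalizes into a one-line rule of thumb. In this sketch (illustrative only), notice that capacity appears only in the numerator: growing the queue buys time linearly, but it never prevents the overflow.&lt;/p&gt;

```python
def seconds_until_overflow(capacity, tps_in, tps_out):
    """Time until a bounded queue first drops an event under a steady load."""
    if tps_in > tps_out:
        return capacity / (tps_in - tps_out)
    return float("inf")  # the consumer keeps up; the queue never fills

print(seconds_until_overflow(100, 10, 1))     # about 11.1 s, as in the example
print(seconds_until_overflow(10_000, 10, 1))  # 100x the capacity buys only ~18.5 minutes
```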

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TERful6b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uqp2knqq08mo7ukcwfc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TERful6b--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0uqp2knqq08mo7ukcwfc.png" alt="Bigger queue, still overflowing" width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now maybe you're thinking that lambdas should scale to fix the problem: "we just need 10 lambdas working at 1 TPS each and the problem is solved". It is technically true that if you could perfectly scale to 10 instances, the problem would be solved for this level of load, but remember that AWS IoT Greengrass and these lambdas are running on a single physical device. That device only has so much compute power, so perhaps you can scale to 5 TPS with 5 or 6 lambda instances, but then you hit a brick wall because of the hardware limits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oyYnskM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpnh5p4dtjcx11lagi04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oyYnskM---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpnh5p4dtjcx11lagi04.png" alt="Consumer scaling limits" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So what can be done at this point? Perhaps the lambda can be optimized to process more quickly, but let's just say that it is as good as it gets. If the lambda cannot be optimized, then the only options are to accept that the queue will overflow and drop events or else you need to find a way to slow down the inputs to the queue.&lt;/p&gt;

&lt;h2&gt;
  
  
  What good are queues then?
&lt;/h2&gt;

&lt;p&gt;You may now think that queues are good for nothing, but of course queues exist for a reason; you just need to understand which problems they can and cannot help with.&lt;/p&gt;

&lt;p&gt;If the consumer of the queue can scale up the compute resources, such as AWS Lambda (lambda in the cloud, not on AWS IoT Greengrass) with AWS SQS, then a queue certainly makes sense and will help to process the events quickly.&lt;/p&gt;

&lt;p&gt;On a single device, queues can help with bursty traffic. If your traffic is steady like in the example above, then queues won't help you. On the other hand, if you sometimes have 10 TPS and other times have 0 TPS input, then a queue (and even a large queue) can make sense.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aJn7km67--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cs0jbimh6ew0odu5osvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aJn7km67--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cs0jbimh6ew0odu5osvl.png" alt="Burst of traffic with plenty of room in the queue" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Going back to the example from above, our lambda can process at 1 TPS. Let's say that our input is now very bursty: 10 TPS for 20 seconds, then 0 TPS for 200 seconds. The queue receives 200 events during the 20-second burst while the lambda drains about 20 of them, so the backlog peaks at roughly 180 events, and it then drains to 0 during the 200-second quiet period since no data is coming in and data flows out at 1 TPS. If the queue size were 100 like in the earlier example, then the queue would have overflowed and we'd lose events even though the lambda could eventually have processed them all if the queue were large enough. So in this case, making the queue capacity at least 200 is reasonable and should prevent overflow for this traffic pattern.&lt;/p&gt;
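&lt;p&gt;For bursty traffic you can size the queue from the worst-case backlog instead of guessing. A rough sketch of that calculation, using the made-up numbers from this example:&lt;/p&gt;

```python
def peak_backlog(burst_tps, burst_seconds, service_tps):
    """Largest backlog reached during one burst, assuming the queue starts empty."""
    return max(0, (burst_tps - service_tps) * burst_seconds)

def drains_before_next_burst(burst_tps, burst_seconds, idle_seconds, service_tps):
    """True if the consumer can empty the backlog before the next burst arrives."""
    return service_tps * idle_seconds >= peak_backlog(burst_tps, burst_seconds, service_tps)

print(peak_backlog(burst_tps=10, burst_seconds=20, service_tps=1))        # prints 180
print(drains_before_next_burst(10, 20, idle_seconds=200, service_tps=1))  # prints True
```

&lt;p&gt;If the second check returned False, the backlog would grow burst after burst and you would be back in the &lt;code&gt;TPS in &amp;gt; TPS out&lt;/code&gt; situation on average, no matter the capacity.&lt;/p&gt;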

&lt;p&gt;To summarize, if &lt;code&gt;average TPS input &amp;gt; average TPS output&lt;/code&gt; then the queue is going to overflow eventually, and it does not matter how big you make the queue. The only options are to 1. increase the output TPS, 2. decrease the input TPS, or 3. accept that you will drop events. When your input TPS is relatively constant, keep the queue size small; a small queue is more memory efficient and surfaces overflow errors sooner than a large one would. Finding such problems early encourages you to understand your traffic pattern and processing rate so you can choose one of the three options for dealing with overflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application to Greengrass
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lambda
&lt;/h3&gt;

&lt;p&gt;In this post I used lambda as an example, so how about some specific recommendations for configuring a lambda's max queue size?   &lt;/p&gt;

&lt;p&gt;For a pinned lambda which will not scale based on load, start with a queue size of 10 or less. If you're able to calculate the expected incoming TPS and traffic pattern (steady or bursty) then you can change the queue size based on that data. I would not recommend going beyond perhaps 100-500. If your queue is still overflowing at those sizes then you probably need to find another solution instead of just increasing the size.&lt;/p&gt;

&lt;p&gt;For on-demand lambdas, which do scale based on load, I'd recommend starting with a queue size of 2x the number of worker lambdas that you want to have. This way, each worker effectively has its own mini-queue of 2 items. The same recommendations from above apply here too if you understand your traffic pattern and can calculate the optimal queue size.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stream Manager
&lt;/h3&gt;

&lt;p&gt;Stream Manager is a Greengrass component which accepts data locally and then (optionally) exports it to various cloud services. It is effectively a queue connecting the local device to cloud services, where those cloud services are the consumers of the queue. Since it is a queue, the exact same logic applies to it: if data is written faster than it is exported to the cloud, then eventually the queue will overflow and, in this case, some data will be removed from the queue before being exported. It is very important to understand how quickly data is coming into a stream and how quickly it can be exported given the cloud service limits and your internet connection.&lt;/p&gt;

&lt;h3&gt;
  
  
  MQTT Publish to IoT Core
&lt;/h3&gt;

&lt;p&gt;When publishing from Greengrass to AWS IoT Core, all MQTT messages are queued in what's called the "spooler". This spooler may store messages either in memory or on disk depending on your configuration. The spooler is a queue with a configurable, limited size, so the same logic that applies to all queues applies to the spooler too. AWS IoT Core limits each connection to a maximum of 100 TPS of publishes, so if you attempt to publish faster than 100 TPS through Greengrass, the spooler will inevitably fill up and reject some messages. To resolve this, you'd need to publish more slowly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;p&gt;For a deeper understanding of queuing, see the following resources.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Queuing_Rule_of_Thumb"&gt;Wikipedia - Queuing Rule of Thumb&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.danslimmon.com/2016/08/26/the-most-important-thing-to-understand-about-queues/"&gt;Dan Slimmon - The most important thing to understand about queues&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>greengrass</category>
      <category>iot</category>
      <category>aws</category>
      <category>queue</category>
    </item>
    <item>
      <title>Developing Your First Greengrass Publish Component on Raspberry Pi</title>
      <dc:creator>Nenad Ilic</dc:creator>
      <pubDate>Thu, 20 Jul 2023 14:13:19 +0000</pubDate>
      <link>https://dev.to/iotbuilders/developing-your-first-greengrass-publish-component-on-raspberry-pi-3fg4</link>
      <guid>https://dev.to/iotbuilders/developing-your-first-greengrass-publish-component-on-raspberry-pi-3fg4</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/iotbuilders/aws-iot-greengrass-components-deployment-with-gdk-using-github-actions-245m"&gt;previous blog post&lt;/a&gt;, we delved into the world of AWS IoT Greengrass V2, focusing on the usage of the Greengrass Development Kit (GDK) and how it can be automated with GitHub Actions. Today, we're going to take it a step further. We'll walk through the process of developing your first AWS IoT Greengrass Publish component on a Raspberry Pi.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Setting Up Your Environment
&lt;/h2&gt;

&lt;p&gt;Before we start, ensure that your Raspberry Pi is set up using the &lt;a href="https://dev.to/iotbuilders/fleet-provisioning-for-embedded-linux-devices-with-aws-iot-greengrass-4h8b"&gt;Fleet Provisioning method&lt;/a&gt;, as we will be using the GitHub Actions and the project we created in our previous blog post to deploy our component. If you are working with a different OS, you will also need to follow the instructions for &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/quick-installation.html"&gt;installing AWS IoT Greengrass&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Designing the Message Publisher Component
&lt;/h2&gt;

&lt;p&gt;The first step in creating our messaging system is to create the message publisher component. This component will be responsible for sending data, such as sensor readings or application logs, to a specified topic. &lt;/p&gt;

&lt;p&gt;In AWS IoT Greengrass, components are defined by a recipe, which is a JSON or YAML file that specifies the component's metadata, lifecycle, configuration, and dependencies. For our message publisher component, we'll need to define a recipe that grants the component the necessary permissions to publish messages to a topic.&lt;/p&gt;

&lt;p&gt;Here's an example of what the recipe might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
RecipeFormatVersion: "2020-01-25"
ComponentName: "{COMPONENT_NAME}"
ComponentVersion: "{COMPONENT_VERSION}"
ComponentDescription: "A component that publishes temperature data to AWS IoT Core"
ComponentPublisher: "{COMPONENT_AUTHOR}"
ComponentConfiguration:
  DefaultConfiguration:
    accessControl:
      aws.greengrass.ipc.mqttproxy:
        'com.example.pub:mqttproxy:1':
          policyDescription: Allows access to publish the temperature to topic.
          operations:
            - aws.greengrass#PublishToIoTCore
          resources:
            - 'CPU/info'
Manifests:
  - Platform:
      os: all
    Artifacts:
      - URI: "s3://BUCKET_NAME/COMPONENT_NAME/COMPONENT_VERSION/com.example.pub.zip"
        Unarchive: ZIP
    Lifecycle:
      Run: "python3 -u {artifacts:decompressedPath}/com.example.pub/main.py"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this recipe, we're creating a component with the name &lt;code&gt;com.example.pub&lt;/code&gt;. This component is different from our previous examples due to the inclusion of an &lt;code&gt;accessControl:&lt;/code&gt; configuration. This configuration allows the component to publish messages to a specific MQTT topic.&lt;/p&gt;

&lt;p&gt;In our case, the &lt;code&gt;resource&lt;/code&gt; is set to &lt;code&gt;CPU/info&lt;/code&gt;. This setting means that our component has permission to publish only to the &lt;code&gt;CPU/info&lt;/code&gt; MQTT topic. &lt;/p&gt;

&lt;p&gt;If you need the component to publish to multiple topics, you can extend the list of &lt;code&gt;resources&lt;/code&gt; with additional topic names. Alternatively, if you want the component to have permission to publish to any topic, you can replace &lt;code&gt;CPU/info&lt;/code&gt; with &lt;code&gt;*&lt;/code&gt;. This wildcard character represents all possible topics, granting the component full publishing access. &lt;/p&gt;
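&lt;p&gt;For example, the &lt;code&gt;resources&lt;/code&gt; list from the recipe above could be extended like this (the additional topic name is purely illustrative):&lt;/p&gt;

```yaml
          resources:
            - 'CPU/info'
            - 'CPU/load'   # a hypothetical second topic
            # - '*'        # or replace the list with a wildcard for full publish access
```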

&lt;h2&gt;
  
  
  Step 3: Implementing the Message Publisher Component
&lt;/h2&gt;

&lt;p&gt;With the component designed, we can now move on to implementing the message publisher component. This involves writing the &lt;code&gt;main.py&lt;/code&gt; script that we referenced in the component recipe.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;main.py&lt;/code&gt; script will use the AWS IoT Device SDK for Python, whose &lt;code&gt;awsiot.greengrasscoreipc.clientv2&lt;/code&gt; module interacts with Greengrass IPC, to publish messages to a specified AWS IoT topic. Here's an example of what the script could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import time

import awsiot.greengrasscoreipc.clientv2 as clientV2
from awsiot.greengrasscoreipc.model import QOS

TOPIC = "CPU/info"

def get_cpu_temp():
    # Read the CPU temperature (reported in millidegrees Celsius).
    with open("/sys/class/thermal/thermal_zone0/temp") as temp_file:
        return float(temp_file.read()) / 1000

def main():
    # Create an IPC client.
    ipc_client = clientV2.GreengrassCoreIPCClientV2()
    try:
        while True:
            cpu_temp = get_cpu_temp()
            print("CPU temperature: {:.2f} C".format(cpu_temp))

            # Create a JSON payload.
            payload = json.dumps({"temperature": cpu_temp})

            # Publish the payload to AWS IoT Core with QoS 1 (at least once).
            ipc_client.publish_to_iot_core(topic_name=TOPIC, qos=QOS.AT_LEAST_ONCE, payload=payload)

            time.sleep(1)  # sleep for 1 second
    finally:
        # The loop runs until the component is stopped; close the client on exit.
        ipc_client.close()

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this script, we create a loop that continuously publishes a message to the &lt;code&gt;CPU/info&lt;/code&gt; topic. The message contains a JSON payload with a &lt;code&gt;temperature&lt;/code&gt; value, which could then be extended further in a real-world application.&lt;/p&gt;
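&lt;p&gt;As a sketch of what "extended further" might look like, the payload could carry extra metadata; the field names below are invented for illustration and are not part of the component above:&lt;/p&gt;

```python
import json
import time

def build_payload(cpu_temp, device_id="raspberrypi-01"):
    """Wrap the temperature reading with metadata a real fleet might want downstream."""
    return json.dumps({
        "device_id": device_id,         # hypothetical device identifier
        "timestamp": int(time.time()),  # epoch seconds, useful for ordering readings
        "temperature": cpu_temp,
        "unit": "celsius",
    })

print(build_payload(48.31))
```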

&lt;h2&gt;
  
  
  Step 4: Deploying the Message Publisher Component
&lt;/h2&gt;

&lt;p&gt;To deploy the new component to the device, we extend the &lt;code&gt;deployment.json.template&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "targetArn": "arn:aws:iot:$AWS_REGION:$AWS_ACCOUNT_ID:thinggroup/$THING_GROUP",
    "deploymentName": "Main deployment",
    "components": {
        "com.example.hello": {
            "componentVersion": "LATEST",
            "runWith": {}
        },
        "com.example.world": {
            "componentVersion": "LATEST",
            "runWith": {}
        },
        "com.example.pub": {
            "componentVersion": "LATEST",
            "runWith": {}
        }
    },
    "deploymentPolicies": {
        "failureHandlingPolicy": "ROLLBACK",
        "componentUpdatePolicy": {
            "timeoutInSeconds": 60,
            "action": "NOTIFY_COMPONENTS"
        },
        "configurationValidationPolicy": {
            "timeoutInSeconds": 60
        }
    },
    "iotJobConfiguration": {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you've followed our previous blog post, you should have your GitHub Actions set up for your repository. This setup allows for automatic deployment of your components. &lt;/p&gt;

&lt;p&gt;When you create a new component and commit it to your repository, make sure to add it to the &lt;code&gt;deployment.json.template&lt;/code&gt;. This step is crucial as it ensures your component is included in the deployment process.&lt;/p&gt;

&lt;p&gt;After committing the new component, the GitHub Actions workflow will trigger, resulting in the automatic deployment of your component to the targeted device, in this case, a Raspberry Pi.&lt;/p&gt;

&lt;p&gt;Once deployed, the component will start running on the Raspberry Pi. It will begin publishing messages to the specified AWS IoT topic. &lt;/p&gt;

&lt;p&gt;To verify that your component is functioning correctly, you can subscribe to the topic in the AWS IoT Core console. Here, you'll be able to observe the incoming messages, confirming that your component is publishing as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog post, we've walked through the process of developing a Greengrass Publish component on a Raspberry Pi. This is a great way to use the messaging system for your IoT applications, and I hope it helps you on your journey with AWS IoT Greengrass.&lt;/p&gt;

&lt;p&gt;For reference, please refer to &lt;a href="https://github.com/aws-iot-builder-tools/greengrass-continuous-deployments"&gt;this&lt;/a&gt; GitHub repo.&lt;br&gt;
Stay tuned for more posts on advanced Greengrass component development and other IoT topics. Happy coding!&lt;/p&gt;

&lt;p&gt;If you have any feedback about this post, or you would like to see more related content, please reach out to me here, or on &lt;a href="https://twitter.com/nenadilic84"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/nenadilic84/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>greengrass</category>
      <category>aws</category>
      <category>iot</category>
    </item>
    <item>
      <title>A Tool to Validate AWS IoT Rules SQL Statements</title>
      <dc:creator>Alina Dima</dc:creator>
      <pubDate>Wed, 12 Jul 2023 15:22:45 +0000</pubDate>
      <link>https://dev.to/iotbuilders/a-tool-to-validate-aws-iot-rules-sql-statements-jg</link>
      <guid>https://dev.to/iotbuilders/a-tool-to-validate-aws-iot-rules-sql-statements-jg</guid>
      <description>&lt;p&gt;Rules Engine is a feature in AWS IoT Core that allows engineers to filter, decode, and process IoT device data and route this data to 15+ AWS and third-party services. AWS IoT Core Rules Engine currently has support for over 70 distinct SQL functions which can be used in either SELECT or WHERE clauses, 14 distinct operators which can be used in either SELECT or WHERE clauses, all JSON data types, specifying Literal objects in the SELECT and WHERE clauses, case statements, JSON extensions, substitution templates and nested object queries and more.  &lt;/p&gt;

&lt;p&gt;In this post, we explore how to validate AWS IoT Core Rules Engine Rules SQL Statements, by introducing a validation tool which:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encapsulates the heavy lifting of creating, configuring and cleaning up the AWS resources needed to create, run and validate IoT Rule SQL payload transformations.&lt;/li&gt;
&lt;li&gt;Enables friction-free validation of Rules syntax and payload transformations.&lt;/li&gt;
&lt;li&gt;Provides an easily extensible library of sample SQL statements, with input payloads and expected output, allowing you to build your own use-cases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tool is available on GitHub, &lt;a href="https://github.com/aws-iot-builder-tools/validation-tool-for-aws-iot-rules/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need a tool?
&lt;/h2&gt;

&lt;p&gt;To validate SQL syntax and the input/output expectations, developers would normally need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a rule.&lt;/li&gt;
&lt;li&gt;Create and assign actions to the rule.&lt;/li&gt;
&lt;li&gt;Create and assign an IAM Role with valid permissions for the actions.&lt;/li&gt;
&lt;li&gt;Subscribe to the output topic (or monitor the output system), and then publish an MQTT message, to see if the rule works. &lt;/li&gt;
&lt;li&gt;If the Rule execution does not work, they have to check in Amazon CloudWatch IoT Logs or the downstream service logs to see what went wrong. &lt;/li&gt;
&lt;li&gt;If Amazon CloudWatch Logs for IoT are not enabled, developers need to enable them and then try again.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is quite a lot of heavy lifting if the ultimate purpose is simply to validate that the payload transformations from a provided input into an expected output work as intended. &lt;/p&gt;

&lt;p&gt;This called for a tool that enables developers to do a ‘closed box’ type of validation, where the input payload, SQL statement, and expected output are provided. The tool executes validation scenarios: if the Rules Engine parses the input and produces the expected output, the scenario succeeds; if not, it fails and the expected versus actual output payloads are printed for reference. &lt;/p&gt;
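&lt;p&gt;The assertion step of such a closed-box check can be sketched as a structural comparison of parsed JSON, so that key order and whitespace differences don't cause false failures (a simplification of the tool's actual logic):&lt;/p&gt;

```python
import json

def outputs_match(expected, actual):
    """Compare payloads structurally rather than as raw strings."""
    def as_obj(value):
        # Accept either a JSON string or an already-parsed object.
        return json.loads(value) if isinstance(value, str) else value
    return as_obj(expected) == as_obj(actual)

print(outputs_match('{"temp": 21.5}', {"temp": 21.5}))       # prints True
print(outputs_match('{"temp": 21.5}', '{ "temp": 21.5 }'))   # prints True
print(outputs_match('{"temp": 21.5}', '{"temp": 22.0}'))     # prints False
```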

&lt;h2&gt;
  
  
  How does the tool work?
&lt;/h2&gt;

&lt;p&gt;The diagram below shows the architecture of the validation tool. You can imagine this tool working like an integration test suite, where you only need to define the inputs, configuration, and expected outputs (the components you need to define are marked yellow). These inputs are provided as a new JSON file, which is also referred to as creating a new test case or validation scenario. See the &lt;em&gt;“Adding a new test case”&lt;/em&gt; section below for details.&lt;/p&gt;

&lt;p&gt;The integration test itself (set-up, execution, assertions, and tear-down - marked gray on the diagram) is already part of the tool. Of course, if you discover that you need different behaviour from the one the tool provides for set-up, tear-down, or test execution and assertions, you can extend this tool by creating further integration tests to fit your use case. &lt;/p&gt;

&lt;p&gt;Additionally, the tool includes a sample library of validation scenarios, comprising, to date, 6 examples drawn from the most commonly reported Rules SQL statement questions. These scenarios can be explored, executed, or modified and adapted.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoymejjootnvr8yc21gc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoymejjootnvr8yc21gc.png" alt="Architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To better understand the format of the input, have a look at the current JSON schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "$schema": "https://json-schema.org/draft/2019-09/schema",
  "type": "object",
  "properties": {
    "sqlVersion": {
      "type": "string",
      "description": "SQL version needed for the test. Defaults to latest."
    },
    "topic": {
      "type": "string",
      "description":"This is the AWS IoT MQTT topic that the input payload will be published on. The same topic must be used in the Rule SQL FROM clause. "
    },
    "inputPayload": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "object"
        }
      ],
      "description":"This is the test input payload which will be published during the test execution. "
    },
    "inputSql": {
      "type": "string",
      "description":"This is the SQL statement of the IoT Rule under evaluation. "
    },
    "expectedOutput": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "object"
        },
        {
          "type": "array"
        }
      ]
    },
    "description":"This is the expected output that the input payload will be transformed into after scenario validation execution. "

  },
  "required": [
    "topic",
    "inputPayload",
    "inputSql",
    "expectedOutput"
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
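&lt;p&gt;Based on the schema above, a minimal validation scenario file might look like the following; the topic, SQL statement, and payloads here are invented examples rather than entries from the tool's sample library:&lt;/p&gt;

```python
# Hypothetical scenario, containing every field the schema marks as required.
scenario = {
    "topic": "device/sensor",
    "inputPayload": {"temp": "21.5"},
    "inputSql": "SELECT cast(temp AS DECIMAL) AS temp FROM 'device/sensor'",
    "expectedOutput": {"temp": 21.5},
}

REQUIRED_FIELDS = ("topic", "inputPayload", "inputSql", "expectedOutput")

def is_valid_scenario(doc):
    """Check that every field the schema marks as required is present."""
    return all(field in doc for field in REQUIRED_FIELDS)

print(is_valid_scenario(scenario))                    # prints True
print(is_valid_scenario({"topic": "device/sensor"}))  # prints False
```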

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Pre-Requisites
&lt;/h3&gt;

&lt;p&gt;To be able to run the existing or new validation scenarios, you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install &lt;a href="https://docs.npmjs.com/getting-started" rel="noopener noreferrer"&gt;npm&lt;/a&gt;. This project was tested with Node v18.16.0.&lt;/li&gt;
&lt;li&gt;Have an AWS Account and provide Node.js with &lt;a href="https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html" rel="noopener noreferrer"&gt;credentials&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Repository Structure
&lt;/h3&gt;

&lt;p&gt;The project repository is structured as shown in the screenshot below: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;util&lt;/code&gt; folder containing the configuration, environment, and resource set-up and clean-up utilities. The utilities are called from the validation test automatically as needed.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;validation&lt;/code&gt; folder containing:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;validation-data&lt;/code&gt;: this is a sample library of working examples of SQL statements which validate successfully. You can extend this folder with more validation scenario files as needed. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;input-json-schema&lt;/code&gt;: providing type and description information for mandatory and optional input fields. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;validate-iot-rules&lt;/code&gt;: which is the framework for set up and execution of the provided and new validation scenarios. &lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrp2npvkbztozh4mqcmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvrp2npvkbztozh4mqcmq.png" alt="Repo structure"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  To get started
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone the GitHub repository: &lt;a href="https://github.com/aws-iot-builder-tools/validation-tool-for-aws-iot-rules" rel="noopener noreferrer"&gt;https://github.com/aws-iot-builder-tools/validation-tool-for-aws-iot-rules&lt;/a&gt;  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure you add your AWS account id in the configuration file: &lt;code&gt;util/config.js&lt;/code&gt;. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run:&lt;code&gt;npm install&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By default, the &lt;code&gt;casting-sensor-payload.json&lt;/code&gt; validation scenario is executed. To run it: &lt;code&gt;npm run test&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To change the default scenario, got to &lt;code&gt;config.js&lt;/code&gt;, and re-set the &lt;code&gt;defaultInputFile&lt;/code&gt; to a file of your choice from the provided files in the &lt;code&gt;validation/validation-data&lt;/code&gt; directory. You can choose the validation scenario you want to execute, by running: &lt;code&gt;npm test -- -inputFile=&amp;lt;existing or newly created file name&amp;gt;&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can also choose to execute all the provides scenarios, by running: &lt;code&gt;npm test -- -inputFile=ALL&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note that:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is a development tool, designed to support faster, friction-free testing and validation of SQL statements based on desired inputs and outputs. It is therefore not recommended to run this tool against production environments.&lt;/li&gt;
&lt;li&gt;Running all validation scenarios takes longer, as the tool creates all needed rules, IAM roles and policies before executing any of the tests. &lt;/li&gt;
&lt;li&gt;As tests are executed, expectations evaluate to either success or failure (with the expected and actual results printed). &lt;/li&gt;
&lt;li&gt;All AWS resources are automatically cleaned up after execution, so no manual action is needed. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If there is a test failure, you see the comparison between expected and actual output, as shown below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yxnx5uwsncwf7pi6l91.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yxnx5uwsncwf7pi6l91.png" alt="Example Failure"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Adding a new validation scenario
&lt;/h2&gt;

&lt;p&gt;Adding a new validation scenario based on your use case is straightforward. You need to add a new JSON file in the &lt;code&gt;validation-tool-for-aws-iot-rules/validation/validation-data&lt;/code&gt; folder. &lt;/p&gt;

&lt;p&gt;The following requirements hold when adding your own new validation scenario:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The contents of the file must follow the JSON schema (&lt;code&gt;validation/input-json-schema.json&lt;/code&gt;) mentioned in the paragraph above and include all mandatory fields, with the correct data types.
&lt;/li&gt;
&lt;li&gt;You also need to make sure that the topic name the test uses is unique. &lt;/li&gt;
&lt;li&gt;If you add more validation scenarios and want to execute them all together in the same test suite, bear in mind that you might need to adjust the Jest timeout value, either in the test itself or in the Jest configuration (&lt;code&gt;jest.config.js&lt;/code&gt;). &lt;/li&gt;
&lt;/ul&gt;
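&lt;p&gt;For illustration only, a scenario file might look like the hypothetical sketch below. The field names (&lt;code&gt;sql&lt;/code&gt;, &lt;code&gt;topic&lt;/code&gt;, &lt;code&gt;inputPayload&lt;/code&gt;, &lt;code&gt;expectedOutput&lt;/code&gt;) are assumptions; the authoritative list of mandatory fields and their types is defined by &lt;code&gt;validation/input-json-schema.json&lt;/code&gt;:&lt;/p&gt;

```javascript
// Hypothetical validation scenario object: the real field names and types
// are defined by validation/input-json-schema.json in the repository.
const scenario = {
  name: "uppercase-device-id",
  // SQL statement under test (illustrative example)
  sql: "SELECT upper(deviceId) AS deviceId FROM 'sensors/data'",
  // Topic the test publishes on -- must be unique across scenarios
  topic: "sensors/data",
  inputPayload: { deviceId: "abc-123" },
  expectedOutput: { deviceId: "ABC-123" }
};

// Minimal sanity check mirroring what JSON-schema validation would enforce
function hasMandatoryFields(s) {
  return ["sql", "topic", "inputPayload", "expectedOutput"].every(
    (k) => k in s
  );
}

console.log(hasMandatoryFields(scenario)); // true
```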
&lt;h2&gt;
  
  
  Conclusion and Future Improvements
&lt;/h2&gt;

&lt;p&gt;This blog post shows an approach that allows developers to experiment with and validate AWS IoT Rules SQL statements faster and friction-free. A configurable and extensible tool (&lt;a href="https://github.com/aws-iot-builder-tools/validation-tool-for-aws-iot-rules" rel="noopener noreferrer"&gt;available on GitHub&lt;/a&gt;) does the heavy lifting, encapsulating the complexities of AWS resource creation and clean-up, publishing data on topics, and republish actions. &lt;/p&gt;

&lt;p&gt;This tool is currently in its first iteration. Below is a list of improvements that could be considered for future iterations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the current version, the tool is designed to run locally. Ideally, it would be integrated into a CI/CD pipeline.&lt;/li&gt;
&lt;li&gt;Add support for MQTT 5 and protobuf.&lt;/li&gt;
&lt;li&gt;Add ability to mock Rules SQL functions which call other AWS and Amazon services, like Amazon DynamoDB or AWS Lambda.&lt;/li&gt;
&lt;li&gt;In the current version, the tool assumes that the rule executes (i.e. that the payload satisfies the WHERE clause). To validate scenarios where input payloads do not satisfy the WHERE clause, a new test case needs to be created, with modified expectations.&lt;/li&gt;
&lt;li&gt;Improve overall execution time and resilience.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information about AWS IoT Core and Rules Engine, have a look at the &lt;a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html" rel="noopener noreferrer"&gt;AWS IoT Developer Guide&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;If you have validation scenarios you would like to share, or additions to this tool, feel free reach out to me on &lt;a href="https://www.linkedin.com/in/alina-dima/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or &lt;a href="https://twitter.com/fay_ette" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, or provide feedback on GitHub. &lt;/p&gt;

&lt;p&gt;To get notified about more IoT content, you can additionally subscribe to the &lt;a href="https://www.youtube.com/@iotbuilders" rel="noopener noreferrer"&gt;IoT Builders YouTube channel&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;


&lt;div class="ltag__user ltag__user__id__934452"&gt;
    &lt;a href="/fay_ette" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F934452%2Fc6e04565-b2d8-4af5-9f35-32989d5e7b88.jpg" alt="fay_ette image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/fay_ette"&gt;Alina Dima&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/fay_ette"&gt;An engineer for about 20 years, love solving real world problems with code. I simplify complex problems to help developer communities build better and faster with AWS IoT. &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
      <category>iot</category>
      <category>awsiot</category>
      <category>testing</category>
    </item>
    <item>
      <title>Batch Ingestion of IoT Device Metrics into Amazon CloudWatch Metrics, using Embedded Metric Format (EMF)</title>
      <dc:creator>Alina Dima</dc:creator>
      <pubDate>Thu, 22 Jun 2023 10:37:34 +0000</pubDate>
      <link>https://dev.to/iotbuilders/batch-ingestion-of-iot-device-metrics-into-amazon-cloudwatch-metrics-using-embedded-metric-format-emf-3o6b</link>
      <guid>https://dev.to/iotbuilders/batch-ingestion-of-iot-device-metrics-into-amazon-cloudwatch-metrics-using-embedded-metric-format-emf-3o6b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This blog post demonstrates how to generate, ingest, store and visualize IoT device metrics by using the Embedded Metric Format (&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html"&gt;EMF&lt;/a&gt;) and native integrations between the AWS IoT Core Rules Engine and Amazon CloudWatch Logs and Metrics. The Amazon CloudWatch Embedded Metric Format is a JSON specification used to instruct Amazon CloudWatch Logs to automatically extract metric values embedded in structured log events. &lt;/p&gt;

&lt;p&gt;In this post, we will see how to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ingest metric values embedded in structured log events in batch mode via the AWS IoT Core Rules Engine and the Basic Ingest feature. &lt;/li&gt;
&lt;li&gt;Store these log events in Amazon CloudWatch.&lt;/li&gt;
&lt;li&gt;View the metrics in Amazon CloudWatch Metrics and create graphs on the extracted metric values.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For a detailed walk-through and live demo of this solution, you can watch the video linked below on the &lt;a href="https://www.youtube.com/@iotbuilders"&gt;IoT Builders YouTube channel&lt;/a&gt;:  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Batch Ingestion of IoT Device Metrics into Amazon CloudWatch Metrics&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/d0hauFXRkok"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of this Approach
&lt;/h2&gt;

&lt;p&gt;The benefit of using Basic Ingest is that it optimizes the data flow by removing the AWS IoT Core MQTT broker from the ingestion path, thus eliminating the associated messaging costs. &lt;/p&gt;

&lt;p&gt;On the device side, sending metrics in batches is more efficient and tolerates temporary connectivity loss. The EMF timestamps ensure that metrics are stored in an eventually consistent manner. The integration between the AWS IoT Rules Engine and Amazon CloudWatch happens via a rule action, reducing the need for bespoke code. Another benefit of this approach is that Amazon CloudWatch automatically extracts the metrics from logs.&lt;/p&gt;
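<p>As a sketch of how Basic Ingest addressing works: instead of publishing to a regular topic that a rule subscribes to, the device publishes directly to the rule by name on the reserved <code>$aws/rules/</code> prefix (the rule name <code>emf</code> and the topic suffix below match the device policy shown later in this post):</p>

```javascript
// Basic Ingest topics have the form: $aws/rules/<rule-name>/<optional-suffix>
// The optional suffix remains visible to the rule's SQL statement.
function basicIngestTopic(ruleName, suffix) {
  return suffix ? `$aws/rules/${ruleName}/${suffix}` : `$aws/rules/${ruleName}`;
}

const clientId = "my-device-01"; // example client id
console.log(basicIngestTopic("emf", `${clientId}/logs`));
// -> $aws/rules/emf/my-device-01/logs
```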

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In this post, we will walk through the steps to generate IoT device metrics in the EMF format, like system or OS information at a sampling interval, batch them and ingest them using Basic Ingest, at a chosen reporting interval. This means that metrics are consolidated and sent in batches (independently of the collection time), instead of being routed event by event (one-by-one as they occur). &lt;/p&gt;

&lt;p&gt;Once the batched metrics arrive at the IoT Rule, they are routed to Amazon CloudWatch using &lt;code&gt;batchMode&lt;/code&gt;. &lt;code&gt;batchMode&lt;/code&gt; is a Boolean parameter within the AWS IoT CloudWatch Logs rule action. This parameter is optional and is off (false) by default. To upload device-side log files in batches, you must turn this parameter on (true) when you create the AWS IoT rule.  &lt;/p&gt;

&lt;p&gt;Because the logs follow the EMF specification, Amazon CloudWatch automatically extracts the metric values embedded in structured log events, and we can create graphs and alarms on the extracted metric values. &lt;/p&gt;
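<p>As a minimal sketch of the EMF structure (a trimmed-down version of the full metrics object shown later in this post), a single structured log event pairs an <code>_aws</code> metadata block with the top-level keys it references:</p>

```javascript
// Minimal EMF log event: the "_aws" block tells CloudWatch Logs which
// top-level keys are metric values and how to dimension them.
const emfEvent = {
  _aws: {
    Timestamp: Date.now(), // epoch milliseconds of the sample
    CloudWatchMetrics: [
      {
        Namespace: "iot-device-memory",
        Dimensions: [["thingName"]], // dimension key, resolved below
        Metrics: [{ Name: "free", Unit: "Kilobytes" }]
      }
    ]
  },
  thingName: "my-device-01", // dimension value
  free: 4096                 // metric value CloudWatch extracts
};

// Each event is serialized to a single log line before ingestion.
const logLine = JSON.stringify(emfEvent);
```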

&lt;p&gt;The diagram below shows the ingestion flow: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7RbxhnjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jw9y9lqxtfxm4wl9yiso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7RbxhnjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jw9y9lqxtfxm4wl9yiso.png" alt="IoT Metrics Ingestion" width="800" height="763"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set-up
&lt;/h2&gt;

&lt;p&gt;To run this demo, clone the repo: &lt;a href="https://github.com/aws-iot-builder-tools/emf-metrics-with-iot-rules"&gt;https://github.com/aws-iot-builder-tools/emf-metrics-with-iot-rules&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This repository is composed of two folders: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;app&lt;/code&gt; - containing the application code and configuration.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;infra&lt;/code&gt; - containing the infrastructure CDK code and configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will need both to set up the demo. &lt;/p&gt;

&lt;p&gt;To run the demo, the following steps should be performed:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Deploy the required AWS resources:
&lt;/h3&gt;

&lt;p&gt;The AWS resources for this demo are created and deployed using AWS CDK. The CDK Typescript code creates and deploys: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An IoT device policy, allowing the device to connect to AWS IoT Core and publish data on the Basic Ingest Rule topic. This policy looks as below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iot:Connect"
      ],
      "Resource": [
        "arn:aws:iot:&amp;lt;AWS_REGION&amp;gt;:&amp;lt;AWS_ACCOUNT&amp;gt;:client/${iot:ClientId}"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "iot:Publish"
      ],
      "Resource": [
        "arn:aws:iot:&amp;lt;AWS_REGION&amp;gt;:&amp;lt;AWS_ACCOUNT&amp;gt;:topic/$aws/rules/emf/${iot:ClientId}/logs"
      ],
      "Effect": "Allow"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;An IoT Rule for Basic Ingest, configured with an action to batch ingest log entries into Amazon CloudWatch.&lt;/li&gt;
&lt;li&gt;An IAM Role with the correct IAM policy allowing the IoT Rule to ingest into CloudWatch. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pre-requisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You must have AWS CDK installed and configured with the required credentials for your AWS Account. For help with this, follow the steps in the &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/work-with-cdk-typescript.html"&gt;documentation&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the &lt;code&gt;infra&lt;/code&gt; directory, run &lt;code&gt;npm run build&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;cdk deploy&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  2. Run the IoT Device Simulation Application:
&lt;/h3&gt;

&lt;p&gt;The IoT application connects to AWS IoT Core using an MQTT client implemented with &lt;a href="https://github.com/mqttjs"&gt;MQTT.Js&lt;/a&gt;, reads operating system metrics on a sampling interval of 5 seconds, stores them in an in-memory array and ingests them in batches at a reporting interval of 15 seconds. In the &lt;code&gt;app&lt;/code&gt; folder, there is also a utility function which, in an idempotent manner, creates the IoT thing, certificate and keys.&lt;/p&gt;
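<p>The sample-then-report loop described above can be sketched as follows (a simplified sketch; the real application reads operating system metrics and publishes over MQTT):</p>

```javascript
// Simplified sketch of the sampling/reporting loop: collect a metrics object
// every SAMPLING_MS, flush the accumulated batch every REPORTING_MS.
const SAMPLING_MS = 5000;
const REPORTING_MS = 15000;

const pending = []; // in-memory array of sampled metric objects

function sample(readMetrics) {
  // In the real app, readMetrics() gathers OS stats in EMF format.
  pending.push(readMetrics());
}

function flush(publish) {
  if (pending.length === 0) return 0;
  const message = {
    batch: pending.map((m) => ({
      timestamp: Date.now(),
      message: JSON.stringify(m)
    }))
  };
  publish(message); // in the real app: an MQTT publish with QoS 1
  const count = pending.length;
  pending.length = 0; // reset the buffer for the next reporting interval
  return count;
}

// Wiring with timers (not started here):
// setInterval(() => sample(collectOsMetrics), SAMPLING_MS);
// setInterval(() => flush((m) => client.publish(TOPIC, JSON.stringify(m))), REPORTING_MS);
```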

&lt;p&gt;&lt;strong&gt;Pre-requisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To run the IoT application, you need to ensure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;That your application has the correct credentials to make calls to AWS IoT to create the IoT thing, certificate and keys (more info &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html"&gt;here&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;That the correct &lt;code&gt;AWS Region&lt;/code&gt; is also configured. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fill in &lt;code&gt;config.js&lt;/code&gt; with your IoT endpoint configuration:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const config = {
    iotEndpoint: "&amp;lt;YOUR_AWS_IOT_ENDPOINT&amp;gt;",
    region: "&amp;lt;YOUR_AWS_REGION&amp;gt;"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Run &lt;code&gt;npm install&lt;/code&gt; in the &lt;code&gt;app&lt;/code&gt; directory.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;node app.js&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Metrics Format and Ingestion
&lt;/h2&gt;

&lt;p&gt;On the device, metric values are embedded in structured log events, so that Amazon CloudWatch can automatically extract them. In this example, memory and network metrics are stored in two different namespaces, &lt;code&gt;iot-device-memory&lt;/code&gt; and &lt;code&gt;iot-device-network&lt;/code&gt;. The format looks as follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const statObject = {
    "_aws": {
        "Timestamp": Date.now(),
        "CloudWatchMetrics": [
            {
                "Namespace": "iot-device-memory",
                "Dimensions": [["thingName"]],
                "Metrics": [
                    {
                        "Name": "total",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "free",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "used",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "active",
                        "Unit": "Kb",
                        "StorageResolution": 60
                    }, {
                        "Name": "available",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    },
                ]
            },
            {
                "Namespace": "iot-device-network",
                "Dimensions": [["thingName"]],
                "Metrics": [
                    {
                        "Name": "operstate",
                         "Unit": "String",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "rx_bytes",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "rx_dropped",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "rx_errors",
                        "Unit": "Count",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "tx_bytes",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    }, {
                        "Name": "tx_dropped",
                        "Unit": "Kb",
                        "StorageResolution": 1
                    },
                    {
                        "Name": "tx_errors",
                        "Unit": "Count",
                        "StorageResolution": 1
                    }, {
                        "Name": "ms",
                        "Unit": "Milliseconds",
                        "StorageResolution": 1
                    },
                ]
            }
        ]
    },
    "thingName": CLIENT_ID,
    "total": convertSize(memory.total, "KB"),
    "free": convertSize(memory.free,"KB"),
    "used": convertSize(memory.used,"KB"),
    "active": convertSize(memory.active,"KB"),
    "available": convertSize(memory.available,"KB"),
    "iface": iface,
    "operstate": network_0.operstate,
    "rx_bytes": convertSize(network_0.rx_bytes,"KB"),
    "rx_dropped": convertSize(network_0.rx_dropped,"KB"),
    "rx_errors": network_0.rx_errors,
    "tx_bytes": convertSize(network_0.tx_bytes,"KB"),
    "tx_dropped": convertSize(network_0.tx_dropped,"KB"),
    "tx_errors": network_0.tx_errors,
    "ms": network_0.ms,
    "requestId": v4()
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;convert-size&lt;/code&gt; JavaScript library is used to convert from bytes to kilobytes. Every 5 seconds (the configured sampling interval), a new &lt;code&gt;statObject&lt;/code&gt; entry is collected and added to an in-memory array. Every 15 seconds (the configured reporting interval), a batch object containing the array is constructed and published to the Basic Ingest IoT rule topic, as below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let message = {
                batch: []
            }
            metrics.forEach(metric =&amp;gt; {
                message.batch.push(
                    {"timestamp": Date.now(), "message": JSON.stringify(metric)});
            })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client.publish(METRICS_PUB_TOPIC, JSON.stringify(message), {
                qos: 1,
                properties: {
                    contentType: 'application/json',
                    correlationData: JSON.stringify({messageId: messageId, appId: appId})
                }
            });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;On the AWS cloud-side, an IoT Rule is created as per CDK code below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new CfnTopicRule(this, 'BasicIngestEMFIoTRule', {
            ruleName: 'emf',
            topicRulePayload: {
                actions: [
                    {
                        cloudwatchLogs: {
                            logGroupName: LOG_GROUP_NAME,
                            roleArn: CWRole.roleArn,
                            batchMode: true,
                        }
                    }
                ],
                description: 'IoT Rule',
                sql: `SELECT VALUE *.batch
                      FROM '${METRICS_RULE_TOPIC}'`,
                ruleDisabled: false,
                awsIotSqlVersion: '2016-03-23'
            }
        });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
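<p>The <code>SELECT VALUE *.batch</code> clause makes the rule output the batch array itself, rather than an object wrapping it, so that each element can become its own CloudWatch Logs entry. Conceptually:</p>

```javascript
// Conceptual sketch (not the Rules Engine implementation) of what
// `SELECT VALUE *.batch` does to an incoming message: it unwraps the
// batch array for the cloudwatchLogs action running in batchMode.
function selectValueBatch(message) {
  return message.batch;
}

const incoming = {
  batch: [
    { timestamp: 1700000000000, message: '{"free":4096}' },
    { timestamp: 1700000005000, message: '{"free":4090}' }
  ]
};

console.log(selectValueBatch(incoming).length); // 2 log entries to ingest
```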


&lt;p&gt;The video below shows the application log for storing and publishing the metrics in batches:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S_gBoXQ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gnf8g765x0951e911gv9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S_gBoXQ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gnf8g765x0951e911gv9.gif" alt="EMF generation IoT" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Amazon CloudWatch Logs and CloudWatch Metrics
&lt;/h2&gt;

&lt;p&gt;Upon batch ingestion, each item in the batch will be stored as a separate entry in CloudWatch logs. Because the format used is EMF, the metrics are extracted by Amazon CloudWatch and available in CloudWatch Metrics to create charts and alarms. Below is a video of how the log entries look with the LiveTail view: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PnKo5Bcw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2w50y3w8zqpmzdm7d9ug.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PnKo5Bcw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2w50y3w8zqpmzdm7d9ug.gif" alt="Tail CW" width="800" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each element in the batch sent from the IoT device is stored as a separate log entry, with the correct reporting timestamp passed from the device. The sampling timestamps for the metrics become relevant when the metrics are extracted. Below is a log entry view from LiveTail:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cfBMrGuu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0tef3jjb2sdly34vafq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cfBMrGuu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0tef3jjb2sdly34vafq.png" alt="LiveTail view" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigating to Amazon CloudWatch Metrics, you can see the two newly created namespaces with thing name as the dimension. By clicking one of the namespaces, you can plot the metrics values over time, with the desired aggregations, as shown in the video below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mdBV_nPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w0arct4rfviqug6e10mo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mdBV_nPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w0arct4rfviqug6e10mo.gif" alt="Metrics Graphs" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we looked at how to leverage Basic Ingest and the AWS IoT CloudWatch Logs action in batch mode to ingest and route device metrics in &lt;code&gt;EMF&lt;/code&gt; format. The benefit of this approach is that Amazon CloudWatch automatically extracts the metrics from logs. Additionally, ingestion costs are reduced by using Basic Ingest. In case of temporary loss of connectivity, the batching functionality ensures eventual data consistency.&lt;/p&gt;

&lt;p&gt;Once the metrics are in the cloud, they can be viewed in CloudWatch metrics, to prepare charts or set alarms. For more information on uploading IoT device logs to Amazon CloudWatch, have a look at the &lt;a href="https://docs.aws.amazon.com/iot/latest/developerguide/upload-device-logs-to-cloudwatch.html"&gt;developer documentation&lt;/a&gt;. To learn more about EMF, check the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html"&gt;specification&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;To get notified about more IoT content, you can additionally subscribe to the &lt;a href="https://www.youtube.com/@iotbuilders"&gt;IoT Builders YouTube channel&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;


&lt;div class="ltag__user ltag__user__id__934452"&gt;
    &lt;a href="/fay_ette" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fgnop5T5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://res.cloudinary.com/practicaldev/image/fetch/s--ZqZiFyKC--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/934452/c6e04565-b2d8-4af5-9f35-32989d5e7b88.jpg" alt="fay_ette image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/fay_ette"&gt;Alina Dima&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/fay_ette"&gt;An engineer for about 20 years, love solving real world problems with code. I simplify complex problems to help developer communities build better and faster with AWS IoT. &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Enriching Payloads with MQTT 5 Metadata, using AWS IoT Core Rules Engine</title>
      <dc:creator>Alina Dima</dc:creator>
      <pubDate>Thu, 08 Jun 2023 13:18:13 +0000</pubDate>
      <link>https://dev.to/iotbuilders/enriching-payloads-with-mqtt-5-metadata-using-aws-iot-core-rules-engine-4oh</link>
      <guid>https://dev.to/iotbuilders/enriching-payloads-with-mqtt-5-metadata-using-aws-iot-core-rules-engine-4oh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This blog post explains how to build a Vehicle Command Log Store to keep track of command requests and responses sent to vehicles by application clients. To this purpose, we extract MQTT 5 metadata from messages published using MQTT v5 and enrich payloads via an AWS IoT Core Rules Engine rule. Processed data is stored in an Amazon DynamoDB table.&lt;/p&gt;

&lt;p&gt;The goal is to use as little bespoke code on the cloud side as possible and lean on native integrations, like the IoT rule with the DynamoDB action. &lt;/p&gt;

&lt;p&gt;For a detailed walk-through and live demo of this solution, you can watch the video linked below on the &lt;a href="https://www.youtube.com/@iotbuilders" rel="noopener noreferrer"&gt;IoT Builders YouTube channel&lt;/a&gt;:  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Enriching Payloads with MQTT 5 Metadata&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/acUzYWCYK0M"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  MQTT 5 Request/Response Pattern
&lt;/h2&gt;

&lt;p&gt;As part of this blog post, we are exploring a feature of MQTT 5: the &lt;code&gt;Request/Response pattern&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;The Request/Response messaging pattern is a method to track responses to client requests in an asynchronous way. It is a mechanism in MQTT v5 that allows a publisher to specify, per request, the topic on which the response should be sent. When the subscriber receives the request, it also receives the topic to publish the response on. The pattern additionally supports a correlation data field that allows tracking of packets, e.g. request or device identification parameters.&lt;/p&gt;

&lt;p&gt;Let's look at an example: &lt;/p&gt;

&lt;p&gt;We want to send commands to vehicles over MQTT from client applications. The flow is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Vehicles subscribe to the request topic, and client applications subscribe to their chosen response topics, which can depend on the application instance id, for example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;App clients publish requests on the request topic. In our case, the payload is a simple text &lt;code&gt;DOOR_LOCK&lt;/code&gt; indicating the command to lock the vehicle doors. Following the MQTT 5 pattern, in addition to the payload, we are sending metadata like the &lt;code&gt;Response Topic&lt;/code&gt;, &lt;code&gt;Content Type&lt;/code&gt; and &lt;code&gt;Correlation Data&lt;/code&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a &lt;code&gt;request id&lt;/code&gt;, &lt;/li&gt;
&lt;li&gt;a &lt;code&gt;timestamp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;a &lt;code&gt;user id&lt;/code&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This data helps correlate MQTT requests and their responses. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;As the device receives the MQTT message, it executes the command and publishes the response on the specified response topic. Upon publishing, it re-sends the correlation data, but also appends response-specific information, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a &lt;code&gt;response timestamp&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;command&lt;/code&gt; it is responding to (marked in red on the diagram below).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The app instance receives this information on the response topic it subscribed to previously. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
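<p>The request publish in steps 1 and 2 above can be sketched with MQTT.js v5 properties as below. The topic names and the <code>buildCommandRequest</code> helper are illustrative, not taken from the demo code; <code>responseTopic</code>, <code>correlationData</code> and <code>contentType</code> are the MQTT 5 properties MQTT.js exposes in its publish options:</p>

```javascript
// Illustrative helper building an MQTT 5 command request; topic names and
// identifiers are examples, not from the demo repository.
function buildCommandRequest(appInstanceId, userId) {
  return {
    topic: "vehicle/command/request", // example request topic
    payload: "DOOR_LOCK",
    options: {
      qos: 1,
      properties: {
        contentType: "text/plain",
        // The vehicle publishes its reply to this topic:
        responseTopic: `vehicle/command/response/${appInstanceId}`,
        // Correlation data ties the response back to this request:
        correlationData: Buffer.from(
          JSON.stringify({
            requestId: "req-001", // example request id
            timestamp: Date.now(),
            userId
          })
        )
      }
    }
  };
}

const req = buildCommandRequest("app-42", "user-7");
// With an MQTT 5 client: client.publish(req.topic, req.payload, req.options);
```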

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37hsgcrgulwurxwzb5qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37hsgcrgulwurxwzb5qw.png" alt="MQTT 5 Example Flow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites and Approach
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To recreate this demo locally, you need the following already in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An AWS Account with permissions to create IoT resources.&lt;/li&gt;
&lt;li&gt;Two created AWS IoT things for the app and car simulators, with locally stored certificates and keys to connect to AWS IoT Core.&lt;/li&gt;
&lt;li&gt;An IoT Core policy allowing connections, subscriptions and data publishing for both IoT Things on the configured topics.&lt;/li&gt;
&lt;li&gt;An Amazon DynamoDB table, created beforehand, with a unique string identifier &lt;code&gt;msgId&lt;/code&gt; as its primary key. &lt;/li&gt;
&lt;/ul&gt;
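For the last prerequisite, the table could be created with the AWS SDK for JavaScript v3 as sketched below. The table name and region are placeholders, and the AWS CLI or console work equally well:

```javascript
const { DynamoDBClient, CreateTableCommand } = require('@aws-sdk/client-dynamodb');

// Placeholder region and table name; adjust to your environment.
const client = new DynamoDBClient({ region: 'eu-west-1' });

async function createCommandLogTable() {
  // msgId is the unique string primary key the IoT Rule will populate.
  await client.send(new CreateTableCommand({
    TableName: 'VehicleCommandLog',
    AttributeDefinitions: [{ AttributeName: 'msgId', AttributeType: 'S' }],
    KeySchema: [{ AttributeName: 'msgId', KeyType: 'HASH' }],
    BillingMode: 'PAY_PER_REQUEST'
  }));
}
```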

&lt;p&gt;The following sections walk through the required steps: &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Setting up the MQTT.js clients
&lt;/h3&gt;

&lt;p&gt;First, we build two client simulators, one for the car and one for the application client. We use &lt;a href="https://github.com/mqttjs/MQTT.js/" rel="noopener noreferrer"&gt;MQTT.js for JavaScript&lt;/a&gt; and its MQTT v5 implementation support. This step describes the details of the MQTT client implementation. &lt;/p&gt;

&lt;p&gt;Upon running, the application simulator connects, subscribes to the response topic and stays connected. Every 10 seconds, it publishes a &lt;code&gt;DOOR_LOCK&lt;/code&gt; text command. Upon running, the car simulator connects and stays connected; when it receives a command from the app client, it simulates execution with a JavaScript timer and sends back a &lt;code&gt;DOOR_LOCK_SUCCESS&lt;/code&gt; response. &lt;/p&gt;

&lt;p&gt;The MQTT Request/Response pattern implementation is exemplified in the code snippets below:&lt;/p&gt;

&lt;h4&gt;
  
  
  Car Simulator
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Connection options and connection creation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//create options for MQTT v5 client
        const options = {
            clientId: CLIENT_ID,
            host: ENDPOINT,
            port: PORT,
            protocol: 'mqtts',
            protocolVersion: 5,
            cert: fs.readFileSync(CERT_FILE),
            key: fs.readFileSync(KEY_FILE),
            reconnectPeriod: 0,
            enableTrace: false
        }
        //connect
        const client = mqtt.connect(options);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;MQTT Event Handlers implementation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client.on('connect', (packet) =&amp;gt; {
            console.log('connected');
            console.log('subscribing to', SUB_TOPIC);
            client.subscribe(SUB_TOPIC);
        });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//handle Messages
        client.on('message', (topic, message, properties) =&amp;gt; {
            console.log('Received Message',message.toString());
            console.log('Received Message Properties', properties );

            if(message &amp;amp;&amp;amp; message.toString() === 'DOOR_LOCK' ) {
                console.log('Executing', JSON.stringify(message));
                setTimeout(() =&amp;gt; {
                    const response = 'DOOR_LOCK_SUCCESS';
                    const responseTopic = properties.properties.responseTopic;
                    console.log("ResponseTopic: ", responseTopic);
                    console.log('Publishing Response: ', response," to Topic: ", responseTopic.toString());
                    const cDataString = properties.properties.correlationData.toString();
                    console.log('Correlation Data String: ', cDataString)
                    let correlationDataJSON = JSON.parse(cDataString);
                    correlationDataJSON.resp_ts = new Date().toISOString();
                    correlationDataJSON.cmd = message.toString();

                    client.publish(responseTopic.toString(), response, {
                        qos: 1,
                        properties: {
                            contentType: 'text/plain',
                            correlationData: JSON.stringify(correlationDataJSON)
                        }
                    }, (error, packet) =&amp;gt; {
                          //ERROR Handlers here.
                    });
                }, 5000);
            }
        });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  App Client Simulator
&lt;/h4&gt;

&lt;p&gt;Connection settings are the same as above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MQTT Event Handlers implementation:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//handle the message
        client.on('message', (topic, message,  properties) =&amp;gt; {
        //Only log for now.
            console.log('Received message: ', message.toString());
            console.log('Received message properties: ', properties );   
        });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Publish the command on Interval for testing purposes:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//publish
        setInterval(() =&amp;gt; {
            const requestId = v4();
            // const message = JSON.stringify({ping: 'pong'});
            console.log('Publishing message ');
            client.publish(PUB_TOPIC, 'DOOR_LOCK', {
                    qos: 1, properties: {
                        responseTopic: SUB_TOPIC,
                        contentType: 'text/plain',
                        correlationData: JSON.stringify({
                            requestId: requestId,
                            userId: userId,
                            req_ts: new Date().toISOString()
                        })
                    }
                }
                , (error, packet) =&amp;gt; {
                    console.log(JSON.stringify(packet))
                })
        }, 10000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 2: AWS IoT Rule with Amazon DynamoDB Action
&lt;/h3&gt;

&lt;p&gt;An AWS IoT Rule can be used to enrich the message payload with the MQTT 5 response topic information, as well as the correlation data and content type. We are interested in both command requests and responses, each published to a different MQTT topic. Therefore, the IoT Rule we create must select data from both topics, as shown in the diagram below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcy2ak292vbqrvx5x9sb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcy2ak292vbqrvx5x9sb.png" alt="Iot Rule for MQTT 5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because we are building a &lt;em&gt;Vehicle Command Log Store&lt;/em&gt; and in this scenario such payloads are sent with content-type &lt;code&gt;text/plain&lt;/code&gt;, we are filtering only for such messages in the Rule SQL statement. &lt;/p&gt;

&lt;p&gt;As the desired behavior is storing the command messages and metadata in an Amazon DynamoDB table, the rule prepares a new JSON object to be stored. Each key will be stored as a separate column in Amazon DynamoDB. A new unique message identifier (&lt;code&gt;msgId&lt;/code&gt;) is created to serve as the primary key of the table. For extracting the MQTT 5 metadata, the Rule SQL uses the &lt;a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html" rel="noopener noreferrer"&gt;&lt;code&gt;get_mqtt_property(name)&lt;/code&gt;&lt;/a&gt; SQL function. Encoding/decoding functions are used to convert data from and to &lt;code&gt;base64&lt;/code&gt;-encoded strings. &lt;/p&gt;

&lt;p&gt;The Rule Action to be specified is Amazon DynamoDB, pointing to the previously created table. &lt;/p&gt;

&lt;p&gt;The IoT Rule SQL is shown below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT 
{"msgId": newuuid(), 
"name": decode(encode(*, 'base64'), 'base64'), 
"requestId": decode(get_mqtt_property('correlation_data'), 'base64').requestId, "req_ts": decode(get_mqtt_property('correlation_data'), 'base64').req_ts, 
"cmd": decode(get_mqtt_property('correlation_data'), 'base64').cmd, 
"resp_ts": decode(get_mqtt_property('correlation_data'), 'base64').resp_ts, 
"userId": decode(get_mqtt_property('correlation_data'), 'base64').userId, "responseTopic": get_mqtt_property('response_topic') } 
FROM 'cmd/#' 
where get_mqtt_property('content_type') = 'text/plain'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
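To build intuition for the encode/decode round trip in the SQL above, the equivalent transformation can be mirrored in Node.js (illustrative only; the real `encode`/`decode` run inside the Rules Engine, and the payload values here are hypothetical):

```javascript
// The correlation data reaches the Rules Engine as a base64 string;
// decode(..., 'base64') recovers the JSON text the client published.
const published = JSON.stringify({ requestId: 'abc-123', userId: 'user-1' });

// Equivalent of encode(*, 'base64') in the Rule SQL:
const encoded = Buffer.from(published, 'utf8').toString('base64');

// Equivalent of decode(..., 'base64'), after which fields such as
// .requestId can be addressed directly, as the rule does:
const decoded = JSON.parse(Buffer.from(encoded, 'base64').toString('utf8'));
```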


&lt;p&gt;After running the simulation for a few minutes, you should see your Amazon DynamoDB table populated with data entries, as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faefzom8neaq4ch4il4f6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faefzom8neaq4ch4il4f6.png" alt="DynamoDB Entries"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This blog post shows how to use the MQTT 5 IoT Rules SQL functions to extract MQTT 5 metadata from messages and use them to enrich device payloads. One of the design goals was to achieve enrichment and storage with native cloud integrations and no bespoke message processing code.&lt;/p&gt;

&lt;p&gt;The demonstrated Vehicle Command Log Store use-case is just an example, to explore the art of the possible. For more information about the AWS IoT Core Rules Engine SQL functions for MQTT 5 support, have a look at the &lt;a href="https://docs.aws.amazon.com/iot/latest/developerguide/iot-sql-functions.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;If you are interested in a detailed walk-through and live demo of this solution, you can watch the &lt;a href="https://youtu.be/acUzYWCYK0M" rel="noopener noreferrer"&gt;YouTube video&lt;/a&gt;. To get notified about more IoT content, you can subscribe to the &lt;a href="https://www.youtube.com/@iotbuilders" rel="noopener noreferrer"&gt;IoT Builders YouTube channel&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;


&lt;div class="ltag__user ltag__user__id__934452"&gt;
    &lt;a href="/fay_ette" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F934452%2Fc6e04565-b2d8-4af5-9f35-32989d5e7b88.jpg" alt="fay_ette image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/fay_ette"&gt;Alina Dima&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/fay_ette"&gt;An engineer for about 20 years, love solving real world problems with code. I simplify complex problems to help developer communities build better and faster with AWS IoT. &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
      <category>mqtt5</category>
      <category>iot</category>
      <category>awsiot</category>
      <category>javascript</category>
    </item>
    <item>
      <title>AWS IoT Greengrass Components Deployment with GDK using GitHub Actions</title>
      <dc:creator>Nenad Ilic</dc:creator>
      <pubDate>Tue, 06 Jun 2023 11:01:34 +0000</pubDate>
      <link>https://dev.to/iotbuilders/aws-iot-greengrass-components-deployment-with-gdk-using-github-actions-245m</link>
      <guid>https://dev.to/iotbuilders/aws-iot-greengrass-components-deployment-with-gdk-using-github-actions-245m</guid>
      <description>&lt;p&gt;As IoT deployments grow, a set of distinct challenges arise when it comes application consistency across a broad spectrum of devices: efficiently managing update rollbacks, and maintaining comprehensive, auditable change logs can all present significant obstacles. However, these challenges can be effectively managed using AWS IoT Greengrass and the Greengrass Development Kit (GDK).&lt;/p&gt;

&lt;p&gt;AWS IoT Greengrass enables the deployment of applications directly to your IoT devices, simplifying the management and control of device-side software. The GDK further enhances this capability by streamlining the configuration and packaging of these applications for deployment, making it easier to get your applications onto your devices.&lt;/p&gt;

&lt;p&gt;To create a robust and efficient system, we can also introduce GitOps methodologies. GitOps leverages version control, automated deployments, and continuous monitoring to improve the reliability and efficiency of your deployment processes. By using these methodologies with GitHub Actions, we can automate the deployment process, triggering it with every commit or merge to a specified branch.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll explore how these technologies and methodologies can be combined to create a powerful, scalable, and automated IoT deployment system. We'll walk through the setup of the GDK, the definition and deployment of Greengrass components, and the setup of a GitHub Actions workflow to automate the entire process. &lt;/p&gt;

&lt;p&gt;Let’s dive in and get started with this setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The AWS IoT Greengrass Development Kit Command-Line Interface (GDK CLI) is an open-source tool designed to streamline the creation, building, and publishing of custom Greengrass components. It simplifies the version management process, allows starting projects from templates or community components, and can be customized to meet specific development needs.&lt;/p&gt;

&lt;p&gt;It can also be effectively utilized with GitHub Actions to automate the process of building and publishing Greengrass components. This could be a crucial part of a Continuous Integration/Continuous Deployment (CI/CD) pipeline. Here's a general outline of how this could be done:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Install Dependencies&lt;/strong&gt;: Create a GitHub Actions workflow file (e.g., &lt;code&gt;.github/workflows/main.yml&lt;/code&gt;) and start with a job that sets up the necessary environment. This includes installing Python and pip (since GDK CLI is a Python tool), AWS CLI, and the GDK CLI itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure AWS Credentials&lt;/strong&gt;: Use GitHub Secrets to securely store your AWS credentials (Access Key ID and Secret Access Key). In your workflow, configure AWS CLI with these credentials so that the GDK CLI can interact with your AWS account.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build and Publish Components&lt;/strong&gt;: Use GDK CLI commands in your workflow to build and publish your components. For example, you might have steps that run commands like &lt;code&gt;gdk component build&lt;/code&gt; and &lt;code&gt;gdk component publish&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrate with Other Workflows&lt;/strong&gt;: If you have other workflows in your CI/CD pipeline (such as running tests or deploying to other environments), you can use the output of the GDK CLI commands as inputs to these workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this way, every time you push a change to your Greengrass component source code on GitHub, the GDK CLI can automatically build and publish the updated component, ensuring that your Greengrass deployments are always using the latest version of your components.&lt;/p&gt;

&lt;p&gt;Furthermore, we can initiate a Greengrass deployment based on the deployment template in the next stage of the pipeline, after a successful build and publish. This can target a specific Thing Group, enabling us to reflect changes across a fleet of devices. With this approach, if we have dev and test branches, each of these can be mapped to selected Thing Groups. This allows us to perform field validation on selected devices, providing an efficient way to test changes in a controlled environment before wider deployment.&lt;/p&gt;
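The branch-to-Thing-Group mapping described above could be sketched as follows (the group names are hypothetical; use your fleet's naming scheme):

```javascript
// Map Git branches to the Thing Group a deployment should target,
// so dev and test branches reach their own validation devices.
const branchToThingGroup = {
  dev: 'GreengrassDevDevices',
  test: 'GreengrassTestDevices',
  main: 'GreengrassProdDevices'
};

function targetGroupFor(branch) {
  // Unknown (e.g. feature) branches fall back to the dev group.
  return branchToThingGroup[branch] || branchToThingGroup.dev;
}
```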

&lt;p&gt;For further details on creating GitHub Actions workflows, refer to the &lt;a href="https://docs.github.com/en/actions"&gt;GitHub Actions documentation&lt;/a&gt;. For more information about using the GDK CLI, please refer to the &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-development-kit-cli.html"&gt;GDK CLI documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development Project Setup
&lt;/h2&gt;

&lt;p&gt;Based on the high-level overview, we can structure the project as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project
├── .github
│   └── workflows
│       └── main.yml
├── cfn
│   └── github-oidc
│       ├── oidc-provider.yaml
│       └── oidc-role.yaml
└── components
    ├── com.example.hello
    │   ├── gdk-config.json
    │   ├── main.py
    │   └── recipe.yaml
    ├── com.example.world
    │   ├── gdk-config.json
    │   ├── main.py
    │   └── recipe.yaml
    └── deployment.json.template
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where we have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.github/workflows/main.yml&lt;/code&gt;: The GitHub Actions workflow file where the CI/CD pipeline is defined. The GDK CLI and AWS CLI setup, component building, publishing, and deployment tasks are all defined here.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cfn/github-oidc&lt;/code&gt;: This directory contains AWS CloudFormation templates (&lt;code&gt;oidc-provider.yaml&lt;/code&gt; and &lt;code&gt;oidc-role.yaml&lt;/code&gt;) that are used to set up an OIDC provider and role on AWS for authenticating GitHub Actions with AWS.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;components&lt;/code&gt;: This directory contains the Greengrass components (&lt;code&gt;com.example.hello&lt;/code&gt; and &lt;code&gt;com.example.world&lt;/code&gt;) that you are developing. Each component has its own directory with:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;gdk-config.json&lt;/code&gt;: This is the configuration file for the GDK CLI for the specific component.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;main.py&lt;/code&gt;: This is the main Python script file for the component's functionality.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;recipe.yaml&lt;/code&gt;: This is the component recipe that describes the component and its dependencies, lifecycle scripts, etc.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deployment.json.template&lt;/code&gt;: This is a deployment template file for Greengrass deployments. It is used to generate the actual deployment file (&lt;code&gt;deployment.json&lt;/code&gt;) that is used when initiating a Greengrass deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GitHub Actions and GDK Deployment Role Setup
&lt;/h2&gt;

&lt;p&gt;The CloudFormation template will be used to create an IAM role (&lt;code&gt;oidc-gdk-deployment&lt;/code&gt;) that provides the necessary permissions for building and deploying Greengrass components using GDK CLI and GitHub Actions. The role has specific policies attached that allow actions such as describing and creating IoT jobs, interacting with an S3 bucket for Greengrass component artifacts, and creating Greengrass components and deployments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: 2010-09-09
Description: 'GitHub OIDC:| Stack: oidc'

Parameters:
  FullRepoName:
    Type: String
    Default: example/gdk-example

Resources:
  Role:
    Type: AWS::IAM::Role
    Properties:
      RoleName: oidc-gdk-deployment
      Policies:
        - PolicyName: iot-thing-group
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - iot:DescribeThingGroup
                  - iot:CreateJob
                Resource:
                  - !Sub arn:aws:iot:${AWS::Region}:${AWS::AccountId}:thinggroup/*
        - PolicyName: iot-jobs
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - iot:DescribeJob
                  - iot:CreateJob
                  - iot:CancelJob
                Resource:
                  - !Sub arn:aws:iot:${AWS::Region}:${AWS::AccountId}:job/*
        - PolicyName: s3-greengrass-bucket
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:CreateBucket
                  - s3:GetBucketLocation
                  - s3:ListBucket
                Resource:
                  - !Sub arn:aws:s3:::greengrass-component-artifacts-${AWS::Region}-${AWS::AccountId}
        - PolicyName: s3-greengrass-components
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                Resource:
                  - !Sub arn:aws:s3:::greengrass-component-artifacts-${AWS::Region}-${AWS::AccountId}/*
        - PolicyName: greengrass-components
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - greengrass:CreateComponentVersion
                  - greengrass:ListComponentVersions
                Resource:
                  - !Sub arn:aws:greengrass:${AWS::Region}:${AWS::AccountId}:components:*
        - PolicyName: greengrass-deployment
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - greengrass:CreateDeployment
                Resource:
                  - !Sub arn:aws:greengrass:${AWS::Region}:${AWS::AccountId}:deployments
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Action: sts:AssumeRoleWithWebIdentity
            Principal:
              Federated: !Sub arn:aws:iam::${AWS::AccountId}:oidc-provider/token.actions.githubusercontent.com
            Condition:
              StringLike:
                token.actions.githubusercontent.com:sub: !Sub repo:${FullRepoName}:*

Outputs:
  OidcRoleAwsAccountId:
    Value: !Ref AWS::AccountId
  OidcRoleAwsRegion:
    Value: !Ref AWS::Region
  OidcRoleAwsRoleToAssume:
    Value: !GetAtt Role.Arn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;FullRepoName&lt;/code&gt; parameter is used to specify the repository that the GitHub Actions workflow will be running in. This is important for the &lt;code&gt;sts:AssumeRoleWithWebIdentity&lt;/code&gt; action in the &lt;code&gt;AssumeRolePolicyDocument&lt;/code&gt;, which allows GitHub Actions to assume this IAM role for the specified repository. &lt;/p&gt;

&lt;p&gt;To deploy this CloudFormation stack, you can use the AWS Management Console, the AWS CLI, or an AWS SDK. You need to specify the &lt;code&gt;FullRepoName&lt;/code&gt; parameter as an input when you create the stack. For example, with the AWS CLI, you would use the &lt;code&gt;aws cloudformation deploy&lt;/code&gt; command and provide the template file and the &lt;code&gt;FullRepoName&lt;/code&gt; parameter as inputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation deploy \
    --template-file cfn/github-oidc/oidc-role.yaml \
    --stack-name ga-gdk-role \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameter-overrides FullRepoName=&amp;lt;your org&amp;gt;/&amp;lt;your repo name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup lays the foundation for the GitHub Actions and GDK CLI to work together to automate the building and deployment of Greengrass components.&lt;/p&gt;

&lt;p&gt;If you would like to use an OIDC provider for GitHub Actions (recommended), you also need to set it up in your AWS account. Please note that this is needed only once per account and region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation deploy \
    --template-file cfn/github-oidc/oidc-provider.yaml \
    --stack-name oidc-provider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can go and prepare our Greengrass components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Greengrass Components GDK Setup
&lt;/h2&gt;

&lt;p&gt;Here we focus on the configuration and implementation of our Greengrass components. These components form the core of our IoT solution, providing the required functionality on our Greengrass core devices.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;gdk-config.json&lt;/code&gt; file is where we configure our Greengrass component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "component": {
      "com.example.hello": {
        "author": "Example",
        "version": "NEXT_PATCH",
        "build": {
          "build_system": "zip"
        },
        "publish": {
          "bucket": "greengrass-component-artifacts",
          "region": "eu-west-1"
        }
      }
    },
    "gdk_version": "1.2.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the provided example, we have a single component named &lt;code&gt;com.example.hello&lt;/code&gt;. This file specifies the author, the build system (set to &lt;code&gt;zip&lt;/code&gt; here), and the Amazon S3 bucket details where the component will be published. The version field is set to &lt;code&gt;NEXT_PATCH&lt;/code&gt;, which means GDK will automatically increment the patch version of the component every time it is built.&lt;/p&gt;
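The NEXT_PATCH behavior can be pictured with a small sketch. This is a simplification: GDK itself resolves the next version against the component versions already published in your AWS account.

```javascript
// Simplified view of NEXT_PATCH: bump the patch segment of the
// latest published semantic version (e.g. 1.0.3 becomes 1.0.4).
function nextPatch(latestVersion) {
  const [major, minor, patch] = latestVersion.split('.').map(Number);
  return major + '.' + minor + '.' + (patch + 1);
}
```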

&lt;p&gt;The &lt;code&gt;recipe.yaml&lt;/code&gt; file is the recipe for our Greengrass component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
RecipeFormatVersion: "2020-01-25"
ComponentName: "{COMPONENT_NAME}"
ComponentVersion: "{COMPONENT_VERSION}"
ComponentDescription: "This is a simple Hello World component written in Python."
ComponentPublisher: "{COMPONENT_AUTHOR}"
ComponentConfiguration:
  DefaultConfiguration:
    Message: "Hello"
Manifests:
  - Platform:
      os: all
    Artifacts:
      - URI: "s3://BUCKET_NAME/COMPONENT_NAME/COMPONENT_VERSION/com.example.hello.zip"
        Unarchive: ZIP
    Lifecycle:
      Run: "python3 -u {artifacts:decompressedPath}/com.example.hello/main.py {configuration:/Message}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It contains essential metadata about the component, like its name, version, description, and publisher. It also specifies the default configuration, which, in this case, sets the default message to "Hello". The Manifests section describes the component's artifacts and the lifecycle of the component. In this instance, it specifies the location of the component's zipped artifacts and the command to run the component.&lt;/p&gt;

&lt;p&gt;Finally, the &lt;code&gt;main.py&lt;/code&gt; file is the Python script that our Greengrass component runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import sys

message = f"Hello, {sys.argv[1]}!"

# Print the message to stdout, which Greengrass saves in a log file.
print(message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script simply prints out a greeting message. The message is constructed from the argument passed to the script, which comes from the component configuration in the &lt;code&gt;recipe.yaml&lt;/code&gt; file. This setup demonstrates how you can pass configuration to your Greengrass components, as seen in the &lt;code&gt;com.example.world&lt;/code&gt; example, where we provide "World" as the configuration message.&lt;/p&gt;

&lt;p&gt;In addition to the component configuration and scripts, we also define a &lt;code&gt;deployment.json.template&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "targetArn": "arn:aws:iot:$AWS_REGION:$AWS_ACCOUNT_ID:thinggroup/$THING_GROUP",
    "deploymentName": "Main deployment",
    "components": {
        "com.example.hello": {
            "componentVersion": "LATEST",
            "runWith": {}
        },
        "com.example.world": {
            "componentVersion": "LATEST",
            "runWith": {}
        }
    },
    "deploymentPolicies": {
        "failureHandlingPolicy": "ROLLBACK",
        "componentUpdatePolicy": {
            "timeoutInSeconds": 60,
            "action": "NOTIFY_COMPONENTS"
        },
        "configurationValidationPolicy": {
            "timeoutInSeconds": 60
        }
    },
    "iotJobConfiguration": {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file specifies the deployment configuration for our Greengrass components. The &lt;code&gt;targetArn&lt;/code&gt; is the Amazon Resource Name (ARN) of the Greengrass thing group where we aim to deploy our components. In this example, we're deploying two components, &lt;code&gt;com.example.hello&lt;/code&gt; and &lt;code&gt;com.example.world&lt;/code&gt;, both set to use their latest versions. The &lt;code&gt;deploymentPolicies&lt;/code&gt; section sets the policies for failure handling, component update, and configuration validation. This file is vital as it governs how the deployment of our Greengrass components is handled in the target IoT devices.&lt;/p&gt;

&lt;p&gt;Please note that this is a template; the pipeline uses it to substitute the target ARN accordingly.&lt;/p&gt;
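That substitution step might look like the following in Node.js (a sketch only; the actual workflow could equally use `envsubst` or `sed`):

```javascript
// Replace the $AWS_REGION, $AWS_ACCOUNT_ID and $THING_GROUP
// placeholders from deployment.json.template with concrete values.
function renderDeployment(template, vars) {
  return template.replace(/\$(AWS_REGION|AWS_ACCOUNT_ID|THING_GROUP)/g,
    function (match, name) { return vars[name]; });
}
```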

&lt;p&gt;Taken together, these files form the basis of a Greengrass component. By modifying these templates and scripts, you can create your own custom Greengrass components with GDK. The next step is to set up GitHub Actions to automate the build and deployment of these components.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Actions Setup
&lt;/h2&gt;

&lt;p&gt;This GitHub workflow file sets up two jobs, namely &lt;code&gt;publish&lt;/code&gt; and &lt;code&gt;deploy&lt;/code&gt;, which are run when either a push to the &lt;code&gt;main&lt;/code&gt; branch occurs or the workflow is manually dispatched.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Publish Job
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;publish&lt;/code&gt; job runs on the &lt;code&gt;ubuntu-latest&lt;/code&gt; environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  publish:
    name: Component publish
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read

    steps:
    - name: Checkout
      uses: actions/checkout@v3
      with:
        fetch-depth: 0
        ref: ${{ github.head_ref }}
    - uses: actions/setup-python@v3
      with:
        python-version: '3.9'

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        role-to-assume: ${{ secrets.OIDC_ROLE_AWS_ROLE_TO_ASSUME }}
        aws-region: ${{ secrets.OIDC_ROLE_AWS_REGION }}

    - name: Install Greengrass Development Kit
      run: pip install -U git+https://github.com/aws-greengrass/aws-greengrass-gdk-cli.git@v1.2.3

    - name: GDK Build and Publish
      id: build_publish
      run: |

        CHANGED_COMPONENTS=$(git diff --name-only HEAD~1 HEAD | grep "^components/" | cut -d '/' -f 2)

        echo "Components changed -&amp;gt; $CHANGED_COMPONENTS"        

        for component in $CHANGED_COMPONENTS
        do
          cd $component
          echo "Building $component ..."
          gdk component build
          echo "Publishing $component ..."
          gdk component publish
          cd ..
        done

      working-directory: ${{ env.working-directory }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We start by checking out the code from the repository and setting up Python 3.9. The AWS credentials are then configured using the &lt;code&gt;aws-actions/configure-aws-credentials@v1&lt;/code&gt; action. The credentials used here are fetched from the GitHub secrets &lt;code&gt;OIDC_ROLE_AWS_ROLE_TO_ASSUME&lt;/code&gt; and &lt;code&gt;OIDC_ROLE_AWS_REGION&lt;/code&gt;. The &lt;code&gt;OIDC_ROLE_AWS_ROLE_TO_ASSUME&lt;/code&gt; secret should contain the ARN of the AWS role that GitHub Actions should assume when executing the workflow. This is the role we created in the first step; we can obtain its ARN by executing the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation describe-stacks --stack-name ga-gdk-role --query 'Stacks[0].Outputs[?OutputKey==`OidcRoleAwsRoleToAssume`].OutputValue' --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;OIDC_ROLE_AWS_REGION&lt;/code&gt; secret should contain the AWS region where your resources are located. These secrets then need to be added under &lt;code&gt;github.com/&amp;lt;org&amp;gt;/&amp;lt;repo&amp;gt;/settings/secrets/actions&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, the Greengrass Development Kit (GDK) is installed using pip. The GDK CLI is used to build and publish any components that have changed between the current and previous commit. &lt;/p&gt;

&lt;p&gt;The changed components are identified by looking at the differences between the current and previous commit and extracting the component names. Here it is important that the folder name matches the component name.&lt;/p&gt;
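&lt;p&gt;The extraction step can be sketched in isolation. The file paths below are illustrative stand-ins for what &lt;code&gt;git diff --name-only&lt;/code&gt; would print:&lt;/p&gt;

```shell
# Hypothetical output of: git diff --name-only HEAD~1 HEAD
changed_files="components/com.example.hello/recipe.yaml
components/com.example.world/src/main.py
README.md"

# Keep only paths under components/ and take the second path segment,
# which is the folder (and therefore component) name
components=$(printf '%s\n' "$changed_files" | grep "^components/" | cut -d '/' -f 2)
echo "$components"
```

&lt;p&gt;Note that a component touched by several files appears once per file; piping the list through &lt;code&gt;sort -u&lt;/code&gt; would deduplicate it.&lt;/p&gt;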

&lt;h3&gt;
  
  
  2. Deploy Job
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;deploy&lt;/code&gt; job runs after the &lt;code&gt;publish&lt;/code&gt; job has completed successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  deploy:
    name: Component deploy
    runs-on: ubuntu-latest
    needs: publish
    permissions:
      id-token: write
      contents: read

    steps:
    - name: Checkout
      uses: actions/checkout@v3
      with:
        fetch-depth: 0
        ref: ${{ github.head_ref }}

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v1
      with:
        role-to-assume: ${{ secrets.OIDC_ROLE_AWS_ROLE_TO_ASSUME }}
        aws-region: ${{ secrets.OIDC_ROLE_AWS_REGION }}

    - name: Deploy Greengrass components
      run: |
        export AWS_ACCOUNT_ID=$(aws sts get-caller-identity |  jq -r '.Account')
        export AWS_REGION=${GREENGRASS_REGION}
        # Thing Group is the name of the branch
        export THING_GROUP=${GITHUB_REF#refs/heads/}

        CHANGED_COMPONENTS=$(git diff --name-only HEAD~1 HEAD | grep "^components/" | cut -d '/' -f 2)

        if [ -z "$CHANGED_COMPONENTS" ]; then
          echo "No need to update deployment"
        else
          envsubst &amp;lt; "deployment.json.template" &amp;gt; "deployment.json"

          for component in $CHANGED_COMPONENTS
          do
            version=$(aws greengrassv2 list-component-versions \
              --output text \
              --no-paginate \
              --arn arn:aws:greengrass:${AWS_REGION}:${AWS_ACCOUNT_ID}:components:${component} \
              --query 'componentVersions[0].componentVersion')

            jq '.components[$component].componentVersion = $version' --arg component $component --arg version $version deployment.json &amp;gt; "tmp" &amp;amp;&amp;amp; mv "tmp" deployment.json

          done

          # deploy
          aws greengrassv2 create-deployment \
            --cli-input-json file://deployment.json \
            --region ${AWS_REGION}

          echo "Deployment finished!"
        fi

      working-directory: ${{ env.working-directory }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It follows a similar structure to the &lt;code&gt;publish&lt;/code&gt; job, starting with checking out the code from the repository and configuring AWS credentials using the same secrets.&lt;/p&gt;

&lt;p&gt;In the deployment step, it first identifies any changed components in a similar way as in the &lt;code&gt;publish&lt;/code&gt; job. If no components have changed, it does not proceed with deployment. If there are changed components, it prepares a &lt;code&gt;deployment.json&lt;/code&gt; file from the template, replacing placeholders with the actual values. It then gets the version of the changed components from AWS Greengrass and updates the &lt;code&gt;deployment.json&lt;/code&gt; file with these versions.&lt;/p&gt;
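&lt;p&gt;The version-injection step can be sketched with a toy document. The component name and version below are illustrative; the &lt;code&gt;jq&lt;/code&gt; expression is the one the workflow uses:&lt;/p&gt;

```shell
# Stand-in for the rendered deployment.json
doc='{"components":{"com.example.hello":{"componentVersion":""}}}'
component="com.example.hello"
version="1.0.3"

# Pin the resolved component version into the document
updated=$(printf '%s' "$doc" | jq -c '.components[$component].componentVersion = $version' --arg component "$component" --arg version "$version")
echo "$updated"
# prints: {"components":{"com.example.hello":{"componentVersion":"1.0.3"}}}
```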

&lt;p&gt;Finally, it creates a deployment using the &lt;code&gt;aws greengrassv2 create-deployment&lt;/code&gt; command, providing the &lt;code&gt;deployment.json&lt;/code&gt; file as input and setting the region to the one specified in the &lt;code&gt;AWS_REGION&lt;/code&gt; environment variable.&lt;br&gt;
Here it is important to note that the Thing Group is taken from the name of the branch (&lt;code&gt;THING_GROUP=${GITHUB_REF#refs/heads/}&lt;/code&gt;), so that different branches can map to different Thing Groups, as discussed above. In case the thing group does not exist yet, you can create it with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iot create-thing-group --thing-group-name main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, whenever there is a new commit on the main branch, the workflow will run and issue a deployment to the specified group of devices.&lt;/p&gt;
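&lt;p&gt;The branch-to-thing-group mapping relies on plain shell prefix removal, which can be checked in isolation:&lt;/p&gt;

```shell
# GITHUB_REF as GitHub Actions sets it for a push to a branch
GITHUB_REF="refs/heads/main"

# Strip the refs/heads/ prefix; the remainder (the branch name)
# becomes the target thing group
THING_GROUP=${GITHUB_REF#refs/heads/}
echo "$THING_GROUP"   # prints: main
```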

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog post, we've taken a deep dive into AWS IoT Greengrass V2, focusing on the usage of the Greengrass Development Kit (GDK) and how it can be automated with GitHub Actions. We started by setting up the GDK, explored the various components and how they interact, then we moved on to setting up a GitHub Actions workflow to automate the entire process.&lt;/p&gt;

&lt;p&gt;By leveraging AWS IoT Greengrass, the GDK, and GitHub Actions, you can create a powerful, scalable, and automated IoT solution. Whether you're managing a small group of IoT devices or a large fleet, this approach offers a robust and efficient way to handle your IoT application deployments.&lt;/p&gt;

&lt;p&gt;That's all for this blog post. We hope you found it informative and that it helps you on your journey to creating and managing IoT solutions with AWS IoT Greengrass. Happy coding!&lt;/p&gt;

&lt;p&gt;All the above code and setup can be referenced here:&lt;br&gt;
&lt;a href="https://github.com/aws-iot-builder-tools/greengrass-continuous-deployments"&gt;https://github.com/aws-iot-builder-tools/greengrass-continuous-deployments&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have any feedback about this post, or you would like to see more related content, please reach out to me here, or on &lt;a href="https://twitter.com/nenadilic84"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/nenadilic84/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
<category>greengrass</category>
      <category>aws</category>
      <category>iot</category>
    </item>
    <item>
      <title>Managing Per-Device Configuration with an AWS IoT Greengrass Fleet</title>
      <dc:creator>Michael Dombrowski</dc:creator>
      <pubDate>Thu, 01 Jun 2023 22:41:26 +0000</pubDate>
      <link>https://dev.to/iotbuilders/managing-per-device-configuration-with-an-aws-iot-greengrass-fleet-4obk</link>
      <guid>https://dev.to/iotbuilders/managing-per-device-configuration-with-an-aws-iot-greengrass-fleet-4obk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;One of the best parts of working on AWS IoT Greengrass is the opportunity to talk to our customers and see all the varied use cases they have. Through these discussions we’ve identified some common patterns that I will share here with the wider IoT builder community.&lt;/p&gt;

&lt;p&gt;In this first article, I’ll cover a couple approaches to manage a fleet of many devices using AWS IoT Thing Groups, but with per-device configuration.&lt;/p&gt;

&lt;p&gt;Imagine that you are building a solution for ACME Corporation’s factories to collect sensor readings and upload them into AWS IoT Core using MQTT. There are 10 factories around the world and each factory has 10 lines and 10 cells in a line. In this example, there is 1 Greengrass core device per cell, so this means that you need to manage 10x10x10 = 1,000 Greengrass devices. Greengrass allows you to deploy to a single device or to a group of devices in an AWS IoT Thing Group. This means that you could manage all 1,000 devices uniquely, but that may be an unreasonable operational burden and you instead want to manage the 1,000 devices as a group.&lt;/p&gt;

&lt;p&gt;Now that you have decided to manage the devices as a single Thing Group, there is a problem: your solution requires that each device has some amount of unique configuration, such as (1) which factory it is in, (2) which line, and (3) which workcell. These configurations are unlikely to ever change, and if they do change it will be at a very low frequency (less than once per day, for example). Other solutions would be more appropriate for high-frequency configuration changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;To solve the problem of treating all the devices as a group while still needing unique configuration, I suggest creating a “configuration holder” Greengrass component such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
RecipeFormatVersion: "2020-01-25"
ComponentName: "ConfigHolder"
ComponentVersion: "1.0.0"
ComponentType: "aws.greengrass.generic"
ComponentDescription: "Holds configuration, does nothing."
ComponentPublisher: "ACME Corp"
ComponentConfiguration:
  DefaultConfiguration: {}
Manifests:
- Lifecycle: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This component will do absolutely nothing on its own; it only exists to hold the per-device configuration that you need in your solution. I will show two different ways to use this component: the first uses a one-time per-device deployment, and the second sets up the component during Greengrass installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  One-time Per-device Deployment
&lt;/h2&gt;

&lt;p&gt;As mentioned before, you can deploy to Greengrass devices either individually or to a group of devices. With this solution, you will deploy the ConfigHolder component with the unique device configuration to the specific individual device that needs that configuration. At the same time, you will have a different Greengrass deployment to a group of devices which deploys your business logic components. Your business logic components will depend on the configuration holder deployment to provide the necessary unique configuration. Any shared configuration may go into the business logic components.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a deployment targeting the individual device&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add the ConfigHolder component - Important: Add only the ConfigHolder here and nothing else&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3iyqzjo601ink7mbscy6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3iyqzjo601ink7mbscy6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Configure the ConfigHolder component with the unique configuration&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcevrnzqn7c15nfhw837b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcevrnzqn7c15nfhw837b.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy this deployment and wait for it to complete successfully&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your device now has the ConfigHolder component properly configured with its unique configuration. Now we need to talk about how to actually use this configuration on the device.&lt;/p&gt;

&lt;p&gt;As an example, I have created a business logic component helpfully named BusinessLogic as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
RecipeFormatVersion: "2020-01-25"
ComponentName: "BusinessLogic"
ComponentVersion: "1.0.0"
ComponentType: "aws.greengrass.generic"
ComponentDescription: "Does the work"
ComponentPublisher: "ACME Corp"
ComponentConfiguration:
  DefaultConfiguration: {}
ComponentDependencies:
  ConfigHolder:
    VersionRequirement: ^1.0.0
Manifests:
- Lifecycle:
    Run: &amp;gt;-
      echo "Running with config from holder: {ConfigHolder:configuration:/workcell}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This component has a dependency on the ConfigHolder component and uses interpolation to extract the unique configuration and use it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Important: In this example, BusinessLogic is deployed to the Greengrass core device via a group deployment and ConfigHolder is deployed via an individual deployment. If the Greengrass core device is already in the thing group, the thing group deployment may execute before the individual deployment which means that BusinessLogic will execute when ConfigHolder hasn’t been configured.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are two ways to address this issue. The most conceptually simple is to ensure that the individual deployment completes before adding the device into the thing group, which means that the thing group deployment will not execute until ConfigHolder is already configured. The more robust way is to have validation logic in your business logic which checks whether the configuration is present and makes sense. If it does not make sense, then your business logic should not execute, because it doesn’t have all the information it needs to execute correctly. Make sure that your business logic component does not exit with an error, because this will make the thing group deployment fail and roll back (if configured to roll back), and you would then need to manually retry the deployment. When ConfigHolder is later deployed with the configuration, the BusinessLogic component will be restarted because the configuration that is interpolated into the run script has changed.&lt;/p&gt;
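&lt;p&gt;A minimal sketch of such a validation guard, assuming the three configuration keys used in this article (the function name is illustrative). Crucially, it never exits with an error, so a group deployment never fails or rolls back just because ConfigHolder has not been configured yet:&lt;/p&gt;

```shell
# Prints "ready" when all required keys are present, "waiting" otherwise;
# always exits 0 so the thing group deployment does not fail and roll back
check_config() {
  if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
    echo "waiting"
  else
    echo "ready"
  fi
}

# Before the individual deployment lands, interpolation yields empty values
check_config "" "" ""               # prints: waiting
# After ConfigHolder is configured, the real values appear
check_config "Madison" "1" "weld"   # prints: ready
```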

&lt;p&gt;Individual device deployments utilize the IoT Thing’s Shadow to send the deployment information, so you will be charged for the Shadow usage required to execute the deployment on each of your devices. You can avoid this cost, and the deployment ordering issue mentioned above, by trying the second way to use ConfigHolder, which is to configure it when installing Greengrass.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure ConfigHolder on Installation
&lt;/h2&gt;

&lt;p&gt;With this approach, you will provide an initial configuration file to Greengrass during the installation which contains the ConfigHolder component and the desired unique configuration. This approach is not mutually exclusive with the individual device deployment described above; you may want to use this approach at first and then update the unique configuration over time using individual device deployments.&lt;/p&gt;

&lt;p&gt;You may already use the initial configuration file during installation for port, proxy, or provisioning settings, but if not, don’t worry, it is quite simple. When installing Greengrass, add the command line option &lt;code&gt;--init-config initial-config.yaml&lt;/code&gt;; this option can be combined with other options that you’re using, such as &lt;code&gt;--provision&lt;/code&gt;.&lt;/p&gt;
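&lt;p&gt;For reference, the option slots into the installer invocation roughly like this (the region, thing name, and installer path are illustrative; combine with whatever other options you already use):&lt;/p&gt;

```shell
sudo -E java -Droot=/greengrass/v2 -Dlog.store=FILE \
  -jar ./GreengrassInstaller/lib/Greengrass.jar \
  --aws-region eu-west-1 \
  --thing-name MyGreengrassCore \
  --provision true \
  --init-config initial-config.yaml
```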

&lt;p&gt;I will create a file: &lt;code&gt;initial-config.yaml&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  ConfigHolder:
    componentType: "GENERIC"
    configuration:
      factory: "Madison"
      line: "1"
      workcell: "weld"
    dependencies: []
    lifecycle: {}
    version: "1.0.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here I describe the ConfigHolder component that you’ve seen before along with the same configuration as before. When Greengrass finishes the installation, it will have this component as part of its configuration. It can then receive the thing group deployment and BusinessLogic will be able to pick up the configuration as before.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration over IPC
&lt;/h2&gt;

&lt;p&gt;In BusinessLogic, I’ve shown how configuration is interpolated into the recipe and that will work for any component. Now though, I will show how to use the configuration in a more “native” way by using Greengrass IPC to read configuration and subscribe to configuration changes in order to react without restarting the business logic component and without needing to interpolate the configuration into the recipe. I will also take this opportunity to show off a component using NodeJS as we now have a developer preview version of &lt;a href="https://github.com/aws/aws-iot-device-sdk-js-v2/tree/main/samples/node/gg_ipc" rel="noopener noreferrer"&gt;Greengrass IPC for Node&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the code below I use the developer preview version of Greengrass IPC for NodeJS in order to &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the configuration from ConfigHolder&lt;/li&gt;
&lt;li&gt;Subscribe to changes in configuration in ConfigHolder&lt;/li&gt;
&lt;li&gt;Get the configuration from ConfigHolder again if any of it ever changes
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { greengrasscoreipc } from 'aws-iot-device-sdk-v2';

const CONFIG_COMPONENT = "ConfigHolder";

async function main() {
    try {
        let client = greengrasscoreipc.createClient();

        await client.connect();

        const config = await client.getConfiguration({ componentName: CONFIG_COMPONENT, keyPath: [] });
        console.log("Got initial config", JSON.stringify(config.value));
        console.log("Subscribing to config changes");

        // Setup subscription handle
        const subscription_handle = client.subscribeToConfigurationUpdate({ componentName: CONFIG_COMPONENT, keyPath: [] });
        // Setup listener for config change events
        subscription_handle.on("message", async (event) =&amp;gt; {
            console.log("Config changed, will pull full new config immediately", JSON.stringify(event.configurationUpdateEvent?.keyPath));

            const config = await client.getConfiguration({ componentName: CONFIG_COMPONENT, keyPath: [] });
            console.log("Got new full config", JSON.stringify(config.value));
        });

        // Perform the subscription
        await subscription_handle.activate();
        console.log("Subscribed to config changes");
    } catch (err) {
        console.log("Aw shucks: ", err);
    }
}

main();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full component code is available &lt;a href="https://github.com/aws-greengrass/aws-greengrass-component-examples/tree/main/blog-config-node" rel="noopener noreferrer"&gt;here&lt;/a&gt;, see the readme in the repository for instructions to build and publish the component into your account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I’ve covered multiple ways that you can use two components to have a mix of per-device and fleet-wide configurations. I hope that this blog is helpful to builders who want to simplify Greengrass fleet management while still allowing for per-device configurations as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/MikeDombo" rel="noopener noreferrer"&gt;Follow me on GitHub&lt;/a&gt; and I look forward to your comments and suggestions for future topics in the comments!&lt;/p&gt;

</description>
      <category>greengrass</category>
      <category>iot</category>
      <category>aws</category>
    </item>
    <item>
      <title>Coding IoT with Amazon CodeWhisperer AI Coding Companion</title>
      <dc:creator>Alina Dima</dc:creator>
      <pubDate>Fri, 12 May 2023 13:51:19 +0000</pubDate>
      <link>https://dev.to/iotbuilders/coding-iot-with-amazon-codewhisperer-ai-coding-companion-4abf</link>
      <guid>https://dev.to/iotbuilders/coding-iot-with-amazon-codewhisperer-ai-coding-companion-4abf</guid>
      <description>&lt;p&gt;Amazon CodeWhisperer AI Coding Companion has been &lt;a href="https://aws.amazon.com/about-aws/whats-new/2023/04/amazon-codewhisperer-generally-available/" rel="noopener noreferrer"&gt;generally available&lt;/a&gt; since April 13th, 2023. Naturally, curiosity got the best of me, so I decided to see if I could rapidly build an IoT application, and have the AI do most of the coding.&lt;/p&gt;

&lt;p&gt;The goal of this experiment was to set up the foundations of a simple IoT application, in about 30 minutes. In terms of tools, I used &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt; for infrastructure (TypeScript). I had not used CDK before, even though I had years of experience with AWS CloudFormation, SAM, and Amplify. For the IoT app itself, I used JavaScript, the &lt;a href="https://github.com/mqttjs" rel="noopener noreferrer"&gt;MQTT.js&lt;/a&gt; library, and the &lt;a href="https://github.com/aws/aws-sdk-js-v3" rel="noopener noreferrer"&gt;AWS SDK v3&lt;/a&gt; for the AWS IoT Control Plane calls. The idea behind mixing JavaScript and TypeScript was to evaluate how well CodeWhisperer could assist me in both languages. &lt;/p&gt;

&lt;p&gt;Curious how good a team CodeWhisperer and I made in this experiment? Keep reading, because I will cover some of the great and not-so-great experiences of working with CodeWhisperer. &lt;/p&gt;

&lt;p&gt;For those interested in the build details, I have recorded the entire experience. It is available in a 2-episode series on the &lt;a href="https://www.youtube.com/@iotbuilders" rel="noopener noreferrer"&gt;IoT Builders YouTube channel&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Part 1 - CDK for IoT&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/C5OkYqcxcAQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Part 2 - Building the IoT App&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/S9zaFJwzCBs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The code is also available on GitHub: &lt;a href="http://bit.ly/cdk-iot-sample" rel="noopener noreferrer"&gt;http://bit.ly/cdk-iot-sample&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What did I build?
&lt;/h2&gt;

&lt;p&gt;The IoT application is shown in the diagram below. We have an AWS IoT Thing, connected to AWS IoT Core, subscribed to an MQTT topic, and publishing data on another MQTT topic. An AWS IoT Rule picks up the data and invokes a Lambda action which prints out the MQTT message. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyly0nro48km3toa4kd8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcyly0nro48km3toa4kd8.png" alt="IoT App Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This application was built in 2 steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The AWS Resources were created using CDK: an IoT Thing policy based on &lt;a href="https://docs.aws.amazon.com/iot/latest/developerguide/example-iot-policies.html" rel="noopener noreferrer"&gt;least privilege&lt;/a&gt; best practices, an IoT Rule, a Lambda action and of course the Resource policy allowing Lambda invocation from the IoT Rule. &lt;/li&gt;
&lt;li&gt;The IoT application code was written to: create an IoT Thing, create the identity, attach the IoT policy to the certificate, and the certificate to the Thing, create and configure the MQTT client, connect the client to IoT Core, subscribe and publish data. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon CodeWhisperer helped in both steps listed above. It was used to speed up development, by making whole-line and full-function code completions, automatically generating functions and code from natural language comments, and accelerating work with third party libraries like MQTT.js. &lt;/p&gt;
&lt;h2&gt;
  
  
  Getting started with Amazon CodeWhisperer
&lt;/h2&gt;

&lt;p&gt;As a user of &lt;a href="https://www.jetbrains.com/idea" rel="noopener noreferrer"&gt;IntelliJ IDEA&lt;/a&gt;, I integrated CodeWhisperer into my IDE in 3 easy steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 1&lt;/strong&gt;: Install the latest &lt;em&gt;AWS Toolkit plugin&lt;/em&gt; in IDEA. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 2&lt;/strong&gt;: In the IDE, open the AWS extension panel and click the &lt;em&gt;“Start”&lt;/em&gt; button under &lt;em&gt;Developer Tools → CodeWhisperer&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Step 3&lt;/strong&gt;: In the resulting pop-up, select the “Sign in with Builder ID” option. Use your personal email address to sign up and sign in with &lt;a href="https://docs.aws.amazon.com/signin/latest/userguide/sign-in-aws_builder_id.html" rel="noopener noreferrer"&gt;AWS Builder ID&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdsoh4nvs7pipzfvcg05.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdsoh4nvs7pipzfvcg05.gif" alt="CodeWhisperer SetUp"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  What was the Developer Experience like with CodeWhisperer for IoT?
&lt;/h2&gt;

&lt;p&gt;First off, as we saw above, CodeWhisperer is super easy to set up and start. &lt;/p&gt;
&lt;h3&gt;
  
  
  Generating Code from Comments
&lt;/h3&gt;

&lt;p&gt;CodeWhisperer was efficient at generating code from comments. Some interesting examples were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When prompted with the comment: 
&lt;code&gt;// Create AWS Lambda function, with inline code.&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;the &lt;strong&gt;result&lt;/strong&gt; was:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0al1e199x5yjrct0dz2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0al1e199x5yjrct0dz2n.png" alt="AI Lambda"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Here is another example of CodeWhisperer coming up with the code to create an IoT thing when prompted by comment:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujc6wedrjo94q9gl9t18.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujc6wedrjo94q9gl9t18.gif" alt="AI create IoT Thing"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Auto-Completion
&lt;/h3&gt;

&lt;p&gt;CodeWhisperer is a clever tool for auto-completion.  Below is an example of how I fixed the initially incorrect IoT policy using the auto-completion assistance of CodeWhisperer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ym27hbd01i35y12xlpo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ym27hbd01i35y12xlpo.gif" alt="AI IoT policy fix"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  A &lt;em&gt;Problematic&lt;/em&gt; IoT Policy
&lt;/h3&gt;

&lt;p&gt;While trying to get assistance from CodeWhisperer, I struggled with the creation of an IoT policy using the CDK SDK for TypeScript. Even after a few attempts at changing my comment prompts, I only got to an overly permissive and incorrect IoT policy (see below).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai57w97qll58syrstjcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai57w97qll58syrstjcb.png" alt="Incorrect IoT Policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;comment&lt;/em&gt; I provided was:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;// Create IoT policy for IoT Core. Restrict to account id and region. Allow connect with thing name, subscribe to topic and publish on topic.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Coming up with a correct policy, available &lt;a href="https://github.com/aws-iot-builder-tools/cdk-iot-sample/blob/main/lib/cdk_io_t-stack.ts" rel="noopener noreferrer"&gt;here&lt;/a&gt;, was eventually more effort than expected, although, as shown in the video above, CodeWhisperer did a very good job with auto-completion. &lt;/p&gt;
&lt;h3&gt;
  
  
  Coaxing CodeWhisperer into Creating Correct Code
&lt;/h3&gt;

&lt;p&gt;Sometimes, CodeWhisperer generates code that looks just about right but actually is not. In fact, I sometimes got tricked into skipping the correct suggestion and going for the incorrect one. &lt;/p&gt;

&lt;p&gt;So, as a developer, you need to take care to pick the correct suggestions. Below is an example: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3096agsrm902hk4y3az3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3096agsrm902hk4y3az3.gif" alt="Incorrect SDK calls AI"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How to make the most of CodeWhisperer?
&lt;/h2&gt;

&lt;p&gt;If you want to make the most of working with CodeWhisperer, the following helps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;In general, the more code you already have, the better CodeWhisperer will do.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do your main imports first&lt;/strong&gt; - if you want to use the control plane calls for IoT Core, as in my example, import the ClientIoT first. Afterwards, CodeWhisperer will better anticipate what you are prompting it to do in your comments. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be very explicit in your comments&lt;/strong&gt; - if you want a JavaScript Lambda function, write that in your comments. Be explicit about the type of resources and the nature of the code (if you want a function, say so). The more detailed and explicit your comments are, the more success you will have. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pay attention&lt;/strong&gt; - sometimes CodeWhisperer provides multiple suggestions, and if you are not paying close attention, you can be tricked into choosing an incorrect one (as in the example above). Scroll through the proposals and choose the correct one.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Know when to give up and move on&lt;/strong&gt; - if full code generation from comments does not work out-of-the-box (as in my example with the IoT policy), cut your losses, use the AWS documentation to start off correctly, and then use CodeWhisperer for auto-completion once you are on the right track. &lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Conclusion and Useful Links
&lt;/h2&gt;

&lt;p&gt;To sum up, the experiment of building an IoT application with the assistance of CodeWhisperer was a success. I aimed for 30 minutes; in reality it probably took about 40, excluding the introduction, explanation, and conclusion parts of the video, and the writing of this blog post. &lt;/p&gt;

&lt;p&gt;If you want to watch the entire experience, it is available in two episodes on YouTube: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/C5OkYqcxcAQ" rel="noopener noreferrer"&gt;Part 1-CDK&lt;/a&gt;, &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://youtu.be/S9zaFJwzCBs" rel="noopener noreferrer"&gt;Part 2 - IoT App&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The code for the IoT application is available on GitHub: &lt;a href="http://bit.ly/cdk-iot-sample" rel="noopener noreferrer"&gt;http://bit.ly/cdk-iot-sample&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Amazon CodeWhisperer is an AI-powered coding assistant that provides real-time recommendations in your IDE based on your existing code and comments. The tool integrates well with various IDEs, including JetBrains. This post covered my developer experience building an IoT application in about 30 minutes with the assistance of CodeWhisperer, with examples of what went well and what did not. &lt;/p&gt;

&lt;p&gt;For more information about CodeWhisperer, have a look at the &lt;a href="https://aws.amazon.com/codewhisperer/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. There is also a guide on how to get started with &lt;a href="https://docs.aws.amazon.com/codewhisperer/latest/userguide/language-ide-support.html" rel="noopener noreferrer"&gt;CodeWhisperer and JetBrains&lt;/a&gt; IDEs. &lt;/p&gt;

&lt;p&gt;To watch more such content, subscribe to &lt;a href="https://www.youtube.com/@iotbuilders" rel="noopener noreferrer"&gt;IoT Builders on YouTube&lt;/a&gt; or follow &lt;a href="https://dev.to/iotbuilders"&gt;IoT Builders&lt;/a&gt; on dev.to.&lt;/p&gt;
&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;


&lt;div class="ltag__user ltag__user__id__934452"&gt;
    &lt;a href="/fay_ette" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F934452%2Fc6e04565-b2d8-4af5-9f35-32989d5e7b88.jpg" alt="fay_ette image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/fay_ette"&gt;Alina Dima&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/fay_ette"&gt;An engineer for about 20 years, love solving real world problems with code. I simplify complex problems to help developer communities build better and faster with AWS IoT. &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>iot</category>
      <category>codewhisperer</category>
      <category>cdk</category>
    </item>
    <item>
      <title>Introduction to Greengrass Components - 101</title>
      <dc:creator>Ayush Aggarwal</dc:creator>
      <pubDate>Tue, 09 May 2023 16:46:05 +0000</pubDate>
      <link>https://dev.to/iotbuilders/introduction-to-greengrass-components-101-2ma5</link>
      <guid>https://dev.to/iotbuilders/introduction-to-greengrass-components-101-2ma5</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html" rel="noopener noreferrer"&gt;AWS IoT Greengrass&lt;/a&gt; is an open source Internet of Things (IoT) edge runtime and cloud service that helps you build, deploy and manage IoT applications on your fleet of edge devices. In Greengrass parlance, IoT applications running on a Greengrass device is called a Greengrass component. Greengrass users can develop components and deploy on their edge devices for various uses such as acting locally on data generated at the edge, filtering and aggregating data generated at edge and running predictions based on machine learning models at the edge. In this blog, we will dive deeper into Greengrass components. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vbgngnk2ez9m718loo7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vbgngnk2ez9m718loo7.png" alt="AWS IoT Greengrass"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Greengrass Components ?
&lt;/h2&gt;

&lt;p&gt;AWS IoT Greengrass components are software modules that you deploy to Greengrass core devices. In simpler words, all business logic that you want to run as code on an edge device has to be modeled as a component that Greengrass can understand, install, and run. Components can represent your custom applications or AWS-provided applications, runtime installers, libraries, or any code that you would run on a device. &lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Components:
&lt;/h2&gt;

&lt;p&gt;There are primarily three types of components in the Greengrass world: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Greengrass provided Components&lt;/strong&gt; - These are components provided and maintained by AWS Greengrass for customers to use at no charge. AWS has identified several features that are common across customers’ use cases and has prebuilt them. You can use these components independently without building any custom components, or you can add them as a dependency (more on this later) of your own component, and they will automatically be deployed to the edge device with your custom component. Some examples of public components are: (a) &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/stream-manager-component.html" rel="noopener noreferrer"&gt;stream manager&lt;/a&gt; - streams high-volume data from the edge device to the AWS cloud; (b) &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/log-manager-component.html" rel="noopener noreferrer"&gt;log manager&lt;/a&gt; - collects and uploads logs from your edge device to CloudWatch. For the complete list of components prebuilt by AWS Greengrass, please refer to &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/public-components.html" rel="noopener noreferrer"&gt;this link&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community provided Components -&lt;/strong&gt; These are components built by the developer community on GitHub under the umbrella of Amazon open source software. To use these components, you can pull the source code, make modifications for your use case, create your own private components, and deploy them to your edge devices. You can find these components through the &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/greengrass-software-catalog.html" rel="noopener noreferrer"&gt;Greengrass Software Catalog&lt;/a&gt;. You can also learn about the maintenance and development of these components in the &lt;a href="https://github.com/aws-greengrass/aws-greengrass-software-catalog/blob/main/CONTRIBUTING.md" rel="noopener noreferrer"&gt;Github contributing guidelines.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private Components&lt;/strong&gt; - These are the components that you develop and manage. These components are private to your AWS account, i.e. no other user has read/write access to them. You write the software of your IoT application, for example an application that collects data from a temperature sensor and adjusts the room’s thermostat based on certain conditions. You then follow a series of steps (to be discussed in later blogs) to model your IoT application as a Greengrass component that can be deployed to your edge device.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The mental model to build is that components are modular software applications that run on an edge device. Our customers often use a combination of the three types of components to build a complete IoT solution. &lt;/p&gt;

&lt;h2&gt;
  
  
  Core Concepts of a Component:
&lt;/h2&gt;

&lt;p&gt;Let’s dive into the core concepts of a component and then understand how they connect with each other to create a Greengrass component. Each component has two major parts - an artifact and a recipe: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Artifact&lt;/strong&gt; - An artifact is the business logic of your component. It can include scripts, compiled code, static resources, and any other files that a component consumes. You develop the application code you want to run on a Greengrass core device. Greengrass supports three locations where you can store the artifact(s) of your component: 

&lt;ol&gt;
&lt;li&gt;S3 Bucket - This S3 bucket belongs to your own account.&lt;/li&gt;
&lt;li&gt;Lambda - You can import AWS Lambda functions as artifacts of components that run on AWS IoT Greengrass core devices. In this case you don’t need to upload your application code to S3. You may choose this option, for instance, if you already have application code in AWS Lambda functions that you want to deploy to core devices. Don’t worry about the details; more on containerization of Lambda functions will follow in our future blog series. &lt;/li&gt;
&lt;li&gt;Docker container - You can store your artifacts in locations like Amazon ECR if you want to run your application code in a Docker container on your edge device. More details about running a Docker container on a Greengrass core device will follow in our future blog series. Examples of locations where your Docker images can be stored: 

&lt;ol&gt;
&lt;li&gt;Public and private image repositories in Amazon Elastic Container Registry (Amazon ECR)&lt;/li&gt;
&lt;li&gt;Public Docker Hub repository&lt;/li&gt;
&lt;li&gt;Public Docker Trusted Registry&lt;/li&gt;
&lt;li&gt;S3 bucket (in your personal account)&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;Example of how an artifact is specified in a recipe: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"Artifacts": [
     {
        "URI": "s3://{COMPONENT_NAME}/{COMPONENT_VERSION}/HelloWorld.zip",
        "Unarchive": "ZIP"
     }
]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Recipe -&lt;/strong&gt; A component recipe is a single YAML or JSON configuration file in which the component author defines the runtime characteristics of the component, so that Greengrass can execute it and manage its lifecycle. You can think of the recipe as a set of instructions that Greengrass consumes to interact with your application code, for example to determine how to download artifacts and install your component on the edge device, pull your component dependencies, and decide whether the component artifacts need to be unzipped or installed as-is. We will dive deeper into recipes in later blogs.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create Your First Component:
&lt;/h2&gt;

&lt;p&gt;In the section above, you learned about the core concepts of a component i.e. an artifact and a recipe. Now, let’s see how the two concepts can be used in a series of steps to create your first private component. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Create your private component’s artifact, in this case a simple Hello-World Python code
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# hello-world.py

import sys
import datetime

message = f"Hello, {sys.argv[1]}! Current time: {str(datetime.datetime.now())}."
message += " Greetings from your first Greengrass component."
# Print the message to stdout.
print(message)

# Append the message to the log file.
with open('/tmp/RatchetWorkshop_HelloWorld.log', 'a') as f:
    print(message, file=f)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Step 2: Upload your artifact to an Amazon S3 bucket
&lt;/h3&gt;

&lt;p&gt;Let’s use S3 to store the artifact in this blog (other sources will follow in later blogs). You can use an existing S3 bucket or create a new one. For simplicity, let’s create a new S3 bucket: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Setup AWS CLI on your machine:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Follow the link for your operating system:
# https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Create a Bucket:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ aws s3 mb s3://greengrass-component-artifacts-blog --region us-east-1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Output:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

make_bucket: greengrass-component-artifacts-blog


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Upload the Artifact:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ aws s3 cp ~/hello-world.py s3://greengrass-component-artifacts-blog/artifacts/com.example.HelloWorld/1.0.0/hello_world.py --region us-east-1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Output:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

./hello-world.py to s3://greengrass-component-artifacts-blog/artifacts/com.example.HelloWorld/1.0.0/hello_world.py


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
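&lt;p&gt;Note that the object key above follows the convention &lt;code&gt;artifacts/{component-name}/{component-version}/{file}&lt;/code&gt;. A small sketch of composing it (the bucket name is the illustrative one from this post); the resulting URI must match the &lt;code&gt;URI&lt;/code&gt; field in the recipe exactly:&lt;/p&gt;

```shell
# Compose the artifact URI following the artifacts/{component}/{version}/{file}
# convention used in this post. The bucket name is illustrative.
BUCKET="greengrass-component-artifacts-blog"
COMPONENT="com.example.HelloWorld"
VERSION="1.0.0"
ARTIFACT_URI="s3://${BUCKET}/artifacts/${COMPONENT}/${VERSION}/hello_world.py"
echo "${ARTIFACT_URI}"

# The same URI goes into the recipe's "Artifacts" section and the upload command:
# aws s3 cp ~/hello-world.py "${ARTIFACT_URI}" --region us-east-1
```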
&lt;h3&gt;
  
  
  Step 3: Create the recipe file for your private component:
&lt;/h3&gt;

&lt;p&gt;First, take the S3 URI to which your artifact was uploaded in the step above; we’ll reference it in the recipe: &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"Artifacts": [
  {
    "URI": "s3://greengrass-component-artifacts-blog/artifacts/com.example.HelloWorld/1.0.0/hello_world.py"
  }
]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let’s create the recipe file for your custom component: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# Path: ~/GreengrassComponents/com.example.HelloWorld-1.0.0.json

{
  "RecipeFormatVersion": "2020-01-25",
  "ComponentName": "com.example.HelloWorld",
  "ComponentVersion": "1.0.0",
  "ComponentDescription": "My first AWS IoT Greengrass component.",
  "ComponentPublisher": "Your Organization Identifier",
  "ComponentConfiguration": {
    "DefaultConfiguration": {
      "Message": "world"
    }
  },
  "Manifests": [
    {
      "Platform": {
        "os": "linux"
      },
      "Lifecycle": {
        "Run": "python3 -u {artifacts:path}/hello_world.py '{configuration:/Message}'"
      },
   "Artifacts": [
        {
          "URI": "s3://greengrass-component-artifacts-blog/artifacts/com.example.HelloWorld/1.0.0/hello_world.py"
        }
      ]
    }
  ]
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Don’t worry about the data structure of the recipe file and what the different attributes mean. We’ll dive into each of them in our future blogs. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Create a new private component version in AWS Greengrass-V2
&lt;/h3&gt;

&lt;p&gt;Now that you have created the artifact and the recipe for your private component, let’s create the component using the &lt;a href="https://docs.aws.amazon.com/greengrass/v2/APIReference/API_CreateComponentVersion.html" rel="noopener noreferrer"&gt;CreateComponentVersion&lt;/a&gt; API of Greengrass-V2: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ aws greengrassv2 create-component-version --inline-recipe fileb://GreengrassComponents/com.example.HelloWorld-1.0.0.json --region us-east-1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Output:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "arn": "arn:aws:greengrass:us-east-1:223644865437:components:com.example.HelloWorld:versions:1.0.0",
    "componentName": "com.example.HelloWorld",
    "componentVersion": "1.0.0",
    "creationTimestamp": "2023-05-03T20:51:17.517000-07:00",
    "status": {
        "componentState": "REQUESTED",
        "message": "NONE",
        "errors": {},
        "vendorGuidance": "ACTIVE",
        "vendorGuidanceMessage": "NONE"
    }
} 


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output above shows the ARN (Amazon Resource Name) of your private component, confirming that the component has been created. When first created, the component is in a &lt;code&gt;REQUESTED&lt;/code&gt; state while the Greengrass service verifies it; if verification succeeds, the state changes to &lt;code&gt;DEPLOYABLE&lt;/code&gt;, which you can verify by calling the &lt;a href="https://docs.aws.amazon.com/greengrass/v2/APIReference/API_DescribeComponent.html" rel="noopener noreferrer"&gt;DescribeComponent&lt;/a&gt; API: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

$ aws greengrassv2 describe-component --arn "arn:aws:greengrass:us-east-1:223644865437:components:com.example.HelloWorld:versions:1.0.0" --region us-east-1


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Output:&lt;/em&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
    "arn": "arn:aws:greengrass:us-east-1:223644865437:components:com.example.HelloWorld:versions:1.0.0",
    "componentName": "com.example.HelloWorld",
    "componentVersion": "1.0.0",
    "creationTimestamp": "2023-05-03T20:51:17.517000-07:00",
    "publisher": "Your Organization Identifier",
    "description": "My first AWS IoT Greengrass component.",
    "status": {
        "componentState": "DEPLOYABLE",
        "message": "NONE",
        "errors": {},
        "vendorGuidance": "ACTIVE",
        "vendorGuidanceMessage": "NONE"
    },
    "platforms": [
        {
            "attributes": {
                "os": "linux"
            }
        }
    ],
    "tags": {}
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; You just made your first private component in AWS Greengrass-V2. You can now &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/manage-deployments.html" rel="noopener noreferrer"&gt;deploy this component to your Greengrass core device&lt;/a&gt; and see it in action.&lt;/p&gt;
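&lt;p&gt;As a sketch of what that deployment step looks like from the CLI (the target ARN, deployment name, and account ID below are placeholders, not values from this post), &lt;code&gt;create-deployment&lt;/code&gt; takes a map of component name to version:&lt;/p&gt;

```shell
# Components map for the deployment: component name -> version to deploy.
cat > deployment-components.json <<'EOF'
{
  "com.example.HelloWorld": {
    "componentVersion": "1.0.0"
  }
}
EOF

# Then (requires configured AWS credentials and an existing core device;
# the thing ARN and deployment name are placeholders):
# aws greengrassv2 create-deployment \
#   --target-arn "arn:aws:iot:us-east-1:123456789012:thing/MyGreengrassCore" \
#   --deployment-name "hello-world-deployment" \
#   --components file://deployment-components.json \
#   --region us-east-1
```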

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;In this blog, we talked about components, primarily focusing on describing “what” they are. We learned about the different types of components available in AWS IoT Greengrass-V2 and how to build a mental model that pictures a software application as a component. You are probably curious to know: what are the different fields in the recipe, what do they mean, how do I deploy my component to a core device, and what is a Greengrass core device? Those are all great questions, and our upcoming blogs in this series will dive into each of these topics. &lt;br&gt;
Till then, happy coding. Please reach out if you have any questions or need help with your development; we're always happy to help.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Fleet Provisioning for Embedded Linux Devices with AWS IoT Greengrass</title>
      <dc:creator>Nenad Ilic</dc:creator>
      <pubDate>Thu, 04 May 2023 16:48:17 +0000</pubDate>
      <link>https://dev.to/iotbuilders/fleet-provisioning-for-embedded-linux-devices-with-aws-iot-greengrass-4h8b</link>
      <guid>https://dev.to/iotbuilders/fleet-provisioning-for-embedded-linux-devices-with-aws-iot-greengrass-4h8b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Managing a large fleet of embedded devices can be complex and challenging, particularly when it comes to creating a single image that can be flashed onto multiple devices. These devices must be able to self-provision, utilizing unique information such as their serial number, upon initial boot. In this blog post, we will discuss how AWS IoT Greengrass - Fleet Provisioning can streamline this process for embedded Linux devices, making it more efficient and reliable.&lt;/p&gt;

&lt;p&gt;For embedded systems engineers experienced in Embedded Linux and Yocto, we will guide you through building a Raspberry Pi Yocto image with Greengrass and the Fleet Provisioning plugin. This ensures seamless device provisioning and management, as well as automatic registration and configuration. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rzwpi641sz8z8rhkar0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rzwpi641sz8z8rhkar0.png" alt="Fleet provisioning by claim"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that the pre-provisioning Lambda is optional but encouraged in order to add an additional layer of security. We will not be covering it in this post. You can learn more about it &lt;a href="https://docs.aws.amazon.com/iot/latest/developerguide/pre-provisioning-hook.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With the stage set, let's dive into the prerequisites for setting up AWS IoT Greengrass and Fleet Provisioning for your embedded Linux devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving into the process of preparing the host and configuring the Yocto image build, it's essential to set up AWS IoT Core. This involves creating policies, obtaining claim certificates, and ensuring that the AWS CLI is installed and configured. General information on how to accomplish this can be found in the &lt;a href="https://docs.aws.amazon.com/greengrass/v2/developerguide/fleet-provisioning-setup.html" rel="noopener noreferrer"&gt;AWS IoT Greengrass Developer Guide&lt;/a&gt;.&lt;br&gt;
In summary, we will need:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A token exchange IAM role, which core devices use to authorize calls to AWS services, and an AWS IoT role alias that points to the token exchange role.&lt;/li&gt;
&lt;li&gt;An AWS IoT fleet provisioning template. The template must specify the information needed to create the thing and the policy that will be attached to the Greengrass core device. You can either reference an existing IoT policy by name or define the policy in the template.&lt;/li&gt;
&lt;li&gt;An AWS IoT provisioning claim certificate and private key for the fleet provisioning template.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Devices can be manufactured with a provisioning claim certificate and private key embedded in them. When a device connects to AWS IoT for the first time, it uses the claim certificate to register itself and exchange the claim certificate for a unique device certificate. The provisioning claim certificate needs an AWS IoT policy attached that allows devices to register and use the fleet provisioning template.&lt;/p&gt;

&lt;p&gt;To make this process more efficient, we can utilize a CloudFormation template that automates most of these steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: "2010-09-09"

Parameters:
  ProvisioningTemplateName:
    Type: String
    Default: 'GreengrassFleetProvisioningTemplate' 
  GGTokenExchangeRoleName:
    Type: String
    Default: 'GGTokenExchangeRole'
  GGFleetProvisioningRoleName:
    Type: String
    Default: 'GGFleetProvisioningRole'
  GGDeviceDefaultPolicyName:
    Type: String
    Default: 'GGDeviceDefaultIoTPolicy'
  GGProvisioningClaimPolicyName:
    Type: String
    Default: 'GGProvisioningClaimPolicy'

Resources:

  GGTokenExchangeRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Ref GGTokenExchangeRoleName
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - credentials.iot.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: '/'
      Policies:
        - PolicyName: !Sub ${GGTokenExchangeRoleName}Access
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - 'iot:DescribeCertificate'
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                  - 'logs:DescribeLogStreams'
                  - 's3:GetBucketLocation'
                Resource: '*'

  GGTokenExchangeRoleAlias:
    Type: AWS::IoT::RoleAlias
    Properties:
      RoleArn: !GetAtt GGTokenExchangeRole.Arn
      RoleAlias: !Sub ${GGTokenExchangeRoleName}Alias

  GGFleetProvisioningRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Ref GGFleetProvisioningRoleName
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - iot.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: '/'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSIoTThingsRegistration'

  GGDeviceDefaultPolicy:
    Type: AWS::IoT::Policy
    Properties:
      PolicyName: !Ref GGDeviceDefaultPolicyName
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Action:
            - 'iot:Connect'
            - 'iot:Publish'
            - 'iot:Subscribe'
            - 'iot:Receive'
            - 'greengrass:*'
          Resource: '*'
        - Effect: Allow
          Action:
            - 'iot:AssumeRoleWithCertificate'
          Resource: !GetAtt GGTokenExchangeRoleAlias.RoleAliasArn

  GGFleetProvisionTemplate:
    Type: AWS::IoT::ProvisioningTemplate
    Properties:
      TemplateName: !Ref ProvisioningTemplateName
      Description: 'Fleet Provisioning template for AWS IoT Greengrass.'
      Enabled: True
      ProvisioningRoleArn: !GetAtt GGFleetProvisioningRole.Arn
      TemplateBody: !Sub |+ 
        {
          "Parameters": {
            "ThingName": {
              "Type": "String"
            },
            "ThingGroupName": {
              "Type": "String"
            },
            "AWS::IoT::Certificate::Id": {
              "Type": "String"
            }
          },
          "Resources": {
            "GGThing": {
              "OverrideSettings": {
                "AttributePayload": "REPLACE",
                "ThingGroups": "REPLACE",
                "ThingTypeName": "REPLACE"
              },
              "Properties": {
                "AttributePayload": {},
                "ThingGroups": [
                  {
                    "Ref": "ThingGroupName"
                  }
                ],
                "ThingName": {
                  "Ref": "ThingName"
                }
              },
              "Type": "AWS::IoT::Thing"
            },
            "GGDefaultPolicy": {
              "Properties": {
                "PolicyName": "${GGDeviceDefaultPolicyName}"
              },
              "Type": "AWS::IoT::Policy"
            },
            "GGCertificate": {
              "Properties": {
                "CertificateId": {
                  "Ref": "AWS::IoT::Certificate::Id"
                },
                "Status": "Active"
              },
              "Type": "AWS::IoT::Certificate"
            }
          }
        }

  GGProvisioningClaimPolicy:
    Type: AWS::IoT::Policy
    Properties:
      PolicyName: !Ref GGProvisioningClaimPolicyName
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Action:
            - 'iot:Connect'
          Resource: '*'
        - Effect: Allow
          Action:
            - 'iot:Publish'
            - 'iot:Receive'
          Resource: 
            - !Sub 'arn:aws:iot:${AWS::Region}:${AWS::AccountId}:topic/$aws/certificates/create/*'
            - !Sub 'arn:aws:iot:${AWS::Region}:${AWS::AccountId}:topic/$aws/provisioning-templates/${ProvisioningTemplateName}/provision/*'
        - Effect: Allow
          Action:
            - 'iot:Subscribe'
          Resource:
            - !Sub 'arn:aws:iot:${AWS::Region}:${AWS::AccountId}:topicfilter/$aws/certificates/create/*'
            - !Sub 'arn:aws:iot:${AWS::Region}:${AWS::AccountId}:topicfilter/$aws/provisioning-templates/${ProvisioningTemplateName}/provision/*'

Outputs:

  GGTokenExchangeRole:
    Description: Name of token exchange role.
    Value: !Ref GGTokenExchangeRole
  GGTokenExchangeRoleAlias:
    Description: Name of token exchange role alias.
    Value: !Ref GGTokenExchangeRoleAlias
  GGFleetProvisionTemplate:
    Description: Name of Fleet provisioning template.
    Value: !Ref GGFleetProvisionTemplate
  GGProvisioningClaimPolicy:
    Description: Name of claim certificate IoT policy.
    Value: !Ref GGProvisioningClaimPolicy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file and create a CloudFormation stack from the saved template (here &lt;code&gt;gg-fp.yaml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation create-stack --stack-name GGFleetProvisoning --template-body file://gg-fp.yaml --capabilities CAPABILITY_NAMED_IAM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait a few minutes for the resources to be created. You can check the status from the CloudFormation console or with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation describe-stacks --stack-name GGFleetProvisoning
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create claim certificate
&lt;/h3&gt;

&lt;p&gt;These certificates will be embedded in our RPi SD card image and used to provision our devices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir claim-certs

export CERTIFICATE_ARN=$(aws iot create-keys-and-certificate \
    --certificate-pem-outfile "claim-certs/claim.cert.pem" \
    --public-key-outfile "claim-certs/claim.pubkey.pem" \
    --private-key-outfile "claim-certs/claim.pkey.pem" \
    --set-as-active \
    --query certificateArn)

curl -o "claim-certs/claim.root.pem" https://www.amazontrust.com/repository/AmazonRootCA1.pem

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
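
&lt;p&gt;As a quick sanity check (a sketch, not part of the original tooling), you can verify that all four claim files exist and are non-empty before they get baked into the image:&lt;br&gt;
&lt;/p&gt;

```shell
# Sanity-check sketch: confirm the claim files created above exist and are
# non-empty before embedding them in the image.
missing=0
for f in claim.cert.pem claim.pubkey.pem claim.pkey.pem claim.root.pem; do
    if [ ! -s "claim-certs/$f" ]; then
        echo "missing or empty: claim-certs/$f"
        missing=$((missing + 1))
    fi
done
echo "$missing problem(s) found"
```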



&lt;h3&gt;
  
  
  Attach the AWS IoT policy to the provisioning claim certificate
&lt;/h3&gt;

&lt;p&gt;Since we created an IoT policy named &lt;code&gt;GGProvisioningClaimPolicy&lt;/code&gt; with CloudFormation, we can simply reference it by name to attach it to the certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iot attach-policy --policy-name GGProvisioningClaimPolicy --target ${CERTIFICATE_ARN//\"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a Thing Group
&lt;/h3&gt;

&lt;p&gt;Once our devices are provisioned they will become part of this Thing Group, allowing us to target it later with fleet deployments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iot create-thing-group --thing-group-name EmbeddedLinuxFleet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, we are ready to build our RPi image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building RPi Image
&lt;/h2&gt;

&lt;p&gt;Building a Yocto image for Raspberry Pi requires several steps, including setting up the build environment, cloning the necessary repositories, configuring the build, and finally, building the image itself. Here's a step-by-step guide to help you through the process:&lt;/p&gt;

&lt;p&gt;Open a terminal window on a workstation that has all the prerequisites listed in the &lt;a href="https://docs.yoctoproject.org/brief-yoctoprojectqs/index.html" rel="noopener noreferrer"&gt;Yocto Project Build Doc&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;NOTE: For the sake of this tutorial, the variable &lt;code&gt;BASEDIR&lt;/code&gt; refers to the build environment parent directory. Here it is set to the current working directory. If you are using another partition as the base directory, please set it accordingly.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export BASEDIR=$(pwd)
export DIST=poky-rpi4
export B=kirkstone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clone the Poky base layer to include OpenEmbedded Core, Bitbake, and so forth to seed the Yocto build environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone -b $B git://git.yoctoproject.org/poky.git $BASEDIR/$DIST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clone additional dependent repositories. Note that we are cloning only what is required for AWS IoT Greengrass.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone -b $B git://git.openembedded.org/meta-openembedded \
    $BASEDIR/$DIST/meta-openembedded
git clone -b $B git://git.yoctoproject.org/meta-raspberrypi \
    $BASEDIR/$DIST/meta-raspberrypi
git clone -b $B git://git.yoctoproject.org/meta-virtualization \
    $BASEDIR/$DIST/meta-virtualization
git clone -b $B https://github.com/aws4embeddedlinux/meta-aws \
    $BASEDIR/$DIST/meta-aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Source the Yocto environment script. This seeds the &lt;code&gt;build/conf&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd $BASEDIR/$DIST
. ./oe-init-build-env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the necessary layers to &lt;code&gt;bblayers.conf&lt;/code&gt; using &lt;code&gt;bitbake-layers add-layer&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bitbake-layers add-layer ../meta-openembedded/meta-oe
bitbake-layers add-layer ../meta-openembedded/meta-python
bitbake-layers add-layer ../meta-openembedded/meta-filesystems
bitbake-layers add-layer ../meta-openembedded/meta-networking
bitbake-layers add-layer ../meta-virtualization
bitbake-layers add-layer ../meta-raspberrypi
bitbake-layers add-layer ../meta-aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure &lt;code&gt;local.conf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;First, here is the standard Raspberry Pi configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MACHINE ?= "raspberrypi4-64"

DISABLE_VC4GRAPHICS = "1"

# Parallelism Options
BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}"
PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}"

# Additional image features
USER_CLASSES ?= "buildstats"

# By default disable interactive patch resolution (tasks will just fail instead):
PATCHRESOLVE = "noop"

# Disk Space Monitoring during the build
BB_DISKMON_DIRS = "\
    STOPTASKS,${TMPDIR},1G,100K \
    STOPTASKS,${DL_DIR},1G,100K \
    STOPTASKS,${SSTATE_DIR},1G,100K \
    HALT,${TMPDIR},100M,1K \
    HALT,${DL_DIR},100M,1K \
    HALT,${SSTATE_DIR},100M,1K"

CONF_VERSION = "2"

DISTRO_FEATURES += "systemd"
DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
VIRTUAL-RUNTIME_init_manager = "systemd"
VIRTUAL-RUNTIME_initscripts = ""

IMAGE_FSTYPES = "rpi-sdimg"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The part to focus on is the Greengrass Fleet Provisioning configuration, which should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
IMAGE_INSTALL:append = " greengrass-bin "
GGV2_DATA_EP     = "xxx-ats.iot.&amp;lt;your aws region&amp;gt;.amazonaws.com"
GGV2_CRED_EP     = "xxx.iot.&amp;lt;your aws region&amp;gt;.amazonaws.com"
GGV2_REGION      = "&amp;lt;your aws region&amp;gt;"
GGV2_THING_NAME  = "ELThing"
GGV2_TES_RALIAS  = "GGTokenExchangeRoleAlias" # we got this from the cloudformation
GGV2_THING_GROUP = "EmbeddedLinuxFleet"

PACKAGECONFIG:pn-greengrass-bin = "fleetprovisioning"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we add &lt;code&gt;greengrass-bin&lt;/code&gt; to the image, provide the additional configuration required by the &lt;code&gt;config.yaml&lt;/code&gt;, and set &lt;code&gt;PACKAGECONFIG:pn-greengrass-bin = "fleetprovisioning"&lt;/code&gt; to enable the functionality.&lt;/p&gt;

&lt;p&gt;To get the AWS region and the IoT endpoints, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "GGV2_REGION="$(aws configure get region)
echo "GGV2_DATA_EP="$(aws --output text iot describe-endpoint \
    --endpoint-type iot:Data-ATS \
    --query 'endpointAddress')
echo "GGV2_CRED_EP="$(aws --output text iot describe-endpoint \
    --endpoint-type iot:CredentialProvider \
    --query 'endpointAddress')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
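
&lt;p&gt;To tie the two pieces together, the looked-up values can be substituted into the &lt;code&gt;local.conf&lt;/code&gt; fragment. The following sketch uses placeholder endpoint and region values so it is self-contained; in practice, substitute the output of the commands above:&lt;br&gt;
&lt;/p&gt;

```shell
# Sketch: generate the Greengrass fleet-provisioning fragment for local.conf.
# The region/endpoint values below are placeholders (assumptions), normally
# taken from `aws configure get region` and `aws iot describe-endpoint`.
GGV2_REGION="eu-west-1"
GGV2_DATA_EP="xxx-ats.iot.${GGV2_REGION}.amazonaws.com"
GGV2_CRED_EP="xxx.iot.${GGV2_REGION}.amazonaws.com"

fragment=$(mktemp)
cat > "$fragment" <<EOF
IMAGE_INSTALL:append = " greengrass-bin "
GGV2_DATA_EP     = "$GGV2_DATA_EP"
GGV2_CRED_EP     = "$GGV2_CRED_EP"
GGV2_REGION      = "$GGV2_REGION"
GGV2_THING_NAME  = "ELThing"
GGV2_TES_RALIAS  = "GGTokenExchangeRoleAlias"
GGV2_THING_GROUP = "EmbeddedLinuxFleet"
PACKAGECONFIG:pn-greengrass-bin = "fleetprovisioning"
EOF
cat "$fragment"
```

&lt;p&gt;Append the generated fragment to &lt;code&gt;conf/local.conf&lt;/code&gt; in the build directory.&lt;/p&gt;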



&lt;p&gt;Please note that each device needs a unique Thing Name, so the configured Thing Name is used as a prefix, and a script inside the &lt;code&gt;greengrass-bin&lt;/code&gt; recipe appends a unique device ID derived from the MAC address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
file_path="$1"
default_iface=$(busybox route | grep default | awk '{print $8}')
mac_address=$(busybox ifconfig "$default_iface" | grep -o -E '([[:xdigit:]]{1,2}:){5}[[:xdigit:]]{1,2}' | tr ':' '_')
sed -i "s/&amp;lt;unique&amp;gt;/$mac_address/g" "$file_path"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
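
&lt;p&gt;To illustrate what the script does, here is a small self-contained sketch of the same substitution, using a fixed MAC address instead of querying the default network interface:&lt;br&gt;
&lt;/p&gt;

```shell
# Demonstration sketch of the <unique> placeholder substitution performed by
# replace_board_id.sh, with a hard-coded MAC address for illustration.
config=$(mktemp)
printf 'thingName: "ELThing_<unique>"\n' > "$config"
mac_address="11_22_33_44_55_60"   # normally derived from the default interface
sed -i "s/<unique>/$mac_address/g" "$config"
cat "$config"   # thingName: "ELThing_11_22_33_44_55_60"
```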





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;meta-aws
└── recipes-iot
    └── aws-iot-greengrass
        └──files
            └── replace_board_id.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Feel free to replace this script with any other way of obtaining uniqueness, such as a device serial number.&lt;/p&gt;

&lt;p&gt;Finally, copy the claim credentials we generated earlier so they are included in the build:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp  "claim-certs/claim.cert.pem" \
    "claim-certs/claim.pkey.pem" \
    "claim-certs/claim.root.pem" \
    $BASEDIR/$DIST/meta-aws/recipes-iot/aws-iot-greengrass/files/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Adjust the paths based on where the certificates were generated and where the recipe resides.&lt;/p&gt;

&lt;p&gt;With everything in place, we can build the image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bitbake core-image-minimal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;⌛ A couple of hours later ⌛ the build should be complete, and we can find the resulting image in the following directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ls tmp/deploy/images/raspberrypi4-64/*sdimg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To flash the image onto an SD card, use a tool like &lt;code&gt;dd&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dd if=tmp/deploy/images/raspberrypi4-64/core-image-minimal-raspberrypi4-64.sdimg of=/dev/sdX bs=4M
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to replace &lt;code&gt;/dev/sdX&lt;/code&gt; with the appropriate device identifier for your SD card.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ Please double-check the SD card identifier, as a mistake here can wipe your workstation's disk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Powering the Device for the First Time
&lt;/h2&gt;

&lt;p&gt;Once the SD card is inserted into the Raspberry Pi with power and internet connected, the device should perform provisioning and appear in the list of Greengrass core devices:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws greengrassv2 list-core-devices

{
    "coreDevices": [
        {
            "coreDeviceThingName": "ELThing_11_22_33_44_55_60",
            "status": "HEALTHY",
            "lastStatusUpdateTimestamp": "2023-04-25T15:39:00.703000+00:00"
        },
        {
            "coreDeviceThingName": "ELThing_11_22_33_44_55_61",
            "status": "HEALTHY",
            "lastStatusUpdateTimestamp": "2023-03-31T03:11:17.911000+00:00"
        },
        {
            "coreDeviceThingName": "ELThing_11_22_33_44_55_62",
            "status": "HEALTHY",
            "lastStatusUpdateTimestamp": "2023-02-25T15:17:29.505000+00:00"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success! &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To sum it up, managing a large fleet of embedded Linux devices can be a complex and challenging task, especially when it comes to creating a single image that can be flashed onto multiple devices. AWS IoT Greengrass with Fleet Provisioning can streamline this process and make it more efficient and reliable. In this blog post, we discussed the prerequisites for setting up AWS IoT Greengrass Fleet Provisioning, including creating policies, obtaining claim certificates, and configuring the Yocto image build. We also provided a step-by-step guide to building a Yocto image for Raspberry Pi with the Greengrass Fleet Provisioning configuration. With AWS IoT Greengrass Fleet Provisioning, managing a large fleet of embedded devices becomes easier, more efficient, and more secure.&lt;/p&gt;

&lt;p&gt;If you have any feedback about this post, or you would like to see more related content, please reach out to me here, or on &lt;a href="https://twitter.com/nenadilic84" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/nenadilic84/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to check out this video that goes over the setup described here: &lt;a href="https://youtu.be/Eeo7GLVr0jw" rel="noopener noreferrer"&gt;https://youtu.be/Eeo7GLVr0jw&lt;/a&gt;&lt;/p&gt;

</description>
      <category>greengrass</category>
      <category>iot</category>
      <category>yocto</category>
    </item>
    <item>
      <title>Automatically Applying Configuration to IoT Devices with AWS IoT and AWS Step Functions - Part 1</title>
      <dc:creator>Alina Dima</dc:creator>
      <pubDate>Thu, 06 Apr 2023 15:59:23 +0000</pubDate>
      <link>https://dev.to/iotbuilders/automatically-applying-configuration-to-iot-devices-with-aws-iot-and-aws-step-functions-part-1-4n13</link>
      <guid>https://dev.to/iotbuilders/automatically-applying-configuration-to-iot-devices-with-aws-iot-and-aws-step-functions-part-1-4n13</guid>
      <description>&lt;h2&gt;
  
  
  Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Assumptions&lt;/li&gt;
&lt;li&gt;Tools and Services&lt;/li&gt;
&lt;li&gt;What workflow are we building?&lt;/li&gt;
&lt;li&gt;How it works&lt;/li&gt;
&lt;li&gt;Why this Developer Technique&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;li&gt;Author&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Introduction&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Explicitly defining processes involving IoT devices and externalizing their management and orchestration is a useful mechanism to break down complexity, centralize state management, ensure bounded context and encapsulation in your system design. It is also a useful technique for modelling workflows that must survive external factors which impact IoT device operation in the field, such as intermittent connectivity, reboots or battery loss, causing temporary offline behavior.&lt;/p&gt;

&lt;p&gt;In this blog post series, we will look at a simple example of modeling an IoT device process as a workflow, using primarily &lt;a href="https://aws.amazon.com/iot/" rel="noopener noreferrer"&gt;AWS IoT&lt;/a&gt; and &lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;AWS Step Functions&lt;/a&gt;. Our example is a system where, when a device comes online, you need to get external settings based on the profile of the user the device belongs to and push that configuration to the device. The system that holds the external settings is often a third party system you need to integrate with at API level (HTTPS), based on the API specification and system SLAs. &lt;/p&gt;

&lt;p&gt;A real-world example is home automation. A user sets up a default desired temperature for their apartment, based on a User Settings Profile they created when they purchased the gateway responsible for managing the different sensors, including temperature. &lt;/p&gt;

&lt;h2&gt;
  
  
  Assumptions&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To narrow down this implementation example, here is a list of assumptions: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We refer to the gateway that controls the different sensors as the device. &lt;/li&gt;
&lt;li&gt;The workflow starts every time a device comes online. &lt;/li&gt;
&lt;li&gt;Nothing happens when devices disconnect. The profile is checked only upon reconnects. &lt;/li&gt;
&lt;li&gt;AWS IoT Core interacts only with the gateway, and not with individual sensors. The gateway reports back to confirm the settings have been correctly configured by the respective sensors. &lt;/li&gt;
&lt;li&gt;We will simulate the device behavior using an MQTT client application built with the &lt;a href="https://github.com/mqttjs/MQTT.js" rel="noopener noreferrer"&gt;MQTT.js open-source library&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Note that: &lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is a sample implementation to get you started. It is not meant to be lifted and shifted for a production environment. It can definitely be used as a starting point. &lt;/li&gt;
&lt;li&gt;The focus of this solution is on development techniques, not on the type of device or the type of configuration that needs to be applied. Therefore, this solution simulates the device behavior, the config, and the external API. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tools and Services&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;To build the above described scenario, we use the following technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Step Functions for workflow management and execution. The state machine is available in &lt;a href="https://github.com/aws-iot-builder-tools/iot-config-management-sample/blob/main/statemachine/config_management.asl.json" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. The following AWS Step Function features are used in the sample project:

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-task-state.html" rel="noopener noreferrer"&gt;Tasks&lt;/a&gt; - to break down each unit of work.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/amazon-states-language-choice-state.html" rel="noopener noreferrer"&gt;Choice state&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/supported-services-awssdk.html#use-awssdk-integ" rel="noopener noreferrer"&gt;AWS SDK service integrations&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html#connect-wait-token" rel="noopener noreferrer"&gt;Wait for a Callback with the Task Token&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;AWS IoT Core and AWS IoT Rules Engine for the communication with the IoT Device over MQTT and routing of messages from the device and automatically triggering actions based on message content. &lt;/li&gt;

&lt;li&gt;Amazon DynamoDB for storing of Task Tokens needed for the callback service integration pattern of AWS Step Functions. &lt;/li&gt;

&lt;li&gt;AWS Lambda for publishing messages to the device. AWS Lambda is also used for processing responses to temperature set requests from the IoT Device, as well as simulating the Third Party system call to retrieve the user default desired temperature. &lt;/li&gt;

&lt;li&gt;The device simulator is built in JavaScript, using the open-source Node.js MQTT.js client library. &lt;/li&gt;

&lt;li&gt;All the AWS resources are created and deployed using AWS SAM. The AWS SAM template is available in &lt;a href="https://github.com/aws-iot-builder-tools/iot-config-management-sample/blob/main/template.yaml" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;You can have a look at the GitHub repository for the entire solution, here: &lt;a href="https://github.com/aws-iot-builder-tools/iot-config-management-sample" rel="noopener noreferrer"&gt;https://github.com/aws-iot-builder-tools/iot-config-management-sample&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What workflow are we building?&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The flow is as follows: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every time the device comes online, a workflow kicks off to retrieve the user profile and determine the desired room temperature. The desired temperature is stored in the user profile, available via an API call to an external system. We simulate this system call using an AWS Lambda function. &lt;/li&gt;
&lt;li&gt;The workflow retrieves, from the IoT thing attributes, the external ID linking the device to the user it belongs to. It then calls the third-party system to retrieve the user profile with the desired temperature value, publishes a message with the configuration setting to the MQTT topic the device is subscribed to, and enters a callback Task, which pauses while waiting for the event from the device to come back on the configured response MQTT topic. &lt;/li&gt;
&lt;li&gt;Upon receiving and parsing the response, success or failure is determined, the paused Task is terminated on callback with the respective result, and the AWS Step Functions workflow terminates. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Building an automatically triggered workflow contributes to a nice user experience. There is no need for manual intervention to synchronize the IoT device with user specific configuration settings. This is a simple example to introduce the concept. The workflow can get more complex, with more settings and more involved systems. &lt;/p&gt;

&lt;h2&gt;
  
  
  How it works&lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Fig. 1 below shows the interactions between the different components. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev0sf82h2ga6r1450odz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev0sf82h2ga6r1450odz.png" alt="How it Works"&gt;&lt;/a&gt; Fig. 1 - &lt;em&gt;How it works diagram&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The interactions can be described as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The device comes online and an MQTT presence event is published on the AWS topic &lt;code&gt;$aws/events/presence/connected/+&lt;/code&gt;. (&lt;strong&gt;1&lt;/strong&gt; in Fig. 1 ) &lt;/li&gt;
&lt;li&gt;An AWS IoT Rule is configured on the above topic, with the Rule Action being an AWS Step Function State Machine execution trigger. The Rule SQL looks as follows: &lt;code&gt;SELECT *, topic(5) AS thingName FROM $aws/events/presence/connected/+&lt;/code&gt;
(&lt;strong&gt;2&lt;/strong&gt; in Fig. 1 )&lt;/li&gt;
&lt;li&gt;On MQTT connect presence event, the Rule will pick up the event and pass it as input to the state machine, which will be started on Rule execution (&lt;strong&gt;3&lt;/strong&gt; in Fig. 1 ). The workflow steps will execute as follows: 

&lt;ul&gt;
&lt;li&gt;Firstly, the &lt;strong&gt;IoT DescribeThing SDK call&lt;/strong&gt; will be made to retrieve the thing from the Device Registry. The external ID thing attribute will be retrieved and passed on to the next step. Note that the &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/supported-services-awssdk.html#use-awssdk-integ" rel="noopener noreferrer"&gt;AWS Step Functions SDK integration&lt;/a&gt; is used at this step.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3Party_GetUserProfileForDevice&lt;/strong&gt; is executed as a next step. This is an AWS Lambda function which simulates a call to a third party system. 3Party_GetUserProfileForDevice returns a randomly generated desired temperature value for an external ID and thing name. (&lt;strong&gt;4&lt;/strong&gt; in the Fig. 1 ).&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;SetConfigurationOnDevice&lt;/strong&gt; AWS Lambda function uses the IoTDataPlaneClient to publish a message with the desired temperature value to the IoT Device on the MQTT request topic (&lt;strong&gt;5&lt;/strong&gt; in Fig. 1 ), and then enters a “Wait for Callback” Task, waiting for confirmation of the configuration setting from the device.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;“Wait for Callback” Task&lt;/strong&gt; integrates an Amazon DynamoDB PutItem SDK call. A task token is generated by AWS Step Functions and sent into the Task as input. We store the token so that we can identify and correlate which token corresponds to the request the device sends the response for. The token is stored in an Amazon DynamoDB table, together with the name of the operation (in this case &lt;code&gt;SET_TEMP&lt;/code&gt;), a unique operation ID (passed as input from the Rules Engine MQTT event session ID), the device ID (thing name), and a status value of &lt;code&gt;ACTIVE&lt;/code&gt;. The DynamoDB table configuration is shown below:
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  CMTaskTokens:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName:  CMTaskTokens
      AttributeDefinitions:
        - AttributeName: operationId
          AttributeType: S
        - AttributeName: token
          AttributeType: S

      KeySchema:
        - AttributeName: operationId
          KeyType: HASH
        - AttributeName: token
          KeyType: RANGE
      BillingMode: PAY_PER_REQUEST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;The device receives the request (&lt;strong&gt;6&lt;/strong&gt; in Fig. 1 ), attempts the operation, and sends an event confirming the result on the MQTT response topic (&lt;strong&gt;7&lt;/strong&gt; in Fig. 1 ). An IoT Rule picks up the incoming event and invokes an AWS Lambda function (&lt;strong&gt;8&lt;/strong&gt; in Fig. 1 ) which: 

&lt;ul&gt;
&lt;li&gt;Parses the response and establishes success or failure.&lt;/li&gt;
&lt;li&gt;Retrieves the token entry based on the operation ID from the database table and uses it to terminate the AWS Step Functions &lt;strong&gt;WaitForConfirmationEventFromDevice&lt;/strong&gt; state with either success or failure (&lt;strong&gt;9&lt;/strong&gt; in Fig. 1 ).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The result from the evaluation of the MQTT response event from the IoT device is passed further into a Choice Task, which determines state machine termination with either success or failure.&lt;/li&gt;
&lt;/ul&gt;
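
&lt;p&gt;For reference, the rule that starts the state machine can be expressed in the SAM template roughly as follows. This is an illustrative sketch: the resource and role names are assumptions, and the authoritative definition lives in the repository's &lt;code&gt;template.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

```yaml
# Illustrative sketch only: an AWS::IoT::TopicRule that triggers the
# ConfigManagement state machine on each MQTT connect presence event.
# "ConnectedPresenceRule" and "RuleToStepFunctionsRole" are assumed names.
ConnectedPresenceRule:
  Type: AWS::IoT::TopicRule
  Properties:
    TopicRulePayload:
      AwsIotSqlVersion: '2016-03-23'
      Sql: "SELECT *, topic(5) AS thingName FROM '$aws/events/presence/connected/+'"
      Actions:
        - StepFunctions:
            StateMachineName: ConfigManagement
            RoleArn: !GetAtt RuleToStepFunctionsRole.Arn
```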
&lt;h2&gt;
  
  
  How to Deploy and Run this in your AWS Account
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Pre-requisites:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Clone the GitHub repo: &lt;a href="https://github.com/aws-iot-builder-tools/iot-config-management-sample" rel="noopener noreferrer"&gt;iot-config-management-sample&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the root directory, &lt;code&gt;iot-config-management-sample&lt;/code&gt;, run the following commands:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      sam build 
      sam deploy --guided 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Start the device simulator: 

&lt;ul&gt;
&lt;li&gt;Add and validate your configuration in &lt;code&gt;iot-config-management-sample/device/config.js&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const config =
{
    iotEndpoint: '&amp;lt;YOUR IoT Endpoint&amp;gt;',
    clientId: '&amp;lt;YOUR Thing Name&amp;gt;',
    policyName: '&amp;lt;YOUR IoT Policy&amp;gt;',
    verbosity: 'warn',
    region: '&amp;lt;YOUR AWS Region&amp;gt;',
    shouldCreateThingAndIdentity: &amp;lt;true or false&amp;gt; // if true, the simulator will create the AWS IoT Thing and unique identity. Certificate and Key will be stored in certs/
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If your &lt;code&gt;shouldCreateThingAndIdentity&lt;/code&gt; flag is set to &lt;code&gt;false&lt;/code&gt;, you need to make sure the IoT thing, certificate and key have already been created, and store the certificate and key in the certs folder prior to running the MQTT client. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start the simulator: &lt;code&gt;node simulator.js&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If the simulator is successfully started, you should already see that an AWS Step Functions workflow was triggered automatically, once your device successfully connected to AWS IoT. &lt;u&gt;&lt;em&gt;Note&lt;/em&gt;&lt;/u&gt; that the simulator implements a delay of &lt;code&gt;10 seconds&lt;/code&gt; between receiving the request and sending a successful response.&lt;/li&gt;
&lt;li&gt;Verify that the workflow was successfully triggered: 

&lt;ul&gt;
&lt;li&gt;Log into the AWS Console, navigate to AWS Step Functions, find the &lt;code&gt;ConfigManagement&lt;/code&gt; State Machine, and explore the execution, as shown in the 2 images below.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9582tr5hotatp3wmbks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9582tr5hotatp3wmbks.png" alt="Waiting for MQTT response from the device"&gt;&lt;/a&gt; Fig. 2 - &lt;em&gt;Waiting for MQTT response from the device&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwz5c08ifb0idru7bgcc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwz5c08ifb0idru7bgcc.png" alt="Successful workflow execution"&gt;&lt;/a&gt;Fig 3- &lt;em&gt;Successful workflow execution&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why this Developer Technique? &lt;a&gt;&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The advantages of modeling the workflow using AWS Step Functions are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The workflow is explicitly defined and exists outside of various system interactions or the IoT device. &lt;/li&gt;
&lt;li&gt;Flexibility in configuring retries, error handling and timeouts at each step:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DescribeThing&lt;/strong&gt; can run into TPS limits caused by too many concurrent calls to AWS IoT. You can configure your state machine to retry this step, with an exponential back-off strategy. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Calls to the third-party systems&lt;/strong&gt; can fail with various errors, some of which are worth retrying. Exception handling and retries can be configured at the task level.&lt;/li&gt;
&lt;li&gt;If the &lt;strong&gt;device goes offline temporarily&lt;/strong&gt; during the workflow execution for whatever reason, you can configure a reasonable &lt;a href="https://docs.aws.amazon.com/step-functions/latest/apireference/API_SendTaskHeartbeat.html" rel="noopener noreferrer"&gt;Heartbeat&lt;/a&gt; on the Callback Task. The workflow then pauses until either a message from the device arrives or the timeout is reached. In this way, workflows can exist in the cloud outside of whatever is going on with the device, for as long as needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;AWS Step Functions &lt;strong&gt;centralizes state management&lt;/strong&gt; for each step in the workflow, as well as for the workflow as a whole. Tracing, logging and metrics are available at the execution level and at the task level. These observability features, as well as the integration with AWS X-Ray and Amazon CloudWatch, will be covered in Part 2 of this blog series. &lt;/li&gt;
&lt;/ul&gt;
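&lt;p&gt;To illustrate the retry and heartbeat configuration described above, here is a sketch of two states in Amazon States Language. The state names, error name, parameter values and Lambda function name are illustrative, not taken from the actual state machine:&lt;/p&gt;

```json
{
  "DescribeThing": {
    "Type": "Task",
    "Resource": "arn:aws:states:::aws-sdk:iot:describeThing",
    "Parameters": { "ThingName.$": "$.thingName" },
    "Retry": [
      {
        "ErrorEquals": ["Iot.ThrottlingException"],
        "IntervalSeconds": 2,
        "MaxAttempts": 5,
        "BackoffRate": 2.0
      }
    ],
    "Next": "SendConfigToDevice"
  },
  "SendConfigToDevice": {
    "Type": "Task",
    "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
    "HeartbeatSeconds": 600,
    "Parameters": {
      "FunctionName": "SendConfigOverMqtt",
      "Payload": { "taskToken.$": "$$.Task.Token" }
    },
    "End": true
  }
}
```

The first state retries throttling errors with exponential backoff; the second is a callback task that pauses until the device responds with the task token, or the heartbeat timeout fires.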
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the first part of this blog series, we looked at modelling and building the process of automatically applying configuration to an IoT device, based on default settings in a user profile, using AWS Step Functions, AWS IoT, Amazon DynamoDB, AWS Lambda and the open-source MQTT.js client library. This is a developer technique used to reduce complexity in workflows involving IoT devices. To understand more about the implementation, have a look at the code in GitHub. &lt;/p&gt;

&lt;p&gt;For a more complex example of an IoT Workflow, you can have a look at the firmware upgrade (OTA) flow built using AWS IoT and AWS Step Functions in the &lt;a href="https://github.com/aws-iot-builder-tools/iot-workflow-management-and-execution" rel="noopener noreferrer"&gt;iot-workflow-management-and-execution GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are curious to learn more about IoT workflow orchestration, watch our IoT Builders YouTube episodes: &lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/JcTd-hx90PY"&gt;
&lt;/iframe&gt;
&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/3H4I09ZsEws"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;

&lt;p&gt;Stay tuned for the next blog of the series, where we explore in detail observability aspects such as logging, tracing and metrics for this workflow. &lt;/p&gt;

&lt;p&gt;If you have any feedback about this post, or you would like to see more related content, please reach out to me here, or on &lt;a href="https://twitter.com/fay_ette" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/alinadima/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. &lt;/p&gt;
&lt;h2&gt;
  
  
  Author
&lt;/h2&gt;


&lt;div class="ltag__user ltag__user__id__934452"&gt;
    &lt;a href="/fay_ette" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F934452%2Fc6e04565-b2d8-4af5-9f35-32989d5e7b88.jpg" alt="fay_ette image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/fay_ette"&gt;Alina Dima&lt;/a&gt;
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
&lt;a class="ltag__user__link" href="/fay_ette"&gt;An engineer for about 20 years, I love solving real-world problems with code. I simplify complex problems to help developer communities build better and faster with AWS IoT. &lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



</description>
      <category>awsiot</category>
      <category>awsstepfunctions</category>
      <category>mqttjs</category>
      <category>orchestration</category>
    </item>
    <item>
      <title>Orchestrating Application Workloads in Distributed Embedded Systems: Writing and Scaling a Pub Application - Part 2</title>
      <dc:creator>Nenad Ilic</dc:creator>
      <pubDate>Thu, 30 Mar 2023 19:23:24 +0000</pubDate>
      <link>https://dev.to/iotbuilders/orchestrating-application-workloads-in-distributed-embedded-systems-writing-and-scaling-a-pub-application-part-2-4je8</link>
      <guid>https://dev.to/iotbuilders/orchestrating-application-workloads-in-distributed-embedded-systems-writing-and-scaling-a-pub-application-part-2-4je8</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;This blog covers the second part of &lt;a href="https://dev.to/iotbuilders/orchestrating-application-workloads-in-distributed-embedded-systems-setting-up-a-nomad-cluster-with-aws-iot-greengrass-part-1-1bdi"&gt;Orchestrating Application Workloads in Distributed Embedded Systems&lt;/a&gt;. We will go over how to expose Greengrass IPC in a Nomad cluster and have containerized applications publishing metrics to AWS IoT Core using the Greengrass IPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwgdkooi1zamyrmp3r7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwgdkooi1zamyrmp3r7r.png" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;It is essential to follow the first part of the blog, which covers bootstrapping devices with AWS IoT Greengrass and HashiCorp Nomad. Once that is done, we can jump into the application part and the required configuration. As a reminder, the source code is located here: &lt;a href="https://github.com/aws-iot-builder-tools/greengrass-nomad-demo" rel="noopener noreferrer"&gt;https://github.com/aws-iot-builder-tools/greengrass-nomad-demo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Greengrass IPC Proxy
&lt;/h2&gt;

&lt;p&gt;In order for applications to access Greengrass IPC, we need to create a proxy. We will use &lt;code&gt;socat&lt;/code&gt; to forward the &lt;code&gt;ipc.socket&lt;/code&gt; via the network (TCP) and then use &lt;code&gt;socat&lt;/code&gt; on the application side to recreate an &lt;code&gt;ipc.socket&lt;/code&gt; file. The example can be found under &lt;code&gt;ggv2-nomad-setup/ggv2-proxy/ipc/recipe.yaml&lt;/code&gt;. Here is the Nomad job we deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;job "ggv2-server-ipc" {&lt;/span&gt;
    &lt;span class="s"&gt;datacenters = ["dc1"]&lt;/span&gt;
    &lt;span class="s"&gt;type = "system"&lt;/span&gt;
    &lt;span class="s"&gt;group "server-ipc-group" {&lt;/span&gt;

        &lt;span class="s"&gt;constraint {&lt;/span&gt;
            &lt;span class="s"&gt;attribute = "\${meta.greengrass_ipc}"&lt;/span&gt;
            &lt;span class="s"&gt;operator  = "="&lt;/span&gt;
            &lt;span class="s"&gt;value     = "server"&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;

        &lt;span class="s"&gt;network {&lt;/span&gt;
            &lt;span class="s"&gt;port "ipc_socat" {&lt;/span&gt;
                &lt;span class="s"&gt;static = &lt;/span&gt;&lt;span class="m"&gt;3307&lt;/span&gt;
            &lt;span class="err"&gt;}&lt;/span&gt;
        &lt;span class="err"&gt;}&lt;/span&gt;
        &lt;span class="s"&gt;service {&lt;/span&gt;
            &lt;span class="s"&gt;name = "ggv2-server-ipc"&lt;/span&gt;
            &lt;span class="s"&gt;port = "ipc_socat"&lt;/span&gt;
            &lt;span class="s"&gt;provider = "nomad"&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;

        &lt;span class="s"&gt;task "server-ipc-task" {&lt;/span&gt;
            &lt;span class="s"&gt;driver = "raw_exec"&lt;/span&gt;
            &lt;span class="s"&gt;config {&lt;/span&gt;
                &lt;span class="s"&gt;command = "socat"&lt;/span&gt;
                &lt;span class="s"&gt;args = [&lt;/span&gt;
                    &lt;span class="s"&gt;"TCP-LISTEN:3307,fork,nonblock",&lt;/span&gt;
                    &lt;span class="s"&gt;"UNIX-CONNECT:$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT,nonblock"&lt;/span&gt;
                &lt;span class="s"&gt;]&lt;/span&gt;
            &lt;span class="s"&gt;}&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This job will use the &lt;code&gt;AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT&lt;/code&gt; path provided by the Greengrass component deployment and run a &lt;code&gt;socat&lt;/code&gt; command that connects to the defined socket and exposes it over TCP on reserved port 3307. Note that the deployment of this job has a constraint: it will only target devices tagged as &lt;code&gt;greengrass_ipc=server&lt;/code&gt;, as it is intended to run only on the Nomad client where Greengrass is running.&lt;/p&gt;

&lt;p&gt;To deploy this to our Greengrass device, we will use the same method as in the previous blog post, which looks like this:&lt;/p&gt;

&lt;p&gt;Start by building and publishing the component with &lt;code&gt;gdk build&lt;/code&gt; and &lt;code&gt;gdk publish&lt;/code&gt;, making sure you are in the &lt;code&gt;ggv2-nomad-setup/ggv2-proxy/ipc/&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Additionally, to deploy this to the targets, we will need to add this to a &lt;code&gt;deployment.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        "ggv2.nomad.proxy.ipc": {
            "componentVersion": "1.0.0",
            "runWith": {}
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;making sure the component name and version match those published by GDK.&lt;/p&gt;
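&lt;p&gt;For reference, the fragment above sits under the &lt;code&gt;components&lt;/code&gt; key of the file passed to the CLI; a minimal &lt;code&gt;deployment.json&lt;/code&gt; skeleton might look like this (the target ARN below is illustrative):&lt;/p&gt;

```json
{
  "targetArn": "arn:aws:iot:eu-west-1:123456789012:thinggroup/NomadGreengrassGroup",
  "components": {
    "ggv2.nomad.proxy.ipc": {
      "componentVersion": "1.0.0",
      "runWith": {}
    }
  }
}
```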

&lt;p&gt;After that, executing the command below will deploy it to our target:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws greengrassv2 create-deployment \
    --cli-input-json file://deployment.json\
    --region ${AWS_REGION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the command executes successfully, we will be ready to move forward with our application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Overview
&lt;/h2&gt;

&lt;p&gt;We will have a simple application, written in Python, that collects used-memory and CPU information and publishes it to AWS IoT Core using the Greengrass IPC. The topic here is constructed with &lt;code&gt;NOMAD_SHORT_ALLOC_ID&lt;/code&gt; as a prefix, followed by &lt;code&gt;/iot/telemetry&lt;/code&gt;. We will use this later, once we scale the application across the cluster and start receiving messages on multiple MQTT topics.&lt;/p&gt;

&lt;p&gt;Here is the Python code for the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import time
import os

import awsiot.greengrasscoreipc
import awsiot.greengrasscoreipc.model as model


NOMAD_SHORT_ALLOC_ID = os.getenv('NOMAD_SHORT_ALLOC_ID')

def get_used_mem():
    with open('/proc/meminfo', 'r') as f:
        for line in f:
            if line.startswith('MemTotal:'):
                total_mem = int(line.split()[1]) * 1024  # convert to bytes
            elif line.startswith('MemAvailable:'):
                available_mem = int(line.split()[1]) * 1024  # convert to bytes
                break

    return total_mem - available_mem

def get_cpu_usage():
    with open('/proc/stat', 'r') as f:
        line = f.readline()
        cpu_time = sum(map(int, line.split()[1:]))
        idle_time = int(line.split()[4])

    return (cpu_time - idle_time) / cpu_time

if __name__ == '__main__':
    ipc_client = awsiot.greengrasscoreipc.connect()

    while True:
        telemetry_data = {
            "timestamp": int(round(time.time() * 1000)),
            "used_memory": get_used_mem(),
            "cpu_usage": get_cpu_usage()
        }

        op = ipc_client.new_publish_to_iot_core()
        op.activate(model.PublishToIoTCoreRequest(
            topic_name=f"{NOMAD_SHORT_ALLOC_ID}/iot/telemetry",
            qos=model.QOS.AT_LEAST_ONCE,
            payload=json.dumps(telemetry_data).encode(),
        ))
        try:
            result = op.get_response().result(timeout=5.0)
            print("successfully published message:", result)
        except Exception as e:
            print("failed to publish message:", e)

        time.sleep(5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application can be found under &lt;code&gt;examples/nomad/nomad-docker-pub/app.py&lt;/code&gt;. On top of this, we will use a &lt;code&gt;Dockerfile&lt;/code&gt; to containerize it. In order for this to work with GDK, we will use &lt;code&gt;build_system: "custom"&lt;/code&gt; and specify the script for building and publishing the image to ECR:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "component": {
    "nomad.docker.pub": {
      "author": "Nenad Ilic",
      "version": "NEXT_PATCH",
      "build": {
        "build_system": "custom",
        "custom_build_command": [
          "./build.sh"
         ]
      },
      "publish": {
        "bucket": "greengrass-component-artifacts",
        "region": "eu-west-1"
      }
    }
  },
  "gdk_version": "1.1.0"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;code&gt;build.sh&lt;/code&gt; will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
set -e
AWS_ACCOUNT_ID=$(aws sts get-caller-identity |  jq -r '.Account')
AWS_REGION=$(jq -r '.component | to_entries[0] | .value.publish.region' gdk-config.json)
COMPONENT_NAME=$(jq -r '.component | keys | .[0]' gdk-config.json)
COMPONENT_AUTHOR=$(jq -r '.component | to_entries[0] | .value.author' gdk-config.json)
COMPONENT_NAME_DIR=$(echo $COMPONENT_NAME | tr '.' '-')

rm -rf greengrass-build
mkdir -p greengrass-build/artifacts/$COMPONENT_NAME/NEXT_PATCH
mkdir -p greengrass-build/recipes
cp recipe.yaml greengrass-build/recipes/recipe.yaml
sed -i "s/{COMPONENT_NAME}/$COMPONENT_NAME/" greengrass-build/recipes/recipe.yaml
sed -i "s/{COMPONENT_AUTHOR}/$COMPONENT_AUTHOR/" greengrass-build/recipes/recipe.yaml
sed -i "s/{AWS_ACCOUNT_ID}/$AWS_ACCOUNT_ID/" greengrass-build/recipes/recipe.yaml
sed -i "s/{AWS_REGION}/$AWS_REGION/" greengrass-build/recipes/recipe.yaml
sed -i "s/{COMPONENT_NAME_DIR}/$COMPONENT_NAME_DIR/" greengrass-build/recipes/recipe.yaml

docker build -t $COMPONENT_NAME_DIR .
docker tag $COMPONENT_NAME_DIR:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$COMPONENT_NAME_DIR:latest

if aws ecr describe-repositories --region $AWS_REGION --repository-names $COMPONENT_NAME_DIR &amp;gt; /dev/null 2&amp;gt;&amp;amp;1
then
    echo "Repository $COMPONENT_NAME_DIR already exists."
else
    # Create the repository if it does not exist
    aws ecr create-repository --region $AWS_REGION --repository-name $COMPONENT_NAME_DIR
    echo "Repository $COMPONENT_NAME_DIR created."
fi
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$COMPONENT_NAME_DIR:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script assumes the AWS CLI is installed and reads all the necessary configuration from &lt;code&gt;gdk-config.json&lt;/code&gt;. It creates the appropriate recipe, builds the Docker image, logs in to ECR, and pushes the image, referencing the component name set in &lt;code&gt;gdk-config.json&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally, the Nomad job for deploying the application will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;job "nomad-docker-pub-example" {
    datacenters = ["dc1"]
    type = "service"
    group "pub-example-group" {
        count = 1
        constraint {
            attribute = "\${meta.greengrass_ipc}"
            operator  = "="
            value     = "client"
        }
        task "pub-example-task" {
            driver = "docker"
            config {
                image = "{AWS_ACCOUNT_ID}.dkr.ecr.{AWS_REGION}.amazonaws.com/{COMPONENT_NAME_DIR}:latest"
                command = "/bin/bash"
                args = ["-c", "socat UNIX-LISTEN:\$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT,fork,nonblock TCP-CONNECT:\$GGV2_SERVER_IPC_ADDRESS,nonblock &amp;amp; python3 -u /pyfiles/app.py "]
            }
            env {
                AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT = "/tmp/ipc.socket"
                SVCUID="$SVCUID"
            }
            template {
                data = &amp;lt;&amp;lt;EOF
# Get all services and add them to env variables with their names
{{ range nomadServices }}
    {{- range nomadService .Name }}
    {{ .Name | toUpper | replaceAll "-" "_" }}_ADDRESS={{ .Address}}:{{ .Port }}{{- end }}
{{ end -}}
EOF
                destination = "local/env"
                env = true
            }
        }
    }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Constraint - 
We start with the constraint that determines where this application should be deployed. In this scenario, it targets only Nomad clients where &lt;code&gt;greengrass_ipc=client&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Task - 
Next, we have our task with the Docker driver. Here, we pull the image from ECR; the variables &lt;code&gt;AWS_ACCOUNT_ID&lt;/code&gt;, &lt;code&gt;AWS_REGION&lt;/code&gt;, and &lt;code&gt;COMPONENT_NAME_DIR&lt;/code&gt; will be replaced by the build script with the appropriate values. Finally, we come to our &lt;code&gt;command&lt;/code&gt; and &lt;code&gt;args&lt;/code&gt;. These values override what is already defined in the &lt;code&gt;Dockerfile&lt;/code&gt;. In this scenario, we first create the &lt;code&gt;ipc.socket&lt;/code&gt; required by the application using &lt;code&gt;socat&lt;/code&gt;. The &lt;code&gt;SVCUID&lt;/code&gt; will then be provided by the Greengrass component at the time the job runs, and is passed as an environment variable inside the Docker container. &lt;/li&gt;
&lt;li&gt;Template - 
After that, we have a template section that we use to obtain the IP address of the &lt;code&gt;ggv2-server-ipc&lt;/code&gt; service we created earlier. It lists all the services and exports each address as an environment variable, converting the service name to uppercase and appending &lt;code&gt;_ADDRESS&lt;/code&gt; to it. This provides the &lt;code&gt;GGV2_SERVER_IPC_ADDRESS&lt;/code&gt; env variable for our &lt;code&gt;socat&lt;/code&gt; command, which then looks like this:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;socat UNIX-LISTEN:$AWS_GG_NUCLEUS_DOMAIN_SOCKET_FILEPATH_FOR_COMPONENT,fork,nonblock TCP-CONNECT:$GGV2_SERVER_IPC_ADDRESS,nonblock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which provides the &lt;code&gt;ipc.socket&lt;/code&gt; before running the app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -u /pyfiles/app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
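&lt;p&gt;The service-name transformation the template performs can be sketched in a few lines of Python (the address and port below are made-up example values):&lt;/p&gt;

```python
# Mirror of the Nomad template logic: upper-case the service name,
# replace dashes with underscores, append "_ADDRESS", and pair the
# resulting variable name with the service's address and port.
def service_env_var(name, address, port):
    key = name.upper().replace("-", "_") + "_ADDRESS"
    return key, f"{address}:{port}"

key, value = service_env_var("ggv2-server-ipc", "10.0.0.12", 3307)
print(key, value)  # GGV2_SERVER_IPC_ADDRESS 10.0.0.12:3307
```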



&lt;p&gt;Once we have this set, we can build and publish the component by running &lt;code&gt;gdk build&lt;/code&gt; and &lt;code&gt;gdk publish&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Additionally, in order to deploy this to the targets, we will need to add this to a &lt;code&gt;deployment.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        "nomad.docker.pub": {
            "componentVersion": "1.0.0",
            "runWith": {}
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;making sure the component name and version match those published by GDK.&lt;br&gt;
After that, executing the command below will deploy it to our target:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws greengrassv2 create-deployment \
    --cli-input-json file://deployment.json\
    --region ${AWS_REGION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will be ready to scale our application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling the Application
&lt;/h2&gt;

&lt;p&gt;By now, we should have our application running and publishing data from a single client. However, if we want to spread this application across the cluster (in this scenario, to a second client) and have each instance report memory and CPU usage, we can do so by simply changing &lt;code&gt;count = 1&lt;/code&gt; to &lt;code&gt;count = 2&lt;/code&gt; in our job file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--- a/examples/nomad/nomad-docker-pub/recipe.yaml
+++ b/examples/nomad/nomad-docker-pub/recipe.yaml
@@ -31,7 +31,7 @@ Manifests:
             type = "service"

             group "pub-example-group" {
-              count = 1
+              count = 2
               constraint {
                 attribute = "\${meta.greengrass_ipc}"
                 operator  = "="
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can use the same method to redeploy.&lt;/p&gt;

&lt;p&gt;Now, if we go to the AWS Console and open AWS IoT → MQTT test client, we can subscribe to the &lt;code&gt;&amp;lt;NOMAD_SHORT_ALLOC_ID&amp;gt;/iot/telemetry&lt;/code&gt; topics and should see messages coming in. To get each ID, we can simply run the following command on the device where the Nomad server is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nomad status nomad-docker-pub-example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will find the IDs in the Allocations section, which looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Allocations
ID        Node ID   Task Group         Version  Desired  Status  Created     Modified
5dce9e1a  6ad1a15c  pub-example-group  10       run      running 1m28s ago   32s ago
8c517389  53c96910  pub-example-group  10       run      running 1m28s ago   49s ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will then allow us to construct those MQTT topics and start seeing the messages coming from those two instances of our application:&lt;/p&gt;
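&lt;p&gt;For example, using the allocation IDs from the output above, the topics to subscribe to can be derived like this:&lt;/p&gt;

```python
# Build the per-allocation telemetry topic names from the short
# allocation IDs reported by `nomad status`.
alloc_ids = ["5dce9e1a", "8c517389"]
topics = [f"{alloc_id}/iot/telemetry" for alloc_id in alloc_ids]
print(topics)  # ['5dce9e1a/iot/telemetry', '8c517389/iot/telemetry']
```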

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zyv44xltvakvcpbxe1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1zyv44xltvakvcpbxe1o.png" alt="MQTT Client"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next part, we will take a look at how to access AWS services from the device using the Token Exchange Service (TES).&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this blog post, we covered how to expose the Greengrass IPC in a Nomad cluster, allowing a containerized application to publish metrics to AWS IoT Core using Greengrass IPC. We demonstrated how to create a proxy using &lt;code&gt;socat&lt;/code&gt; to forward the &lt;code&gt;ipc.socket&lt;/code&gt; via the network (TCP) and how to set up an application that reports memory and CPU usage. We also showed how to scale the application across multiple clients in the cluster.&lt;/p&gt;

&lt;p&gt;By using AWS IoT Greengrass and HashiCorp Nomad, you can effectively manage, scale and monitor distributed embedded systems, making it easier to deploy and maintain complex IoT applications.&lt;/p&gt;

&lt;p&gt;If you have any feedback about this post, or you would like to see more related content, please reach out to me here, or on &lt;a href="https://twitter.com/nenadilic84" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/nenadilic84/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
