<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Michael O'Brien</title>
    <description>The latest articles on DEV Community by Michael O'Brien (@embedthis).</description>
    <link>https://dev.to/embedthis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F65538%2F0c9289b6-8297-4c09-afd6-d1a9d91613f0.jpg</url>
      <title>DEV Community: Michael O'Brien</title>
      <link>https://dev.to/embedthis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/embedthis"/>
    <language>en</language>
    <item>
      <title>IoT Security Updates</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Thu, 17 Jul 2025 03:24:55 +0000</pubDate>
      <link>https://dev.to/embedthis/iot-security-updates-3c8e</link>
      <guid>https://dev.to/embedthis/iot-security-updates-3c8e</guid>
      <description>&lt;p&gt;Many companies have demonstrated the value of regularly enhancing product performance through software and firmware updates. Apple is a well-known example, delivering seamless device updates that improve functionality and user experience. But they're not alone—businesses of all sizes are increasingly using remote updates to boost product capabilities and address security concerns.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In fact, it's becoming a legal obligation in many regions to provide security updates throughout a device’s lifetime.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The European Union has introduced the Cyber Resilience Act (CRA), a regulation aimed at enhancing cybersecurity for IoT products. This legislation mandates that manufacturers ensure their products are secure throughout their entire lifecycle, from design to decommissioning. This requires that software updates are provided for the lifetime of the device.&lt;/p&gt;

&lt;p&gt;That said, smoothly updating a fleet of devices—without issues or downtime—can be a real challenge.&lt;/p&gt;

&lt;p&gt;With the Ioto Update Manager, you can create, deploy, manage, and monitor over-the-air updates for your IoT devices, ensuring they remain secure, functional, and up-to-date.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ioto IoT Device Update
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.embedthis.com/doc/builder/software/" rel="noopener noreferrer"&gt;Ioto IoT Update Manager&lt;/a&gt; offers the following capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload device software images for dissemination&lt;/li&gt;
&lt;li&gt;Distribute software images efficiently and reliably using a global CDN&lt;/li&gt;
&lt;li&gt;Use distribution policies to target specific subsets of device populations&lt;/li&gt;
&lt;li&gt;Implement gradual update rollouts &lt;/li&gt;
&lt;li&gt;Throttle update rate to minimize load and risk&lt;/li&gt;
&lt;li&gt;Ensure secure and dependable software delivery through TLS and cryptographic checksums&lt;/li&gt;
&lt;li&gt;Track customer base update progress with detailed reports and metrics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Ioto update manager allows you to selectively distribute updates to any chosen group of devices based on a distribution policy. This enables you to update all your devices or only specific groups as needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Components of Device Update
&lt;/h2&gt;

&lt;p&gt;The Ioto update solution has three major components:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Builder&lt;/td&gt;
&lt;td&gt;Portal to create and manage software updates and distribution policies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ioto Cloud Service&lt;/td&gt;
&lt;td&gt;Service to securely store software updates and distribute via a global CDN&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Device Agents&lt;/td&gt;
&lt;td&gt;Device-resident software to poll, download and apply software updates to the device&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;Builder&lt;/strong&gt; assists in preparing a software update for distribution, which includes uploading the device software image, specifying the version, and setting a distribution policy. It also offers comprehensive monitoring and reporting of your device population and update performance.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Ioto Cloud Service&lt;/strong&gt; hosts the device software images and facilitates communication with devices to deliver the updates to the relevant devices.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Device Agents&lt;/strong&gt; contain the necessary logic to interact with the cloud service, enabling them to download and install new software images as they become available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Device Agent Support
&lt;/h2&gt;

&lt;p&gt;Using the Builder, you can deploy updates to any device, regardless of which device agent you use. The Ioto device agent pre-integrates the software update capability. Other device agents can use the stand-alone &lt;a href="https://github.com/embedthis/updater" rel="noopener noreferrer"&gt;EmbedThis Updater&lt;/a&gt;. The GoAhead and Appweb device agents bundle the Updater code with their release distributions.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;EmbedThis Updater&lt;/strong&gt; is a command-line utility that can request, fetch and apply software updates. It is available in three forms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;C program utility&lt;/li&gt;
&lt;li&gt;C program library&lt;/li&gt;
&lt;li&gt;NodeJS utility&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;updater &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--device&lt;/span&gt; &lt;span class="s2"&gt;"ABCDEF1234"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--host&lt;/span&gt; &lt;span class="s2"&gt;"https://abcdefghij.execute-api.ap-southeast-1.amazonaws.com"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--product&lt;/span&gt; &lt;span class="s2"&gt;"000001234567890AAKW996CZHH"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--token&lt;/span&gt; &lt;span class="s2"&gt;"00000001234567890AABBEGYJB"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--version&lt;/span&gt; &lt;span class="s2"&gt;"1.2.3"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--file&lt;/span&gt; updater.bin &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cmd&lt;/span&gt; ./apply.sh &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;pro &lt;span class="nv"&gt;ports&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;16 &lt;span class="nv"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Regardless of the device agent you use, the underlying update API and Builder Update service are the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Device Updates
&lt;/h2&gt;

&lt;p&gt;To define a device software update, you supply the following parameters to the Builder:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Target Device Product ID&lt;/li&gt;
&lt;li&gt;Software update version number&lt;/li&gt;
&lt;li&gt;Software Update image&lt;/li&gt;
&lt;li&gt;Distribution policy&lt;/li&gt;
&lt;li&gt;Update distribution limits&lt;/li&gt;
&lt;li&gt;Rollout pacing factors&lt;/li&gt;
&lt;li&gt;Distribution Device Cloud&lt;/li&gt;
&lt;li&gt;Update description&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fbuilder%2Fsoftware-edit.avif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fbuilder%2Fsoftware-edit.avif" alt="Software Edit" width="2398" height="1628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Product Selection
&lt;/h2&gt;

&lt;p&gt;A Builder account may manage many devices that have different update policies and usually require different firmware. When defining a software update, you nominate a Builder product definition for which the update applies.&lt;/p&gt;

&lt;p&gt;For each family of devices that require the same software update images, you should create a Builder product definition. When the product definition is created, the Builder also creates a product ID token. This token is included in the device upgrade request to select the appropriate product and software update. The product token is obtained from the &lt;a href="https://admin.embedthis.com/tokens/" rel="noopener noreferrer"&gt;Builder Token List&lt;/a&gt; after creating the product definition.&lt;/p&gt;

&lt;p&gt;The Builder uses the product ID token paired with the update distribution policy to define the subset of devices that are eligible to receive the update. To receive a software update, a device specifies a Builder Product Token that selects the product for which software updates may be published. &lt;/p&gt;

&lt;h2&gt;
  
  
  Device Cloud Selection
&lt;/h2&gt;

&lt;p&gt;Software updates are reliably stored in a device cloud and distributed globally via the AWS CDN to local regions. When defining updates, you can select your device cloud to store and manage the updates. Select the device cloud from the pulldown list.  The update facility is designed to scale and will handle device populations up to and beyond 10,000,000 devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Version
&lt;/h2&gt;

&lt;p&gt;The software update version number is your device's version number. The version numbers must be compatible with the &lt;a href="https://semver.org/" rel="noopener noreferrer"&gt;SemVer 2.0&lt;/a&gt; version specification.&lt;/p&gt;

&lt;p&gt;For Ioto, the current version for your device is defined via the &lt;strong&gt;version&lt;/strong&gt; property in the &lt;strong&gt;ioto.json5&lt;/strong&gt; configuration file. For other device agents, the version is provided in the update API request.&lt;/p&gt;
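&lt;p&gt;For example, the version might be defined in &lt;strong&gt;ioto.json5&lt;/strong&gt; like this (a minimal sketch; the other configuration properties are omitted):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    version: "1.2.3",
    ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;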

&lt;p&gt;The update description can be an informative description for your purposes. It is recommended to describe the purpose and extent of the update. &lt;/p&gt;

&lt;h2&gt;
  
  
  Software Distribution
&lt;/h2&gt;

&lt;p&gt;At regular intervals, and typically once per day, device agents should connect to the Device Cloud for a "checkin" to see if any update has been published. During the checkin, the device agent will submit the Product ID, Device ID and other device-specific information that can be used when evaluating the distribution policy to determine if an update is available and suitable for this device.&lt;/p&gt;

&lt;p&gt;If you are running the Ioto agent, it will automatically perform a checkin according to the schedule defined in the &lt;strong&gt;ioto.json5&lt;/strong&gt; configuration file. If you are using the EmbedThis Update utility, you should schedule that to run regularly using Cron or a similar facility.&lt;/p&gt;
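&lt;p&gt;For example, a Cron entry could check for updates once per day. Because a crontab entry must fit on a single line, it is easiest to invoke a small wrapper script (a hypothetical &lt;strong&gt;/usr/local/bin/check-update.sh&lt;/strong&gt; containing your updater command with its &lt;strong&gt;--host&lt;/strong&gt;, &lt;strong&gt;--product&lt;/strong&gt; and &lt;strong&gt;--token&lt;/strong&gt; values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check for a published update daily at 3:00 am
0 3 * * * /usr/local/bin/check-update.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;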

&lt;p&gt;The device cloud service evaluates the distribution policy expression when the device checks in. It retrieves the most recent updates, examines them in reverse version order, and selects the first update that matches the device. If the policy matches, the URL for the update image is returned to the device agent. &lt;/p&gt;

&lt;p&gt;The device agent will then download the update image and verify the integrity of the update image.  If verified, an update script is invoked to apply the update. &lt;/p&gt;

&lt;p&gt;If you are running the Ioto agent, the &lt;strong&gt;"scripts/update"&lt;/strong&gt; script will be invoked to apply the update. You should customize this script to suit your device. If you are running Ioto on an RTOS, without scripting, you will need to watch and react to the Ioto event &lt;strong&gt;device:update&lt;/strong&gt; using the &lt;strong&gt;rWatch&lt;/strong&gt; API.&lt;/p&gt;

&lt;p&gt;If you are running the EmbedThis updater, you should customize the &lt;strong&gt;apply.sh&lt;/strong&gt; script to apply the update to your device.&lt;/p&gt;
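&lt;p&gt;As a rough sketch, an apply script validates its argument, writes the image to the inactive partition and activates it for the next boot. The function name, the partition device and the dd step below are illustrative assumptions to adapt to your hardware:&lt;br&gt;
&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical apply.sh sketch: adapt the partition device to your hardware.
# The updater invokes this script with the path to the verified update image.
apply_update() {
    image="$1"
    if [ ! -f "$image" ]; then
        echo "apply: missing update image: $image"
        return 1
    fi
    # Device-specific step (assumed): write the image to the inactive
    # partition, then mark it active for the next boot. For example:
    #   dd if="$image" of=/dev/mmcblk0p3 bs=1M conv=fsync
    echo "applied $image"
}

if [ $# -gt 0 ]; then
    apply_update "$@"
fi
```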

&lt;h2&gt;
  
  
  Software Distribution Policy
&lt;/h2&gt;

&lt;p&gt;The update distribution policy enables you to target specific relevant subsets of your device populations.  &lt;/p&gt;

&lt;p&gt;The distribution policy is a simple JavaScript-like expression that is evaluated by the device cloud at runtime to determine if the update is relevant for a specific device. If you leave the policy blank, then all devices with a version that is earlier than the software update version will be updated.&lt;/p&gt;

&lt;p&gt;Here is a sample policy expression:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;major &amp;gt;= 1 &amp;amp;&amp;amp; minor &amp;gt;= 1 &amp;amp;&amp;amp; patch &amp;gt;= 5 &amp;amp;&amp;amp; memory &amp;gt;= 256 &amp;amp;&amp;amp; ports == 32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The device properties submitted with the update request are made available as expression variables. In this case, the "memory" and "ports" variables are supplied with the update request. If you are using the Ioto device agent, these variables can be defined in the device.json5 file.&lt;/p&gt;

&lt;p&gt;The full device version is accessible as the policy variable "version" and the version of the software update is provided via the "newVersion" variable. The device version string is also split into SemVer components: major, minor and patch.&lt;/p&gt;

&lt;p&gt;The default policy uses the built-in &lt;strong&gt;semver&lt;/strong&gt; function, which compares two version strings. It checks whether the current version is earlier than the new update version using the following expression.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;semver(version, "&amp;lt;", newVersion)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The policy expression language understands the following types: numbers, booleans, string literals, regular expressions and null. Strings are quoted with either single or double quotes.&lt;/p&gt;

&lt;p&gt;Sub-expressions can be grouped with parentheses, and the boolean operators &amp;amp;&amp;amp; and || can combine conditional operands. Regular expressions (delimited by slashes) may be used with the "==" and "!=" operators. The regular expression can appear on either side of the operator.&lt;/p&gt;
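&lt;p&gt;For example, a hypothetical policy could use a regular expression to target only certain models (this assumes the device submits a "model" property with its update request):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model == /pro|max/ &amp;amp;&amp;amp; semver(version, "&amp;lt;", newVersion)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;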

&lt;p&gt;Policy expression evaluation is limited at runtime to 50 expression terms. This protects the device cloud and service against denial-of-service attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Update Rollout Limits
&lt;/h2&gt;

&lt;p&gt;Implementing a gradual update strategy can help minimize load and risk. Updating a large number of devices simultaneously can impose an excessive burden on your service, so staggering the rollout can distribute the load more evenly.&lt;/p&gt;

&lt;p&gt;Despite thorough testing, some updates might still be considered "risky." To minimize this risk, you can update a small subset of your device population first to see if the update causes any issues.&lt;/p&gt;

&lt;p&gt;The update service provides update limits and gradual rollout factors that allow you to control the rate of updates. The following mechanisms are supported:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Max device limit&lt;/strong&gt; and &lt;strong&gt;device percentage&lt;/strong&gt; limits define the maximum number of devices that can be updated. Once either limit is reached, further updates are suspended.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For gradual rollouts, you can define an &lt;strong&gt;update rate&lt;/strong&gt;. This is implemented via a &lt;strong&gt;max updates per period&lt;/strong&gt;, which limits the number of updates to a specified number of updates over a defined time period. For example, you could set a limit of 1000 updates per hour (3600 seconds).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If a device meets the distribution policy and rollout limits, the URL for the update image will be returned to the update agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analytics, Metrics &amp;amp; Reports
&lt;/h2&gt;

&lt;p&gt;The Builder provides extensive analytics to track the progress and performance of updates. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fbuilder%2Fsoftware-metrics.avif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fbuilder%2Fsoftware-metrics.avif" alt="Software Metrics" width="1024" height="895"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Ioto device cloud tracks metrics per product and per product update version.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Dimensions&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;UpdateTotal&lt;/td&gt;
&lt;td&gt;Product, Product/Version&lt;/td&gt;
&lt;td&gt;Total number of devices&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UpdateDeferred&lt;/td&gt;
&lt;td&gt;Product, Product/Version&lt;/td&gt;
&lt;td&gt;Number of updates temporarily deferred due to rollout policies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UpdateStarted&lt;/td&gt;
&lt;td&gt;Product, Product/Version&lt;/td&gt;
&lt;td&gt;Number of updates started&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UpdateSuccess&lt;/td&gt;
&lt;td&gt;Product, Product/Version&lt;/td&gt;
&lt;td&gt;Number of successful updates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;UpdateFailed&lt;/td&gt;
&lt;td&gt;Product, Product/Version&lt;/td&gt;
&lt;td&gt;Number of failed updates&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The Builder Update list also includes metrics for tracking how many devices are using each update:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fbuilder%2Fsoftware-list-metrics.avif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fbuilder%2Fsoftware-list-metrics.avif" alt="Software List" width="1932" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;With the Ioto IoT Device Updater, you can seamlessly update your devices to quickly and reliably address security issues and deliver increased functionality and performance to your users.&lt;/p&gt;



&lt;h2&gt;
  
  
  Want More Now?
&lt;/h2&gt;

&lt;p&gt;To learn more about EmbedThis Ioto, please read:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/embedthis/updater/" rel="noopener noreferrer"&gt;EmbedThis Updater&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;Ioto Web Site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/doc/agent/" rel="noopener noreferrer"&gt;Ioto Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://admin.embedthis.com" rel="noopener noreferrer"&gt;Ioto Agent Download&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/doc/builder/" rel="noopener noreferrer"&gt;Builder Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com" rel="noopener noreferrer"&gt;Embedthis Web Site&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>iot</category>
      <category>embedded</category>
      <category>ai</category>
      <category>cloud</category>
    </item>
    <item>
      <title>IoT AI with Ioto</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Thu, 17 Jul 2025 03:08:46 +0000</pubDate>
      <link>https://dev.to/embedthis/iot-ai-with-ioto-2jb0</link>
      <guid>https://dev.to/embedthis/iot-ai-with-ioto-2jb0</guid>
      <description>&lt;p&gt;Artificial Intelligence (AI) significantly enhances edge devices by enabling more intelligent, autonomous operations. The recent advances in large language models (LLMs) running in the cloud are leading to transformative applications in the IoT space.&lt;/p&gt;

&lt;p&gt;Developers typically select from three principal AI integration patterns: on-device models, cloud-based models, and hybrid models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-device language models&lt;/strong&gt; operate entirely within the local hardware environment. This approach offers data privacy, reduced latency, and consistent operation regardless of network conditions, making it ideal for real-time applications or devices with intermittent connectivity or stringent privacy requirements. However, the complexity and scale of these models are constrained by the limited computational resources of edge devices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud-based language models&lt;/strong&gt; offload computationally intensive processing to cloud servers, enabling the use of robust, large-scale LLMs that surpass the resource capabilities of edge devices. This design provides advanced features, seamless scalability, and simplified updates. Nevertheless, it relies on continuous internet connectivity and may introduce latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid approaches&lt;/strong&gt; combine on-device models with cloud-based models. In this pattern, tasks that are privacy-sensitive or critical are executed locally, while more complex, resource-intensive operations that are not time-sensitive are processed in the cloud. This approach effectively blends the strengths of privacy and responsiveness with the new capabilities offered by the latest generation of cloud-based models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ioto IoT AI
&lt;/h2&gt;

&lt;p&gt;Ioto provides an intuitive AI library that simplifies interactions with cloud-based LLMs, facilitating tasks such as data classification, sensor data interpretation, information extraction, and logical reasoning. This capability is particularly beneficial for applications in predictive maintenance, smart agriculture, healthcare, smart homes, and environmental monitoring. It’s ideal for analyzing non-real-time sensor data using the power of an LLM.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The Ioto AI library can invoke cloud-based LLMs and run local agents that operate with full access to device context.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  OpenAI and Foundation Models
&lt;/h2&gt;

&lt;p&gt;Ioto supports the standard &lt;strong&gt;Chat Completions API&lt;/strong&gt; and also implements the newer &lt;strong&gt;OpenAI Response API&lt;/strong&gt; which is designed to help developers create advanced AI agents and workflows capable of performing cloud-based tasks like web searches, file retrievals, and invoking local agents and tools.&lt;/p&gt;

&lt;p&gt;The standard OpenAI Chat Completions API is supported by most other foundation models. We expect other vendors to follow OpenAI's lead and add support for the Response API to their offerings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Ioto AI
&lt;/h2&gt;

&lt;p&gt;The Ioto distribution includes an &lt;strong&gt;ai&lt;/strong&gt; sample app that demonstrates the AI facilities of Ioto. &lt;/p&gt;

&lt;p&gt;Before building, you need an OpenAI account and an API key. You can get an API key from the &lt;a href="https://platform.openai.com/api-keys" rel="noopener noreferrer"&gt;OpenAI API Keys&lt;/a&gt; page.&lt;/p&gt;

&lt;p&gt;Once you have an API key, you can edit the &lt;strong&gt;apps/ai/config/ioto.json5&lt;/strong&gt; file to define your access key and preferred model. Alternatively, you can provide your OpenAI key via the &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; environment variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ai: {
    enable: true,
    provider: "openai",
    model: "gpt-4o",
    endpoint: "https://api.openai.com/v1",
    key: "sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
services: {
    ai: true,
    ...
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;services.ai&lt;/code&gt; setting controls whether the AI service is compiled, while the &lt;code&gt;ai.enable&lt;/code&gt; setting enables or disables it at runtime.&lt;/p&gt;

&lt;p&gt;If you are using a foundation LLM other than OpenAI, you can define the API endpoint for that service via the &lt;code&gt;endpoint&lt;/code&gt; property.&lt;/p&gt;
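&lt;p&gt;For example, to point Ioto at an OpenAI-compatible service (the endpoint URL, model name and key below are placeholders, not real values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ai: {
    enable: true,
    provider: "openai",
    model: "your-model-name",
    endpoint: "https://api.example.com/v1",
    key: "YOUR_API_KEY"
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;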

&lt;h3&gt;
  
  
  Building Ioto and the AI Sample App
&lt;/h3&gt;

&lt;p&gt;When you build the Ioto Agent, you can select the &lt;strong&gt;ai&lt;/strong&gt; sample app to play with the AI capabilities. This will enable the Ioto AI service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;make &lt;span class="nv"&gt;APP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  AI Sample App
&lt;/h2&gt;

&lt;p&gt;The AI sample app has five web pages that are used to initiate different tests:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;App&lt;/th&gt;
&lt;th&gt;Page&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;th&gt;API Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Chat&lt;/td&gt;
&lt;td&gt;chat.html&lt;/td&gt;
&lt;td&gt;ChatBot&lt;/td&gt;
&lt;td&gt;Use the OpenAI Chat Completions API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chat&lt;/td&gt;
&lt;td&gt;responses.html&lt;/td&gt;
&lt;td&gt;ChatBot&lt;/td&gt;
&lt;td&gt;Use the new OpenAI Response API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chat&lt;/td&gt;
&lt;td&gt;stream.html&lt;/td&gt;
&lt;td&gt;ChatBot&lt;/td&gt;
&lt;td&gt;Use the new OpenAI Response API with streaming&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chat&lt;/td&gt;
&lt;td&gt;realtime.html&lt;/td&gt;
&lt;td&gt;ChatBot&lt;/td&gt;
&lt;td&gt;Use the OpenAI Chat Real-Time API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Patient&lt;/td&gt;
&lt;td&gt;patient.html&lt;/td&gt;
&lt;td&gt;Patient Monitoring&lt;/td&gt;
&lt;td&gt;Use the new OpenAI Response API and invoke sub-agents&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypr76swytj8t7sq1ufp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fypr76swytj8t7sq1ufp5.png" alt="Patient Monitoring" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Patient&lt;/strong&gt; app demonstrates a patient monitor using an AI agentic workflow. The app measures a patient's temperature locally by calling the &lt;code&gt;getTemp()&lt;/code&gt; function. It sends the temperature to the cloud LLM which determines if the patient is in urgent need of medical attention. If so, it responds to instruct the device workflow to call the ambulance by using the local &lt;code&gt;callEmergency()&lt;/code&gt; function. The web page has two buttons to start and stop the monitoring process. This app demonstrates the use of the OpenAI Response API and local agent functions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vunadkezc37gbcl0vhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vunadkezc37gbcl0vhe.png" alt="Patient Monitoring" width="800" height="750"&gt;&lt;/a&gt;&lt;br&gt;
The &lt;strong&gt;Chat&lt;/strong&gt; demo is similar to the consumer ChatGPT website. Each web page is a simple ChatBot that issues requests via the Ioto local web server to the relevant OpenAI API. The requests are relayed to the OpenAI service and the responses are passed back to the web page to display.&lt;/p&gt;

&lt;p&gt;The sample apps register web request action handlers in the &lt;code&gt;aiApp.c&lt;/code&gt; file. These handlers respond to the web requests and in turn issue API calls to the OpenAI service. Responses are then passed back to the web page to display. &lt;/p&gt;

&lt;p&gt;Note: The AI App does not require cloud-based management to be enabled.&lt;/p&gt;
&lt;h2&gt;
  
  
  Code Example
&lt;/h2&gt;

&lt;p&gt;Here is an example calling the Responses API to ask a simple question. This example uses file search (aka &lt;a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/" rel="noopener noreferrer"&gt;RAG&lt;/a&gt;) to augment the pre-trained knowledge of the LLM and a local function to get the local weather temperature.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#include&lt;/span&gt; &lt;span class="cpf"&gt;"ioto.h"&lt;/span&gt;&lt;span class="cp"&gt;
&lt;/span&gt;
&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;example&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;void&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;cchar&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;vectorId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"PUT_YOUR_VECTOR_ID_HERE"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;char&lt;/span&gt;  &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

    &lt;span class="cm"&gt;/*
        SDEF is used to concatenate literal strings into a single string.
        SFMT is used to format strings with variables.
        jsonParse converts the string into a JSON object.
     */&lt;/span&gt;
    &lt;span class="n"&gt;Json&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;jsonParse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SFMT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SDEF&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="nl"&gt;model:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;gpt&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="n"&gt;o&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;mini&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nl"&gt;input:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;What&lt;/span&gt; &lt;span class="n"&gt;is&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;capital&lt;/span&gt; &lt;span class="n"&gt;of&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;moon&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nl"&gt;tools:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
            &lt;span class="nl"&gt;type:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;file_search&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nl"&gt;vector_store_ids:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nl"&gt;type:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;function&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nl"&gt;name:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;getWeatherTemperature&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nl"&gt;description:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;local&lt;/span&gt; &lt;span class="n"&gt;weather&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="n"&gt;vectorId&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;Json&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;openaiResponses&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;agentCallback&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Extract the LLM response text from the json payload&lt;/span&gt;
    &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;jsonGet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"output_text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Response: %s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;jsonFree&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;jsonFree&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;openaiResponses&lt;/code&gt; API takes a JSON object that represents the OpenAI Responses API parameters. The &lt;code&gt;SDEF&lt;/code&gt; macro is a convenience that makes it easier to define JSON literals in C code, and the &lt;code&gt;SFMT&lt;/code&gt; macro expands &lt;code&gt;printf&lt;/code&gt;-style format expressions. The &lt;code&gt;jsonParse&lt;/code&gt; API parses the resulting string and returns an Ioto Json object, which is then passed to the &lt;code&gt;openaiResponses&lt;/code&gt; API.&lt;/p&gt;
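&lt;p&gt;For readers without the Ioto runtime, the same request body can be assembled with standard &lt;code&gt;snprintf&lt;/code&gt;. This is a minimal, illustrative sketch: the field names follow the OpenAI Responses API, but the &lt;code&gt;buildRequest&lt;/code&gt; helper itself is hypothetical and not part of the Ioto API.&lt;/p&gt;

```c
#include "stdio.h"

/* Build the Responses API request body with standard snprintf, for
   environments without the Ioto SDEF/SFMT helpers. Illustrative only:
   buildRequest is a stand-in, not an Ioto or OpenAI function. */
static void buildRequest(char *buf, size_t size, const char *vectorId)
{
    snprintf(buf, size,
        "{\"model\": \"gpt-4o-mini\","
        " \"input\": \"What is the capital of the moon?\","
        " \"tools\": [{\"type\": \"file_search\","
        " \"vector_store_ids\": [\"%s\"]}]}",
        vectorId);
}
```

&lt;p&gt;The resulting string can then be passed to &lt;code&gt;jsonParse&lt;/code&gt; exactly as in the example above.&lt;/p&gt;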

&lt;p&gt;The response returned by openaiResponses is a JSON object that can be queried using the Ioto JSON library &lt;code&gt;jsonGet&lt;/code&gt; API. The &lt;code&gt;output_text&lt;/code&gt; field contains the complete response output text.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;agentCallback&lt;/code&gt; function is triggered when the LLM needs to invoke a local tool. It is defined as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nf"&gt;agentCallback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cchar&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Json&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Json&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;smatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"getWeatherTemperature"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;getTemp&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;sclone&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Unknown function, cannot comply with request."&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
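&lt;p&gt;When an agent exposes more than a couple of tools, a name-to-function table keeps the callback flat. The sketch below reproduces the same dispatch pattern in plain C: &lt;code&gt;strcmp&lt;/code&gt; stands in for Ioto's &lt;code&gt;smatch&lt;/code&gt;, a local &lt;code&gt;clone&lt;/code&gt; helper stands in for &lt;code&gt;sclone&lt;/code&gt;, and the tool implementation is a hypothetical stub.&lt;/p&gt;

```c
#include "stdio.h"
#include "string.h"
#include "stdlib.h"

/* Table-driven variant of the dispatch shown above, in plain C.
   clone() stands in for Ioto's sclone, strcmp for smatch, and the
   getWeatherTemperature implementation is a hypothetical stub. */
typedef char *(*ToolFn)(void);

static char *clone(const char *s)
{
    char *p = malloc(strlen(s) + 1);
    if (p) {
        strcpy(p, s);
    }
    return p;
}

static char *getWeatherTemperature(void)
{
    return clone("72 degrees F");
}

static struct {
    const char *name;
    ToolFn fn;
} tools[] = {
    { "getWeatherTemperature", getWeatherTemperature },
    { 0, 0 },
};

static char *dispatchTool(const char *name)
{
    int i;
    for (i = 0; tools[i].name; i++) {
        if (strcmp(name, tools[i].name) == 0) {
            return tools[i].fn();
        }
    }
    return clone("Unknown function, cannot comply with request.");
}
```

&lt;p&gt;Adding a new tool then only requires a new table entry, rather than another &lt;code&gt;if&lt;/code&gt; branch in the callback.&lt;/p&gt;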



&lt;p&gt;Consult the &lt;a href="https://platform.openai.com/docs/api-reference/responses" rel="noopener noreferrer"&gt;Responses API&lt;/a&gt; for parameter details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want More?
&lt;/h2&gt;

&lt;p&gt;See the &lt;code&gt;apps/src/ai&lt;/code&gt; app included in the Ioto Agent source download. It provides the example &lt;code&gt;responses.html&lt;/code&gt; web page, which uses the Responses API, and the &lt;code&gt;patient.html&lt;/code&gt; web page, which combines the Responses API with local agent functions.&lt;/p&gt;

&lt;p&gt;Here is the documentation for the Ioto AI APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/doc/agent/ai/chat-completion.html" rel="noopener noreferrer"&gt;OpenAI Chat Completions API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/doc/agent/ai/responses.html" rel="noopener noreferrer"&gt;OpenAI Response API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/doc/agent/ai/stream.html" rel="noopener noreferrer"&gt;OpenAI Response Streaming API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/doc/agent/ai/real-time.html" rel="noopener noreferrer"&gt;OpenAI Real-Time API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Consult the OpenAI documentation for API details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/api-reference/responses" rel="noopener noreferrer"&gt;Responses API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/api-reference/chat" rel="noopener noreferrer"&gt;Chat Completion API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/api-reference/realtime" rel="noopener noreferrer"&gt;Realtime API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reach out if you have any questions or feedback by posting a comment below or contacting us at &lt;a href="mailto:sales@embedthis.com"&gt;sales@embedthis.com&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Ioto makes it easy to integrate powerful AI capabilities into your IoT devices—without the complexity. Whether you're building a smart appliance, medical sensor, or environmental monitor, Ioto’s flexible architecture and built-in AI tools help you move faster.&lt;/p&gt;

&lt;p&gt;Download the Ioto Agent, try out the sample apps, and start building smarter devices today.&lt;/p&gt;

</description>
      <category>iot</category>
      <category>embedded</category>
      <category>ai</category>
      <category>cloud</category>
    </item>
    <item>
      <title>The Future of IoT AI in 2025 and Beyond</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Thu, 17 Jul 2025 03:03:21 +0000</pubDate>
      <link>https://dev.to/embedthis/the-future-of-iot-ai-in-2025-and-beyond-4j9</link>
      <guid>https://dev.to/embedthis/the-future-of-iot-ai-in-2025-and-beyond-4j9</guid>
<description>&lt;p&gt;Machine learning (ML) has become a cornerstone of smart, autonomous decision-making in IoT devices. These “smart devices” derive their intelligence from their ability to analyze sensor data quickly at the edge and respond accordingly.&lt;/p&gt;

&lt;p&gt;Historically, microcontrollers were too limited for anything beyond basic rule-based logic. But with the advent of frameworks like TensorFlow Lite for Microcontrollers, we entered the era of &lt;a href="https://chatgpt.com/?q=TinyML" rel="noopener noreferrer"&gt;&lt;strong&gt;TinyML&lt;/strong&gt;&lt;/a&gt;, enabling machine learning on even the most resource-constrained devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Edge AI Meets Cloud: The Rise of Hybrid IoT AI
&lt;/h2&gt;

&lt;p&gt;While device-based models have steadily benefited from better microcontrollers and model optimization techniques, the AI landscape has seen an &lt;strong&gt;explosive leap in cloud model capabilities&lt;/strong&gt; in the past year. Foundation models such as OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini are now so advanced they can understand, generate, and reason across multiple modalities—text, speech, image, and sensor data.&lt;/p&gt;

&lt;p&gt;These models are now capable of powering use cases that were &lt;strong&gt;unthinkable just a couple of years ago&lt;/strong&gt;. As these cloud models continue to evolve, the boundary between edge and cloud capabilities is rapidly shifting.&lt;/p&gt;

&lt;p&gt;This changes the game for edge AI. Rather than a simple migration from cloud to edge, we’re now seeing a &lt;strong&gt;hybrid IoT AI architecture&lt;/strong&gt; emerge. Edge devices handle real-time, low-power inference, while the cloud provides deep reasoning, personalization, and large-scale pattern recognition.&lt;/p&gt;

&lt;p&gt;Edge AI is still essential — especially for real-time, low-latency, or privacy-focused applications. But as cloud AI grows exponentially in power and flexibility, hybrid architectures that combine edge inference with cloud intelligence are becoming the new standard.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;IoT AI no longer means all inference happens on the device.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A modern IoT device might use edge ML to locally detect a wake word, capture sensor anomalies, or manage immediate control loops — and then stream relevant data to a cloud model for deeper analysis, anomaly detection, or personalized insights. This hybrid model unlocks the best of both worlds: instant local reactions and the nearly limitless compute of the cloud.&lt;/p&gt;

&lt;p&gt;This paradigm shift doesn’t mean edge ML is obsolete — far from it. But the growing capability of cloud AI means edge models don’t need to carry the full burden of intelligence. Instead, edge devices are evolving into smart front-ends that delegate deeper reasoning and processing to the cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  New Possibilities
&lt;/h3&gt;

&lt;p&gt;This unlocks new possibilities for IoT AI applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On-demand intelligence&lt;/strong&gt;: Devices can dynamically invoke cloud AI only when needed — for example, to send a low-res image for anomaly classification or trigger a cloud-based automation workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context-aware edge devices&lt;/strong&gt;: A smart home assistant can locally detect movement, then query a cloud model to determine whether it’s a pet, an intruder, or a family member—using household context and historical behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge-tuned cloud services&lt;/strong&gt;: Platforms like EmbedThis Ioto offer APIs that let devices feed structured sensor data to cloud models, such as custom GPTs fine-tuned for your factory floor.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Cloud models can do things today that we couldn't imagine last year.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;IoT platforms like &lt;a href="https://www.embedthis.com/doc/agent/ai/" rel="noopener noreferrer"&gt;EmbedThis Ioto&lt;/a&gt; are now integrating direct IoT AI APIs to cloud models alongside the ability to run local inference.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Run ML on Microcontrollers?
&lt;/h2&gt;

&lt;p&gt;Machine learning is still important for IoT devices. Everyday electronics can become “smarter” by directly integrating ML models into microcontrollers. This means they can function without relying on an external processor or constant cloud access for tasks like signal processing, speech recognition, scene recognition, predictive maintenance, or anomaly detection.&lt;/p&gt;

&lt;p&gt;Running ML models directly on microcontrollers enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time processing&lt;/strong&gt; with minimal latency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Energy efficiency&lt;/strong&gt;, vital for battery-powered or low-power devices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved privacy and security&lt;/strong&gt; by keeping data local&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device Autonomy&lt;/strong&gt; for remote, disconnected or bandwidth-constrained environments&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;We must now consider what AI tasks should run locally and what should run in the cloud.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As microcontrollers continue to improve in capability — with better DSPs, more onboard RAM, and built-in AI accelerators — they can support increasingly sophisticated models. However, developers must now think not just about what can run locally, but &lt;strong&gt;what should run locally versus in the cloud&lt;/strong&gt;. For instance, basic anomaly detection may happen at the edge, while complex root-cause analysis is handled in the cloud by large foundation models.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Role of Cloud Models
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fblog%2Fedge-to-cloud.avif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fblog%2Fedge-to-cloud.avif" alt="Edge meets cloud" width="1024" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Despite the rapid evolution of edge hardware, some AI tasks are too large, complex, or data-hungry to run efficiently on embedded hardware. While edge devices excel at fast, local decision-making, there’s a growing class of applications that benefit from offloading high-level inference and reasoning to the cloud.&lt;/p&gt;

&lt;p&gt;Thanks to accessible cloud-based AI APIs and low-latency connectivity, edge devices can invoke powerful foundation models on demand—tapping into capabilities like natural language processing, multimodal understanding, and deep reasoning.&lt;/p&gt;

&lt;p&gt;This enables a flexible, dynamic collaboration between edge and cloud. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;low-powered environmental sensor&lt;/strong&gt; can summarize temperature trends locally, then call a cloud model to predict equipment failure based on similar historical patterns.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;logistics scanner&lt;/strong&gt; might capture visual damage indicators and request cloud-based assistance to classify the severity and recommend next steps.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;medical wearable&lt;/strong&gt; can track biometric data in real time, but push anomalies to a cloud model trained on population-scale datasets for further analysis and multilingual patient feedback.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These scenarios are no longer experimental. Enterprises and device makers are deploying them today—using model APIs from a growing ecosystem of providers. Whether leveraging large language models, vision transformers, or speech models, the goal is the same: push only what’s needed to the cloud, and only when it adds value.&lt;/p&gt;

&lt;p&gt;Even on the device itself, we're beginning to see tiny variants of large models—quantized, distilled, or pruned—running directly on AI-capable microcontrollers and edge SoCs. This allows for partial inference locally, followed by cloud-based reasoning. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A compact vision model might detect movement on-device, while a cloud model interprets the activity to identify species under threat.&lt;/li&gt;
&lt;li&gt;A quantized NLP model might perform wake-word detection or intent classification at the edge, with complex dialogue managed in the cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The takeaway? &lt;strong&gt;Cloud-based large models&lt;/strong&gt; aren’t replacing edge ML—they’re augmenting it. As hardware and software evolve, we’re moving toward a more nuanced AI stack where tasks are dynamically split between device and cloud depending on compute needs, latency requirements, and context.&lt;/p&gt;




&lt;h2&gt;
  
  
  When to Use Edge vs. Cloud AI
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Criterion&lt;/th&gt;
&lt;th&gt;Edge&lt;/th&gt;
&lt;th&gt;Cloud&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;Ultra-low (ms)&lt;/td&gt;
&lt;td&gt;Higher, network-dependent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Privacy&lt;/td&gt;
&lt;td&gt;High – data kept local&lt;/td&gt;
&lt;td&gt;Depends on cloud platform&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Model Size&lt;/td&gt;
&lt;td&gt;Tiny (KB–MB)&lt;/td&gt;
&lt;td&gt;Massive (GBs–TBs)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scalability&lt;/td&gt;
&lt;td&gt;Limited by device resources&lt;/td&gt;
&lt;td&gt;Virtually unlimited (elastic cloud resources)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Power Usage&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use Cases&lt;/td&gt;
&lt;td&gt;Real-time reaction, offline ops&lt;/td&gt;
&lt;td&gt;Contextual reasoning, NLP, Pattern recognition&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best For&lt;/td&gt;
&lt;td&gt;Safety-critical, mobile, wearable&lt;/td&gt;
&lt;td&gt;Deep analytics, multimodal tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;The future is not about choosing edge &lt;em&gt;or&lt;/em&gt; cloud—it’s about orchestrating &lt;strong&gt;the best of both&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  How to Invoke Cloud Models
&lt;/h3&gt;

&lt;p&gt;Edge devices can invoke cloud AI in multiple ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Direct Call&lt;/strong&gt;: Use REST API or WebSocket to call the cloud model and get a response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic Workflow&lt;/strong&gt;: The device agent exposes a suite of tools (functions) that the cloud model can invoke indirectly: the device executes the requested tool as each response is received and processed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Trigger&lt;/strong&gt;: Sensor data posted by the device to the cloud triggers IoT platform automations, which invoke the cloud model with that data and return the result to the device.&lt;/li&gt;
&lt;/ul&gt;
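&lt;p&gt;To make the &lt;strong&gt;Direct Call&lt;/strong&gt; option concrete, the sketch below formats the raw HTTP request such a call would send to the OpenAI Responses endpoint. It is illustrative only: transport concerns (TLS, sockets, retries) are omitted, and the &lt;code&gt;formatDirectCall&lt;/code&gt; helper and header values are assumptions, not part of any library API.&lt;/p&gt;

```c
#include "stdio.h"
#include "string.h"

/* Sketch of the "Direct Call" style: the raw HTTP POST an edge device
   would send to a cloud model endpoint. Transport (TLS, sockets) is
   omitted and the header values are illustrative only. */
static int formatDirectCall(char *out, size_t size, const char *apiKey,
                            const char *body)
{
    return snprintf(out, size,
        "POST /v1/responses HTTP/1.1\r\n"
        "Host: api.openai.com\r\n"
        "Authorization: Bearer %s\r\n"
        "Content-Type: application/json\r\n"
        "Content-Length: %d\r\n"
        "\r\n"
        "%s",
        apiKey, (int) strlen(body), body);
}
```

&lt;p&gt;In practice, an HTTP client library handles this framing; the sketch simply shows what crosses the wire when the device calls the cloud model directly.&lt;/p&gt;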

&lt;h2&gt;
  
  
  What IoT AI Really Means for 2025
&lt;/h2&gt;

&lt;p&gt;While machine learning on microcontrollers has become more capable, the real story of 2025 is the rise of collaborative intelligence between edge and cloud. Edge ML frameworks like TensorFlow Lite continue to evolve — but they now sit within a broader AI ecosystem backed by powerful cloud models.&lt;/p&gt;

&lt;p&gt;For developers, this means new design choices and new tradeoffs. You no longer need to cram every ounce of intelligence into a tiny microcontroller. Instead, you can architect your system to act fast at the edge, think deep in the cloud, and unlock IoT AI experiences previously out of reach.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;a href="https://www.embedthis.com/" rel="noopener noreferrer"&gt;EmbedThis Ioto&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fpics%2Fcircuit-8.avif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.embedthis.com%2Fimages%2Fpics%2Fcircuit-8.avif" alt="Ioto" width="1024" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EmbedThis Ioto&lt;/strong&gt; is a modern &lt;a href="https://www.embedthis.com/blog/iot/what-is-an-iot-meta-platform.html" rel="noopener noreferrer"&gt;IoT Meta-platform&lt;/a&gt; designed to simplify the deployment of IoT AI-powered, connected devices. At its core is a compact, high-performance device agent that bridges edge intelligence with cloud-scale AI—enabling smart devices to both run local machine learning models &lt;em&gt;and&lt;/em&gt; invoke powerful foundation models via direct cloud APIs.&lt;/p&gt;

&lt;p&gt;With Ioto, edge devices can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;strong&gt;TinyML inference locally&lt;/strong&gt; in parallel with other device operations.&lt;/li&gt;
&lt;li&gt;Call &lt;strong&gt;cloud-based foundation models&lt;/strong&gt; for tasks that require deeper reasoning, analysis, large language processing, or multimodal understanding.&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;agentic workflows&lt;/strong&gt; locally using local agents and cloud-based models triggered by device data events or cloud-based automations.&lt;/li&gt;
&lt;li&gt;Seamlessly &lt;strong&gt;sync data and state&lt;/strong&gt; between the device and cloud with built-in MQTT, WebSockets, and RESTful HTTP support, then trigger cloud-based models and workflows from device data events.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cloud Model Integration
&lt;/h3&gt;

&lt;p&gt;Ioto supports direct access to foundation model APIs, enabling devices to send structured data (e.g., sensor readings, text prompts, command requests) to the cloud and receive rich, contextual responses. &lt;/p&gt;

&lt;p&gt;Ioto also supports invoking cloud models via automated triggers that monitor device data in the cloud and invoke models to analyze the data and generate responses and run workflows.&lt;/p&gt;

&lt;p&gt;Ioto supports the following APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.embedthis.com/doc/agent/ai/chat-completion.html" rel="noopener noreferrer"&gt;OpenAI Chat Completions API&lt;/a&gt; – for conversational AI and natural language processing&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.embedthis.com/doc/agent/ai/responses.html" rel="noopener noreferrer"&gt;OpenAI Response API&lt;/a&gt; – for generating structured outputs and invoking agentic workflows&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.embedthis.com/doc/agent/ai/stream.html" rel="noopener noreferrer"&gt;OpenAI Streaming API&lt;/a&gt; – for real-time interaction with minimal latency&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.embedthis.com/doc/agent/ai/real-time.html" rel="noopener noreferrer"&gt;OpenAI Real-Time API&lt;/a&gt; – for continuous input/output workflows such as live monitoring or control&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although the default integration targets OpenAI, the API design is model-agnostic and compatible with any provider that supports a Chat Completions-style interface—including models from Anthropic, Mistral, Google, and open-source deployments using tools like OpenRouter or OpenLLM. Many other cloud providers are expected to add support for the newer Responses API over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lightweight but Fully Equipped
&lt;/h3&gt;

&lt;p&gt;Despite its small footprint—less than 300K of code—Ioto packs a comprehensive feature set:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embedded &lt;strong&gt;HTTP web server&lt;/strong&gt; with TLS for secure local UIs and APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MQTT client&lt;/strong&gt; and &lt;strong&gt;HTTP client&lt;/strong&gt; for robust cloud connectivity.&lt;/li&gt;
&lt;li&gt;Built-in &lt;strong&gt;WebSockets&lt;/strong&gt; support for real-time bi-directional communication.&lt;/li&gt;
&lt;li&gt;Embedded &lt;strong&gt;database&lt;/strong&gt; and &lt;strong&gt;JSON parser&lt;/strong&gt; for structured local data handling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Over-the-air (OTA)&lt;/strong&gt; firmware updates.&lt;/li&gt;
&lt;li&gt;Tight integration with &lt;strong&gt;AWS services&lt;/strong&gt;, including secure identity, storage, and messaging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud LLM Integration&lt;/strong&gt; for OpenAI, Anthropic, Mistral, and Google.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these capabilities make Ioto a versatile platform for building hybrid IoT AI architectures. Developers can deploy lightweight models directly on-device for speed and privacy, while calling on large cloud models for deeper, contextual tasks—without needing to reinvent their stack or overburden their microcontroller.&lt;/p&gt;

&lt;p&gt;Whether you're building smart appliances, industrial sensors, or edge gateways, Ioto offers a future-proof foundation for IoT AI-enabled devices that think fast locally and think big in the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;Consult the following documentation for more details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/doc/agent/" rel="noopener noreferrer"&gt;Ioto Agent Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.openai.com/docs/" rel="noopener noreferrer"&gt;OpenAI Response Docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>iot</category>
      <category>embedded</category>
      <category>ai</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Why 75% of IoT Projects Still Fail – and How to Beat the Odds</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Wed, 16 Jul 2025 07:36:15 +0000</pubDate>
      <link>https://dev.to/embedthis/why-75-of-iot-projects-still-fail-and-how-to-beat-the-odds-4foa</link>
      <guid>https://dev.to/embedthis/why-75-of-iot-projects-still-fail-and-how-to-beat-the-odds-4foa</guid>
      <description>&lt;p&gt;The Internet of Things (IoT) is finally delivering on its promise. As of 2025, 85% of organizations are running IoT projects, and 88% consider IoT critical to their business success&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. With global IoT spending heading toward $1 trillion&lt;sup id="fnref2"&gt;2&lt;/sup&gt;, enthusiasm is high. But success isn’t guaranteed. Many projects stall or collapse before reaching production.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Only 1 in 4 projects is deemed successful.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;blockquote&gt;
&lt;p&gt;TL;DR:&lt;br&gt;
Despite rising adoption, most IoT projects still fail due to unclear goals, poor integration, and scalability issues. This post outlines the latest failure stats and what businesses can do to improve outcomes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Latest IoT Project Statistics
&lt;/h2&gt;

&lt;p&gt;Recent industry surveys reveal that the majority of IoT projects do not achieve their intended outcomes. Estimates of IoT project failure rates typically range from &lt;strong&gt;60% up to 80%&lt;/strong&gt;. In other words, only roughly 1 in 4 IoT initiatives is ultimately considered successful.&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Failure Rates:&lt;/strong&gt; A 2024 analysis notes: "surveys consistently find that 80% of IoT projects don’t reach successful deployment."&lt;sup id="fnref4"&gt;4&lt;/sup&gt; Likewise, IoT experts estimate around 75% of IoT projects fail to achieve their desired results&lt;sup id="fnref5"&gt;5&lt;/sup&gt;. Many projects get stuck in proof-of-concept mode or are abandoned before they can scale. In fact, about 72% of IoT initiatives never progress beyond the pilot (PoC) phase into full production&lt;sup id="fnref2"&gt;2&lt;/sup&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low Full-Success Rates:&lt;/strong&gt; Conversely, only a 20 to 30% minority of IoT projects are deemed fully successful. Cisco’s oft-cited survey found just 26% of IoT projects were completed successfully (with the rest stalled or failing)&lt;sup id="fnref3"&gt;3&lt;/sup&gt;. Microsoft’s IoT Signals report similarly observed roughly 30% of IoT projects fail at the PoC stage, usually due to high implementation costs or unclear business benefits&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. Even when projects do launch, many underperform – an IDC study of industrial firms found 31% of IoT/IIoT projects yielded only “minimal payback,” failing to meet their expected ROI&lt;sup id="fnref6"&gt;6&lt;/sup&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Signs of Improvement:&lt;/strong&gt; On a positive note, there are indications that IoT success rates are slowly improving as organizations gain experience. One 2023 research study (Beecham Research) reports a "28% improvement" in IoT project success metrics compared to 2020&lt;sup id="fnref7"&gt;7&lt;/sup&gt;. This suggests companies are learning from past failures and adopting better practices. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Nonetheless, IoT projects remain risky – success is far from guaranteed, and partial or outright failures are still common across industries.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;📉 75% of IoT projects fail to meet expectations.&lt;/li&gt;
&lt;li&gt;🔐 40% of companies cite security as the #1 challenge.&lt;/li&gt;
&lt;li&gt;💸 75% of projects take twice as long and run 45% over budget.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Top Reasons for IoT Project Failures
&lt;/h2&gt;

&lt;p&gt;Why do so many IoT initiatives struggle or fail? Recent surveys and expert analyses point to several recurring problem areas that derail IoT projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unclear Objectives:&lt;/strong&gt; A leading cause of failure is starting an IoT project without well-defined business goals or success criteria. IoT projects are inherently complex, and without clear objectives, they are destined to fail&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. Many organizations dive into IoT for the technology’s sake rather than to solve a specific business problem. This leads to stakeholder misalignment and unclear ROI. Only the few companies that set &lt;strong&gt;specific, measurable goals&lt;/strong&gt; and KPIs for IoT from the outset tend to “enjoy fruitful results”&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. In contrast, vague goals or an undefined value proposition will almost guarantee project failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration:&lt;/strong&gt; IoT deployments often falter due to integration challenges – connecting diverse devices, &lt;a href="https://www.embedthis.com/blog/iot/what-is-an-iot-platform.html" rel="noopener noreferrer"&gt;platforms&lt;/a&gt;, and data streams is difficult. Enterprises struggle to merge IoT data with existing IT systems and analytics tools. Studies show that less than half of IoT data (structured) gets actively used in decision-making, and under 1% of unstructured IoT data is ever analyzed&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. This means many projects collect data but cannot integrate or utilize it effectively, yielding limited insights. Poor data integration and siloed systems thus lead to disappointing outcomes. Ensuring seamless &lt;strong&gt;device, data, and software integration&lt;/strong&gt; is crucial for IoT success, and failure to do so is a top reason projects under-deliver.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; Moving an IoT proof-of-concept to a reliable, scaled deployment is a major hurdle. Many projects work in the lab but struggle with real-world scalability. A Gartner study found about 30% of IoT projects fail due to scalability problems – solutions that worked with dozens of devices cannot handle thousands&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. IoT systems must be designed to handle growth in device count, data volume, and users, but companies often underestimate this. Complexities multiply when scaling (network strain, data overload, device management issues), resulting in stalled rollouts. Without a scalable architecture and plan for growth, IoT pilots can collapse when facing real-world scale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security and Data Governance:&lt;/strong&gt; Security vulnerabilities are a frequent IoT project killer. IoT expands the cyber attack surface, and many projects fail to implement adequate security controls for devices and data. In a recent industry survey, 40% of organizations cited security concerns as the #1 challenge holding back IoT initiatives&lt;sup id="fnref8"&gt;8&lt;/sup&gt;. Weak security can lead to breaches or compliance issues that halt a project. Security is a major hurdle in successfully launching and managing IoT projects, and must be addressed end-to-end (device, network, cloud)&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. Projects that treat security as an afterthought often face disaster – for instance, unresolved data privacy risks can prevent an IoT solution from ever going live. Robust security and privacy measures are therefore prerequisite for IoT success, and their absence is a top reason for failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Timeline Delays and Budget Overruns:&lt;/strong&gt; IoT implementations frequently take far longer and cost more than planned, leading to stakeholder fatigue or funding cuts. One study estimates 75% of IoT projects take twice as long as initially scheduled&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. Extended development and testing cycles can cause loss of momentum and executive support. Moreover, large IoT projects often run ~45% over budget while delivering 56% less value than anticipated&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. Such cost overruns and lower-than-expected ROI make it hard to justify continuing. These timeline and budget issues are often linked to the factors above (underestimated complexity, integration woes, etc.). If the project’s costs spiral or value remains unproven, it may be deemed a failure. Careful planning, agile execution, and controlling scope are needed to avoid this fate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Skill Gaps and Organizational Challenges:&lt;/strong&gt; IoT initiatives demand a mix of IT, engineering, data science, and business domain expertise – a combination many organizations lack. Talent shortages in IoT are widely reported; for example, half of IoT adopters say they don’t have enough skilled workers or training in IoT&lt;sup id="fnref1"&gt;1&lt;/sup&gt;. Lacking in-house IoT skills often leads to design mistakes, integration errors, or reliance on misaligned third parties. Additionally, IoT projects cut across departments (IT, operations, R&amp;amp;D), so poor internal coordination or leadership support can doom projects. A survey by IoT World Today found that a lack of skilled personnel was the most-cited IoT challenge, and issues like insufficient management buy-in also impede progress&lt;sup&gt;8&lt;/sup&gt;. In short, organizations that don’t address the human and organizational factors – skills, alignment, change management – often see their IoT projects falter even if the technology pieces are in place.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ensuring Success
&lt;/h2&gt;

&lt;p&gt;Despite the challenges, a growing body of experience and research highlights how organizations can improve IoT project success rates. Several key success factors and emerging trends are making a difference in 2025.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clarifying Objectives
&lt;/h3&gt;

&lt;p&gt;One of the most cited reasons for failure is unclear goals. But what does success look like?  Successful IoT programs start with a clear strategy and often use an agile, phased rollout. Rather than attempting a “big bang” deployment, leading adopters pilot their IoT solutions in stages, learn, and iterate.  &lt;/p&gt;

&lt;p&gt;IoT business decision makers may not be fully aware of the technological capabilities and limitations of IoT and their chosen &lt;a href="https://www.embedthis.com/blog/iot/what-is-an-iot-platform.html" rel="noopener noreferrer"&gt;IoT platform&lt;/a&gt; — so committing up front to absolute requirements, scope and architecture can be difficult. A vital tool for answering these unknowns is the ability to quickly prototype the entire solution, test key design concepts to reduce risk, and then evolve the design. &lt;/p&gt;

&lt;p&gt;An incremental approach greatly increases the odds of success – a recent survey found companies using phased IoT rollouts experience ~40% higher success rates than those going for full-scale launches at once&lt;sup id="fnref9"&gt;9&lt;/sup&gt;. &lt;/p&gt;

&lt;p&gt;To address this, the &lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;EmbedThis Ioto&lt;/a&gt; device management &lt;a href="https://www.embedthis.com/blog/iot/what-is-an-iot-meta-platform.html" rel="noopener noreferrer"&gt;meta-platform&lt;/a&gt; provides a rapid prototyping environment that helps business stakeholders quickly visualize and refine their IoT strategy — long before committing to large-scale implementation. With Ioto, developers can instantiate device clouds in minutes, emulate target hardware, and create custom apps rapidly — all before committing to a particular architectural implementation. This allows key design assumptions and constraints to be tested early and at low cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Easing Integration
&lt;/h3&gt;

&lt;p&gt;Creating and integrating all the necessary components for an IoT solution is challenging. It typically involves designing, creating and integrating device hardware, firmware, device clouds, networking, cloud services, data modeling, analytics and device apps. &lt;/p&gt;

&lt;p&gt;A trend among successful IoT deployments is the use of modular, scalable products and services to simplify development. Rather than reinventing the wheel, companies are leveraging proven IoT platforms such as AWS IoT and &lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;EmbedThis Ioto&lt;/a&gt;. These handle the heavy lifting of connectivity, security, data ingestion, storage, integration, analytics and visualization. This can drastically cut time-to-market and avoid integration pitfalls. Gartner research indicates organizations using standardized IoT communication protocols lower their integration costs by up to 30% and improve deployment time by 25%&lt;sup id="fnref9"&gt;9&lt;/sup&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;EmbedThis Ioto&lt;/a&gt; is a &lt;a href="https://www.embedthis.com/blog/iot/what-is-an-iot-meta-platform.html" rel="noopener noreferrer"&gt;meta-platform&lt;/a&gt; that builds upon the &lt;a href="https://www.embedthis.com/blog/iot/what-is-an-iot-platform.html" rel="noopener noreferrer"&gt;AWS IoT platform&lt;/a&gt; to offer an end-to-end device management solution. Ioto provides device agent, device cloud, device builder service and device apps. All components are designed to work together and to fully integrate with the underlying AWS IoT platform. This reduces — and often eliminates — the need for lengthy integration cycles and greatly reduces development cost and integration risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ensuring Scalability
&lt;/h2&gt;

&lt;p&gt;Scalability issues often surface late in an IoT project when design and architectural decisions are well "baked in". Fixing these issues by changing design is costly in both time and budget.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;Ioto&lt;/a&gt; has been designed from the beginning to scale to the very largest installations. Device populations up to and beyond 10 million devices are fully supported. By using the huge scale of the underlying AWS IoT platform and by utilizing built-for-scale technologies such as AWS DynamoDB and serverless Lambda compute, the Ioto platform can scale to support the largest IoT workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Designing for Security
&lt;/h2&gt;

&lt;p&gt;Given that security is a make-or-break factor, successful IoT initiatives now take a “secure by design” approach. This means building in encryption, authentication, network security, and continuous monitoring from the start, rather than bolting security on later. Companies that achieve IoT success tend to proactively address security and privacy – e.g. using end-to-end encryption, robust identity management for devices, and thorough testing for vulnerabilities&lt;sup id="fnref2"&gt;2&lt;/sup&gt;. This reduces the risk of costly breaches or compliance failures down the road. Similarly, designing for reliable connectivity and device management is crucial. &lt;/p&gt;

&lt;p&gt;Unfortunately, the typical IoT attack surface is quite large, spanning from device hardware, software, networking, and cloud services to end-user device apps. Consequently, the effort and skill-set required to fully and comprehensively design a secure IoT solution is a significant obstacle. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;Ioto&lt;/a&gt; addresses this gap by providing a tested, end-to-end device management solution and applies proven security design patterns across the stack in all components including: the device agent, device communications, cloud service, device data storage and end-user apps. With Ioto, your data is stored in a dedicated private AWS database in a local AWS IoT region of your choosing. Your device data never transits another service or network and goes directly from the device to device cloud. Device data is reliably stored with 6 replicas and full point-in-time backups. Your compute logic runs in 3 AWS availability zones for maximum uptime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bridging the Skills Gap
&lt;/h2&gt;

&lt;p&gt;The human factor plays a critical role in IoT success. Leading initiatives prioritize assembling the right team and forging strong partnerships. Since the expertise required to deliver a robust IoT solution is often scarce or unavailable in-house, relying on existing, well-tested technologies can significantly reduce development time and minimize the impact of skills gaps.&lt;/p&gt;

&lt;p&gt;By adopting the &lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;Ioto&lt;/a&gt; device management platform, you can focus your team’s efforts on the areas most aligned with your core business, reducing the need to build deep expertise across the entire IoT stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion and Outlook
&lt;/h2&gt;

&lt;p&gt;In 2025, IoT projects are gradually improving as industries learn from early missteps, but the failure rate remains unacceptably high. Only about a quarter of IoT projects today can be counted as unqualified successes, with the rest either falling short of goals or never fully launching&lt;sup id="fnref3"&gt;3&lt;/sup&gt;. The top pitfalls – unclear purpose, integration woes, scalability, security, cost overruns, and skill gaps – have become well recognized. &lt;/p&gt;

&lt;p&gt;IoT project outcomes can be significantly improved by applying the lessons of recent failures. Organizations that adopt end-to-end solutions like Ioto dramatically reduce complexity, risk, and time-to-market. The path to a successful IoT deployment starts with choosing the right foundation.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;a href="https://theiotmagazine.com/94-of-businesses-will-use-iot-by-the-end-of-2021-microsoft-report-cf94ad11f173#:~:text=%C2%B7%2085,have%20IoT%20projects%20in%20planning" rel="noopener noreferrer"&gt;Microsoft IoT Signals Report, 2023&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;&lt;a href="https://www.linkedin.com/pulse/why-72-iot-projects-fail-how-oems-can-beat-odds-iot83-sqnlc/" rel="noopener noreferrer"&gt;Why 72% of IoT Projects Fail and How OEMs Can Beat the Odds&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;&lt;a href="https://www.zipitwireless.com/blog/why-75-percent-iot-projects-fail#:~:text=At%20the%202017%20IoT%20World,With%20Gartner%27s%20estimate" rel="noopener noreferrer"&gt;Why 3 out of 4 IoT Projects Fail&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;&lt;a href="https://www.designrush.com/news/eseye-weighs-on-why-80-percent-of-iot-projects-do-not-get-deployed#:~:text=Surveys%20consistently%20find%20that%2080,projects%20don%27t%20reach%20successful%20deployment" rel="noopener noreferrer"&gt;Why 80% of IoT Projects Don't Get Deployed"&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;&lt;a href="https://soracom.io/blog/why-do-iot-projects-fail/#:~:text=outpace%20expectations%2C%20and%20still%20others,well%20short%20of%20their%20goals" rel="noopener noreferrer"&gt;Why Do IoT Projects Fail&lt;/a&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;&lt;a href="https://www.lexmark.com/content/dam/lexmark/documents/white-paper/y2021/Lexmark-IoT-IDC-Report.pdf#:~:text=that%2031,minimal%20payback%20that%20did%20not" rel="noopener noreferrer"&gt;IDC Industrial IoT Payback Report&lt;/a&gt;   ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;&lt;a href="https://multitech.com/how-to-measure-iot-success/#:~:text=Meeting%20business%20objectives%20with%20IoT" rel="noopener noreferrer"&gt;Beecham Research, IoT Deployment Success Metrics, 2023&lt;/a&gt;   ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;&lt;a href="https://www.lexmark.com/content/dam/lexmark/documents/white-paper/y2021/Lexmark-IoT-IDC-Report.pdf#:~:text=Challenges%20in%20Connected%20Product%20Strategies" rel="noopener noreferrer"&gt;IoT Analytics: "Top 10 IoT Security Issues"&lt;/a&gt;   ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;&lt;a href="https://moldstud.com/articles/p-the-role-of-industryin-boosting-enterprise-iot-solutions#:~:text=To%20maximize%20the%20potential%20of,seamless%20integration%20of%20emerging%20technologies" rel="noopener noreferrer"&gt;Industry Impact on Advancing Iot Solutions"&lt;/a&gt;   ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>iot</category>
      <category>embedded</category>
      <category>ai</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Ioto Device Management for Volume Device Builders</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Fri, 10 May 2024 06:32:07 +0000</pubDate>
      <link>https://dev.to/embedthis/ioto-device-management-for-volume-device-builders-2ol9</link>
      <guid>https://dev.to/embedthis/ioto-device-management-for-volume-device-builders-2ol9</guid>
      <description>&lt;p&gt;The proliferation of Internet of Things (IoT) devices in our daily lives has led device manufacturers to seek ways to incorporate cloud-based management into their products. However, building a cloud-based device management solution is a complex task that requires expertise in various domains, including embedded development, communications, cloud computing, and user interface (UI) and user experience (UX) design. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To create a solution using Amazon Web Services (AWS) IoT, for instance, requires knowledge of numerous AWS services. Therefore, it can be challenging to develop a cloud-based device management solution that is secure, scalable, and cost-effective to maintain.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So we created &lt;strong&gt;Ioto&lt;/strong&gt; to be the most secure, scalable IoT solution for volume device builders.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Ioto
&lt;/h2&gt;

&lt;p&gt;With over two decades of experience in device management and the development of device agents, EmbedThis has a wealth of knowledge in creating secure and effective solutions for device builders of all sizes.&lt;/p&gt;

&lt;p&gt;Our GoAhead and Appweb embedded web servers are two of the most popular device agents and have been widely used in the industry. As cloud-based remote management becomes increasingly important, we wanted to create the best possible platform for scalable and secure device management both locally and via the cloud.&lt;/p&gt;

&lt;p&gt;The result is Ioto, a comprehensive cloud-based solution for device-based and cloud-based management that includes an embedded device agent, cloud device management service, device builder portal, and user device managers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Unique about Ioto?
&lt;/h2&gt;

&lt;p&gt;The EmbedThis Ioto solution is a complete, end-to-end device management solution that includes an embedded device agent, cloud device services, builder portal, and user device managers. It is built upon the reliable and secure AWS IoT infrastructure, ensuring that it is scalable, cost-effective, and secure for device management. Ioto offers a comprehensive platform for managing devices locally and remotely via the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ioto Solution
&lt;/h2&gt;

&lt;p&gt;Ioto includes the following core components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ioto Device Agent&lt;/li&gt;
&lt;li&gt;Ioto Device Manager&lt;/li&gt;
&lt;li&gt;Device Builder Portal&lt;/li&gt;
&lt;li&gt;Device Clouds&lt;/li&gt;
&lt;li&gt;Ioto Device Service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;Ioto Device Agent&lt;/strong&gt; is embedded in devices and communicates securely with the &lt;strong&gt;Ioto Device Manager&lt;/strong&gt; for local management or via the &lt;strong&gt;Ioto Cloud Service&lt;/strong&gt; for cloud-based management.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Ioto Device Manager&lt;/strong&gt; is a customizable, white-labeled device management platform for users to monitor and manage their devices. It can be branded with your own logo and design elements to create a seamless experience for your customers. It can be embedded in your device for local device-based management or hosted in the cloud for cloud-based management.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Device Builder Portal&lt;/strong&gt; is a tool that allows device manufacturers to design, configure, and manage their device management solutions. It is used to subscribe to and download device agents, create &lt;strong&gt;Device Clouds&lt;/strong&gt; and manage manufactured devices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafzyv0ppwx11n898bqud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafzyv0ppwx11n898bqud.png" alt="Home" width="800" height="572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Ioto Device Agent
&lt;/h2&gt;

&lt;p&gt;Embedthis Ioto is a small but powerful embedded agent for local and remote device management. It boasts impressive speed and a comprehensive range of management protocols and capabilities.&lt;/p&gt;

&lt;p&gt;Ioto includes an HTTP web server, embedded database, MQTT client, HTTP client, JSON parsing, AWS IoT cloud integration, easy provisioning, and OTA upgrading. It can be used for local management through its embedded web server or integrated with the cloud through comprehensive AWS IoT integration. Ioto offers a versatile and flexible solution for managing devices in a variety of environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v8up8ctnujnf8yw54yl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0v8up8ctnujnf8yw54yl.png" alt="Ioto Agent" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition to its range of features and capabilities, Ioto also has a very small memory footprint of only 200K of code, making it ideal for use on Linux and FreeRTOS systems. It can also be easily ported to other platforms, providing flexibility and versatility for device management on a variety of systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agent Components
&lt;/h3&gt;

&lt;p&gt;Ioto provides the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP/1.1 server with dynamic rendering, authentication, cookies, sessions and file upload&lt;/li&gt;
&lt;li&gt;HTTP/1.1 client&lt;/li&gt;
&lt;li&gt;MQTT/3.1.1 client&lt;/li&gt;
&lt;li&gt;Embedded database&lt;/li&gt;
&lt;li&gt;JSON/5 parser and query engine&lt;/li&gt;
&lt;li&gt;Transport Layer Security (TLS/SSL) with ALPN support&lt;/li&gt;
&lt;li&gt;AWS IoT and AWS DynamoDB integration&lt;/li&gt;
&lt;li&gt;AWS service integration with S3, Lambda, Kinesis and CloudWatch&lt;/li&gt;
&lt;li&gt;Transparent database synchronization to AWS DynamoDB (like Global Tables)&lt;/li&gt;
&lt;li&gt;Safe, secure runtime core&lt;/li&gt;
&lt;li&gt;Streamlined certificate and key provisioning&lt;/li&gt;
&lt;li&gt;Over-The-Air updates and upgrades&lt;/li&gt;
&lt;li&gt;User authentication management&lt;/li&gt;
&lt;li&gt;Complete documentation and samples&lt;/li&gt;
&lt;li&gt;Full Source code&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Ioto Device Manager
&lt;/h3&gt;

&lt;p&gt;The Ioto solution provides a configurable web app from which your users can monitor and manage their devices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25synozk3dh7y8e6bn4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25synozk3dh7y8e6bn4f.png" alt="Device Manager" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Device Manager can be embedded in your device for local management or deployed from the cloud for cloud-based management. The Builder portal is used to create and configure device managers. &lt;/p&gt;

&lt;p&gt;The Ioto Manager is a generic (white-label) cloud-based device manager that can be extensively customized with your logo, product name, color and font theme, device data, and device-specific screens and interfaces, including browser-based and cloud-side custom logic.&lt;/p&gt;

&lt;p&gt;The Ioto Manager is extremely flexible; however, there are limits, and you may eventually want to create your own manager application from the ground up that uses the Ioto APIs to provide a bespoke management experience for your devices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AWS IoT for Cloud-based Management?
&lt;/h2&gt;

&lt;p&gt;EmbedThis has chosen AWS IoT as the foundation for its Ioto cloud management solution for several key reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Scalability: AWS IoT can handle a large number of devices and the data they generate, making it suitable for use in large-scale IoT deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security: AWS IoT has built-in security measures such as encryption, authentication, and access controls to protect device data and communication. Further, AWS supports over two dozen regions, so your data can be hosted in your AWS account near you.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integration with other AWS services: AWS IoT can be easily integrated with other AWS services such as Amazon Kinesis, Amazon S3, and Amazon Machine Learning, allowing for further processing and analysis of device data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cost-effectiveness: AWS IoT offers a pay-as-you-go pricing model, allowing users to only pay for the resources they consume, making it cost-effective for device management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reliability: AWS has a proven track record of reliability, with multiple availability zones and disaster recovery measures in place to ensure smooth operation of its services.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Ioto, you get the best of both worlds. A complete end-to-end IoT solution and the rock-solid foundation of AWS IoT.&lt;/p&gt;

&lt;h2&gt;
  
  
  Want More?
&lt;/h2&gt;

&lt;p&gt;To learn more about EmbedThis Ioto, please read:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;Ioto Web Site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://admin.embedthis.com" rel="noopener noreferrer"&gt;Ioto Download&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/" rel="noopener noreferrer"&gt;Embedthis Web Site&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>iot</category>
      <category>embedded</category>
      <category>aws</category>
    </item>
    <item>
      <title>Parallelism via Fiber Coroutines</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Tue, 19 Sep 2023 05:08:39 +0000</pubDate>
      <link>https://dev.to/embedthis/parallelism-via-fiber-coroutines-4l1h</link>
      <guid>https://dev.to/embedthis/parallelism-via-fiber-coroutines-4l1h</guid>
      <description>&lt;p&gt;Server applications such as web servers often need to service multiple requests in parallel. This can be achieve by using threads, event callbacks or via fiber coroutines. While &lt;strong&gt;fiber coroutines&lt;/strong&gt; are less well known and understood, they have some compelling reasons to be uses instead of threads or callbacks.&lt;/p&gt;

&lt;p&gt;A fiber coroutine is a code segment that runs with its own stack and cooperatively yields to other fibers when it needs to wait. Fibers can be viewed as threads, but only one fiber runs at a time. For Go programmers, fibers are similar to Go routines, while for JavaScript developers, fibers are comparable to async/await.&lt;/p&gt;

&lt;p&gt;The Ioto embedded web server's core uses fiber coroutines to serve multiple requests in parallel. Ioto is based on a single-threaded fiber coroutine architecture that employs a non-blocking, event-driven design capable of handling numerous inbound and outbound requests simultaneously with minimal CPU and memory resources. Ioto simplifies programming by eliminating the complexity of threads and the inelegance of event callbacks through the use of fiber coroutines. &lt;/p&gt;

&lt;p&gt;Ioto's fibers are integrated into the I/O system, enabling parallelism to be effortlessly supported.  All Ioto services support fibers, making your user extension code straightforward, easy to debug, and maintainable in the long term.  You can use a straight-line procedural programming model to read and write sockets, issue HTTP client requests, send MQTT messages, or respond to web server requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Models of Parallelism
&lt;/h2&gt;

&lt;p&gt;To implement parallelism in an application, a developer has three choices:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Threads&lt;/li&gt;
&lt;li&gt;Non-blocking APIs with callbacks&lt;/li&gt;
&lt;li&gt;Fiber coroutines&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Threads
&lt;/h2&gt;

&lt;p&gt;Programming with threads can be appealing at first; however, a multithreaded design can be problematic. Subtle programming errors due to timing-related issues, lock deadlocks and race conditions can be extraordinarily difficult to detect and diagnose. All too often, they appear only in production deployments.&lt;/p&gt;

&lt;p&gt;Although some developers excel in creating multithreaded designs, others may struggle when tasked with maintaining complex threaded code and debugging subtle race conditions and issues. Over time, a design that initially seemed reasonable can become increasingly challenging to maintain and support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Callbacks
&lt;/h2&gt;

&lt;p&gt;An alternative method for implementing parallelism involves the use of non-blocking APIs coupled with callbacks, which are often easier to test and debug compared to threaded designs. However, this approach often leads to decreased code quality due to the prevalent "callback-hell" phenomenon. In such cases, relatively simple algorithms become obfuscated as they are dispersed across cascading callbacks. This problem is especially pronounced in C or C++ coding designs that lack inline lambda functions for simplification. Consequently, linear algorithms are fragmented across multiple functions, and clear algorithms become increasingly difficult to decipher.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fiber Coroutines
&lt;/h2&gt;

&lt;p&gt;An appealing alternative for implementing parallelism is the use of fiber coroutines. A fiber coroutine is code that runs with its own stack and cooperatively yields to other fibers when it needs to wait.&lt;/p&gt;

&lt;p&gt;Fibers can be thought of as threads, but only one fiber runs at a time, eliminating the need for thread locking or synchronization. For Go programmers, fibers are akin to Go routines, while for JavaScript developers, fibers are similar to async/await.&lt;/p&gt;

&lt;p&gt;By allowing programs to overlap waiting for I/O or other events with useful compute tasks, fibers achieve parallelism without the complexities involved in other methods.&lt;/p&gt;

&lt;p&gt;Fibers address the primary issue with multi-threaded programming where multiple threads access the same data at the same time, requiring complex locking to safeguard data integrity. Furthermore, they resolve the primary problem with non-blocking callbacks by enabling a procedural straight-line coding style.&lt;/p&gt;

&lt;p&gt;Although not flawless, fibers provide an efficient solution for achieving parallelism. They may not allow full utilization of all the CPU cores of a system within one program. However, for embedded device management, this is generally not a significant concern: since device management applications are usually secondary to the primary role of the device, they should not monopolize the device's CPU cores.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parallelism Compared
&lt;/h2&gt;

&lt;p&gt;Consider a &lt;strong&gt;threaded&lt;/strong&gt; example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;pthread_mutex_t&lt;/span&gt; &lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;increment&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;pthread_mutex_lock&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;pthread_mutex_unlock&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;getCount&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;pthread_mutex_lock&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="n"&gt;pthread_mutex_unlock&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;mutex&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now consider the &lt;strong&gt;fiber&lt;/strong&gt; solution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;increment&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;getCount&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;ccount&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since only one segment of code is executing at any one time, there is no possibility of fiber collisions.&lt;/p&gt;
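&lt;p&gt;One caveat worth noting: this atomicity holds only between yield points. If an update is split across a fiber-aware call that yields, another fiber can run in between. A hypothetical sketch (logToRemoteServer is an invented fiber-aware call, not an Ioto API):&lt;/p&gt;

```c
void increment() {
    int c = count;          //  read
    logToRemoteServer(c);   //  hypothetical fiber-aware call: yields here,
                            //  so another fiber may modify count meanwhile
    count = c + 1;          //  write: may clobber the other fiber's update
}
```

The rule of thumb is to keep read-modify-write sequences free of yielding calls, which is far easier to audit than lock coverage in a threaded design.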

&lt;h2&gt;
  
  
  Callback Example
&lt;/h2&gt;

&lt;p&gt;When implementing parallelism with callbacks, applications must employ non-blocking I/O. While blocking I/O is simpler, it prohibits the application from performing any other function while waiting for I/O to complete.&lt;/p&gt;

&lt;p&gt;For instance, consider an application that must execute a REST HTTP request to retrieve some remote data. While waiting for the request to complete, the application is blocked and cannot perform any other task for several seconds.  Non-blocking I/O resolves this issue, but creates another problem known as "callback hell".&lt;/p&gt;

&lt;p&gt;Consider this pseudo-example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="c1"&gt;//  Issue a request and invoke the onData callback on completion&lt;/span&gt;
&lt;span class="n"&gt;httpFetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://www.example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;onData&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;//  First Callback&lt;/span&gt;
&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;onData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;HttpResult&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;//  Invoke another request&lt;/span&gt;
        &lt;span class="n"&gt;httpFetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://www.backup.com/);&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="s"&gt;    }&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="s"&gt;}&lt;/span&gt;&lt;span class="err"&gt;

&lt;/span&gt;&lt;span class="s"&gt;//  Second Callback&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="s"&gt;static void onComplete(HttpResult *result)&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="s"&gt;{&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="s"&gt;    //  Now we done and can process the result&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="s"&gt;}&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the level of callback nesting increases, the code's intended purpose rapidly gets obscured.&lt;/p&gt;

&lt;p&gt;The alternative Ioto code using &lt;strong&gt;fiber coroutines&lt;/strong&gt; would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;urlGet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://www.example.com"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;urlGet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"https://www.backup.com/"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The calls to urlGet will yield and other fibers will run while waiting for I/O. When the request completes, this fiber is transparently resumed and execution continues.&lt;/p&gt;

&lt;p&gt;Fiber-based code is easier to write, debug, and maintain.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When transitioning Ioto from callbacks to fibers, several of our algorithms reduced code lines by over 30%.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Ioto Fibers in Practice
&lt;/h2&gt;

&lt;p&gt;In practice, when working with Ioto, there is usually no need to explicitly program fiber yielding or resuming. The Ioto socket APIs are fiber-aware and will handle the yielding for you. &lt;/p&gt;

&lt;p&gt;All Ioto services, including the web server, Url client, MQTT client, and AWS services, feature async APIs that are fiber-aware and will yield and resume automatically.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="kt"&gt;char&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;nbytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rSocketRead&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sock&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;sizeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Got body data %.*s&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;nbytes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Ioto I/O API
&lt;/h2&gt;

&lt;p&gt;Ioto builds fiber support into the lowest layer of the "R" portable runtime. The following APIs support automatic fiber yielding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="//../ref/api/r.md#r_8h_1a5e68016e4b9381eb07d94855361e4a6d"&gt;rReadSocket&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="//../ref/api/r.md#r_8h_1a59d42a597c69a42387f41d62f0e8c5b2"&gt;rWriteSocket&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="//../ref/api/r.md#r_8h_1a35b19891b3c32f496ee52b157cae938a"&gt;rSleep&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The rReadSocket and rWriteSocket APIs will yield and resume the current fiber as required, allowing other fibers to continue running. It is important to note that only one fiber will execute at a time.&lt;/p&gt;
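&lt;p&gt;As an illustration of this straight-line style, a sketch of an echo loop follows. The RSocket and ssize types and the exact signatures are assumptions drawn from the R runtime documentation:&lt;/p&gt;

```c
static void echo(RSocket *sock) {
    char buf[1024];
    ssize nbytes;

    //  Each read and write parks this fiber while waiting; other fibers run
    while ((nbytes = rReadSocket(sock, buf, sizeof(buf))) > 0) {
        rWriteSocket(sock, buf, nbytes);
    }
}
```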

&lt;h2&gt;
  
  
  Fiber API
&lt;/h2&gt;

&lt;p&gt;Ioto also supports a low level fiber API so you can construct your own fiber-enabled primitives.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="http://www.embedthis.com/ref/api/r.md#r_8h_1a531c892493b60bb2088705d7f4e447cb" rel="noopener noreferrer"&gt;rYieldFiber&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.embedthis.com/ref/api/r.md#r_8h_1a059333256cfab39b5037149625e1133b" rel="noopener noreferrer"&gt;rResumeFiber&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.embedthis.com/ref/api/r.md#r_8h_1a116c72a151fb75665eaef53222bcae37" rel="noopener noreferrer"&gt;rSpawnFiber&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use &lt;strong&gt;rYieldFiber&lt;/strong&gt; to yield the CPU and switch to another fiber. You must arrange for &lt;strong&gt;rResumeFiber&lt;/strong&gt; to be called later to resume the yielded fiber.&lt;/p&gt;

&lt;p&gt;Use &lt;strong&gt;rSpawnFiber&lt;/strong&gt; to create a new fiber and immediately switch to it. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;myFiberFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;//  code here runs inside a fiber&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;rSpawnFiber&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myFiberFunction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Integrating with External Services
&lt;/h2&gt;

&lt;p&gt;But what should you do if you need to invoke an external service that will block?&lt;/p&gt;

&lt;p&gt;You have two alternatives:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Non-Blocking APIs&lt;/li&gt;
&lt;li&gt;Use threads&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Using Non-Blocking with External Services
&lt;/h3&gt;

&lt;p&gt;Ioto provides a flexible centralized eventing and waiting mechanism that can support any service that provides a select() compatible file descriptor.&lt;/p&gt;

&lt;p&gt;If the external service has a non-blocking API and provides a file descriptor that is compatible with select or epoll, you can use the Ioto runtime &lt;strong&gt;wait&lt;/strong&gt; APIs to be signaled when the external service is complete.&lt;/p&gt;

&lt;p&gt;To wait for I/O on a file descriptor, call &lt;strong&gt;rAllocWait&lt;/strong&gt; to create a wait object and &lt;strong&gt;rSetWaitHandler&lt;/strong&gt; to nominate an event function to invoke.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="n"&gt;wait&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rAllocWait&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fd&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;rSetWaitHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;R_READABLE&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The nominated function will be run on a fiber coroutine when I/O on the file descriptor (fd) is ready.&lt;/p&gt;
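&lt;p&gt;Putting the two calls together, here is a sketch of waiting on an external service's descriptor. The handler signature shown is an assumption; consult the Runtime API for the exact prototype:&lt;/p&gt;

```c
//  Assumed handler shape: invoked on a fiber when fd becomes readable
static void onReady(void *arg, int mask) {
    //  Safe to call fiber-aware blocking APIs from here
}

RWait *wait = rAllocWait(fd);
rSetWaitHandler(wait, onReady, arg, R_READABLE);
```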

&lt;h3&gt;
  
  
  Using Threads with External Services
&lt;/h3&gt;

&lt;p&gt;The other option is to create a thread. However, you must take care to yield the fiber properly first. The runtime provides a convenient &lt;strong&gt;rSpawnThread&lt;/strong&gt; API that does this for you: it creates a thread, yields the current fiber, and then invokes your threadMain. When your threadMain exits, the fiber is automatically resumed.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="n"&gt;rSpawnThread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;threadMain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;threadMain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;arg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;getFromExternalService&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
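&lt;p&gt;Since threadMain cannot return a value to the fiber directly, one pattern is to pass a result structure via the arg pointer. This is a sketch under the rSpawnThread semantics described above; Result and getFromExternalService are illustrative names:&lt;/p&gt;

```c
typedef struct {
    char *data;
} Result;

static void threadMain(void *arg) {
    Result *r = (Result*) arg;
    //  Blocking calls are safe on this thread
    r->data = getFromExternalService();
}

Result result = { 0 };
rSpawnThread(threadMain, &result);
//  The fiber resumes here after threadMain exits; result.data is set
```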



&lt;h2&gt;
  
  
  Manual Yield and Resume
&lt;/h2&gt;

&lt;p&gt;Though rarely necessary, you can manually create fibers and yield and resume them explicitly.&lt;/p&gt;

&lt;p&gt;The APIs for this are: &lt;strong&gt;rAllocFiber&lt;/strong&gt;, &lt;strong&gt;rYieldFiber&lt;/strong&gt; and &lt;strong&gt;rResumeFiber&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;See the &lt;a href="//../../ref/api/r/"&gt;Runtime API&lt;/a&gt; for more details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Ioto eliminates the complexity of threads and verbosity of callbacks by using fiber coroutines. The result is a simple, highly efficient design that simplifies implementing and debugging IoT and embedded services. &lt;/p&gt;

&lt;h2&gt;
  
  
  Want More Now?
&lt;/h2&gt;

&lt;p&gt;To learn more about EmbedThis Ioto, please read:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;Ioto Web Site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/ioto/doc/" rel="noopener noreferrer"&gt;Ioto Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://admin.embedthis.com/" rel="noopener noreferrer"&gt;Ioto Agent Download&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/builder/doc/" rel="noopener noreferrer"&gt;Builder Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.embedthis.com/" rel="noopener noreferrer"&gt;Embedthis Web Site&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>embedded</category>
      <category>threads</category>
      <category>iot</category>
    </item>
    <item>
      <title>CustomMetrics -- Simple, Cost-Effective Metrics for AWS</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Tue, 19 Sep 2023 04:49:14 +0000</pubDate>
      <link>https://dev.to/embedthis/custommetrics-simple-cost-effective-metrics-for-aws-2g0</link>
      <guid>https://dev.to/embedthis/custommetrics-simple-cost-effective-metrics-for-aws-2g0</guid>
      <description>&lt;p&gt;AWS CloudWatch offers metrics for monitoring specific aspects of your applications. However, AWS custom metrics can become costly when updated or queried frequently, with each custom metric costing up to &lt;strong&gt;$3.60 per metric per year&lt;/strong&gt;, along with additional expenses for querying. If you have a significant number of metrics or high dimensionality in your metrics, this could result in a substantial CloudWatch Metrics bill.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;On the other hand, &lt;strong&gt;CustomMetrics&lt;/strong&gt; presents a cost-effective alternative metrics API that is considerably more budget-friendly and efficient compared to standard CloudWatch metrics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The cost savings achieved by CustomMetrics are primarily due to its focus on providing only the most recent period metrics, such as those from the last day, last month, last hour, last 5 minutes, and so on. These metric timespans are also fully configurable. This approach ensures that each metric can be saved, stored, and queried with minimal cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CloudWatch Metrics
&lt;/h2&gt;

&lt;p&gt;Frequently, users complain that CloudWatch is one of the most expensive parts of their AWS bill. For those users that employ custom metrics with high update frequency or high dimensionality, this can quickly translate into a large bill.&lt;/p&gt;

&lt;p&gt;CloudWatch charges based on the number of metrics sent to the service and the frequency of updating or querying them. Your bill grows as you send more metrics to CloudWatch and make API calls more frequently. For a regularly updated or queried metric, you will pay $0.30 per metric per month for the first 10,000 metrics, which can total $3,000 per month.&lt;/p&gt;

&lt;p&gt;AWS metrics are expensive because they store metrics with arbitrary data spans: you can query metrics for any desired period (with decreasing granularity as the metrics age). On the positive side of the ledger, you can use the CloudWatch EMF log format to emit metrics from Lambdas without invoking an API, but you still pay for maintenance of the metric.&lt;/p&gt;

&lt;h2&gt;
  
  
  CustomMetrics Alternative
&lt;/h2&gt;

&lt;p&gt;CustomMetrics foregoes the option for arbitrary date queries and provides "latest" period metrics only. When using CustomMetrics, you request data for specific recent time spans such as the last "5 minutes," "hour," "day," "week," "month," or "year." You have the flexibility to configure these "last" time spans according to your preferences, but they are always based on the current time. For example, you could record metrics for the last "minute" and "15 minutes".&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In exchange for its exclusive emphasis on the most recent metrics, CustomMetrics can store and retrieve metrics at a significantly lower cost compared to CloudWatch custom metrics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  CustomMetrics Features
&lt;/h2&gt;

&lt;p&gt;CustomMetrics is a &lt;a href="https://www.npmjs.com/package/custom-metrics" rel="noopener noreferrer"&gt;NodeJS&lt;/a&gt; library designed to emit and query custom metrics for AWS applications. It offers the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Simple one line API to emit metrics from any NodeJS TypeScript or JavaScript app.&lt;/li&gt;
&lt;li&gt;  Similar metric model to AWS CloudWatch for supporting namespaces, metrics, dimensions, statistics and intervals.&lt;/li&gt;
&lt;li&gt;  Computes statistics for: average, min, max, count and sum.&lt;/li&gt;
&lt;li&gt;  Computes P value statistics with configurable P value resolution.&lt;/li&gt;
&lt;li&gt;  Supports default metric intervals of: last 5 mins, hour, day, week, month and year.&lt;/li&gt;
&lt;li&gt;  Configurable custom intervals for different metric timespans and intervals.&lt;/li&gt;
&lt;li&gt;  Fast and flexible metric query API.&lt;/li&gt;
&lt;li&gt;  Query API can return data points or aggregate metric data to a single statistic.&lt;/li&gt;
&lt;li&gt;  Scalable to support many simultaneous clients emitting metrics.&lt;/li&gt;
&lt;li&gt;  Stores data in any existing DynamoDB table and coexists with existing app data.&lt;/li&gt;
&lt;li&gt;  Supports multiple services, apps, namespaces and metrics in a single DynamoDB table.&lt;/li&gt;
&lt;li&gt;  Extremely fast initialization time.&lt;/li&gt;
&lt;li&gt;  Written in TypeScript with full TypeScript support.&lt;/li&gt;
&lt;li&gt;  Clean, readable, small, TypeScript code base (~1.3K lines).&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.sensedeep.com" rel="noopener noreferrer"&gt;SenseDeep&lt;/a&gt; support for visualizing and graphing metrics.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;DynamoDB Onetable&lt;/a&gt; support CustomMetrics for detailed single table metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts2vcpneezuf73mm3c39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts2vcpneezuf73mm3c39.png" alt="Custom Metrics" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Database
&lt;/h2&gt;

&lt;p&gt;CustomMetrics stores each metric in a single, compressed DynamoDB item. Each metric stores the optimized data points for the metric's timespans. The default spans are 5 mins, 1 hour, 1 day, 1 week, 1 month and 1 year. But these can be configured for each CustomMetric instance.&lt;/p&gt;

&lt;p&gt;Emitting a metric via the &lt;code&gt;emit&lt;/code&gt; API will write the metric via a DynamoDB item update. Multiple simultaneous clients can update the same metrics, and CustomMetrics will ensure no data is lost.&lt;/p&gt;

&lt;p&gt;If optimized &lt;a href="https://github.com/sensedeep/custom-metrics#buffering" rel="noopener noreferrer"&gt;metric buffering&lt;/a&gt; is enabled, metric updates may be aggregated according to your buffering policy to minimize the database write load. &lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Tour
&lt;/h2&gt;

&lt;p&gt;Here is a quick tour of CustomMetrics demonstrating how to install, configure and use it in your apps.&lt;/p&gt;

&lt;p&gt;First install the library using npm:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i custom-metrics
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Import the CustomMetrics library. If you are not using ES modules or TypeScript, use &lt;code&gt;require&lt;/code&gt; to import the library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;CustomMetrics&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;CustomMetrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next create and configure the CustomMetrics instance by nominating the DynamoDB table and key structure to hold your metrics.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CustomMetrics&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;MyTable&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;primaryKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;sortKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Metrics are stored in the DynamoDB database referenced by the &lt;strong&gt;table&lt;/strong&gt; name in the desired region. This table can be your existing application DynamoDB table and metrics can safely coexist with your app data.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;primaryKey&lt;/strong&gt; and &lt;strong&gt;sortKey&lt;/strong&gt; are the primary and sort keys for the main table index. These default to 'pk' and 'sk' respectively. CustomMetrics does not support tables without a sort key.&lt;/p&gt;

&lt;p&gt;If you have an existing AWS SDK V3 DynamoDB client instance, you can use that with the CustomMetrics constructor. This will have slightly faster initialization time than simply providing the table name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;DynamoDBClient&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws-sdk/client-dynamodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dynamoDbClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DynamoDBClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CustomMetrics&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;myDynamoDbClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;MyTable&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;primaryKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;sortKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Emitting Metric Data
&lt;/h2&gt;

&lt;p&gt;You can emit metrics via the &lt;code&gt;emit&lt;/code&gt; API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;launches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will emit the &lt;code&gt;launches&lt;/code&gt; metric in the &lt;code&gt;Acme/Metrics&lt;/code&gt; namespace with the value of &lt;strong&gt;10&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;A metric can have dimensions that are unique metric values for specific instances. For example, we may want to count the number of launches for a specific rocket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;launches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;saturnV&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The metric will be emitted once for each dimension provided, and a dimension may have one or more properties. &lt;/p&gt;

&lt;p&gt;If you also want to emit the metric totalled over all dimensions, add an empty &lt;code&gt;{}&lt;/code&gt; dimension. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;launches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{},&lt;/span&gt; 
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;saturnV&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;launches&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{},&lt;/span&gt; 
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;falcon9&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will emit a metric that is a total of all launches for all rocket types.&lt;/p&gt;
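&lt;p&gt;The fan-out behavior can be pictured with a small sketch. This is an illustration of the semantics only, not the library's internals, and &lt;code&gt;expandDimensions&lt;/code&gt; is a hypothetical helper:&lt;/p&gt;

```javascript
// Illustrative sketch only: shows how one emit call fans out into one
// metric series per dimension object. The '{}' dimension represents the
// total across all instances. Not part of the CustomMetrics API.
function expandDimensions(namespace, metric, value, dimensions) {
    return dimensions.map((dim) => ({
        namespace,
        metric,
        value,
        // Serialize the dimension to form a stable series key
        dimensions: JSON.stringify(dim),
    }))
}

const series = expandDimensions('Acme/Metrics', 'launches', 10, [
    {},
    {rocket: 'saturnV'},
])
// Two series are updated: the all-rockets total and the saturnV series
console.log(series.map((s) => s.dimensions))
```

&lt;p&gt;Each emitted value is applied to every series selected by the dimension list, which is why the calls above maintain both per-rocket counts and a combined total.&lt;/p&gt;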

&lt;h2&gt;
  
  
  Query Metrics
&lt;/h2&gt;

&lt;p&gt;To query a metric, use the &lt;code&gt;query&lt;/code&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;speed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;saturnV&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;max&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will retrieve the &lt;code&gt;speed&lt;/code&gt; metric from the &lt;code&gt;Acme/Metrics&lt;/code&gt; namespace for the &lt;code&gt;{rocket == 'saturnV'}&lt;/code&gt; dimension. The data points returned will be the maximum speed measured over the day's launches (86400 seconds).&lt;/p&gt;

&lt;p&gt;This will return data like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"namespace"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Acme/Metrics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"metric"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"launches"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dimensions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"rocket"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"saturnV"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"spans"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"end"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;946648800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"period"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"samples"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"points"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"sum"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;24000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"count"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;19&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"min"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"max"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to query the results as a single value over the entire period (instead of as a set of data points), set the &lt;code&gt;accumulate&lt;/code&gt; option to true.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;speed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;rocket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;saturnV&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;max&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;accumulate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return a single maximum speed over the last day.&lt;/p&gt;
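&lt;p&gt;Conceptually, &lt;code&gt;accumulate&lt;/code&gt; collapses the per-interval data points into one value using the requested statistic. The following is a minimal sketch of that reduction using the point shape shown above; it is an illustration, not the library's implementation:&lt;/p&gt;

```javascript
// Sketch of accumulating a span's data points into a single value.
// Each point carries sum/count/min/max, as in the sample query result.
function accumulate(points, statistic) {
    switch (statistic) {
        case 'max':
            return Math.max(...points.map((p) => p.max))
        case 'min':
            return Math.min(...points.map((p) => p.min))
        case 'sum':
            return points.reduce((acc, p) => acc + p.sum, 0)
        case 'avg': {
            // Weight by the number of samples in each point
            const sum = points.reduce((acc, p) => acc + p.sum, 0)
            const count = points.reduce((acc, p) => acc + p.count, 0)
            return count ? sum / count : 0
        }
        default:
            throw new Error(`Unknown statistic: ${statistic}`)
    }
}

const points = [
    {sum: 24000, count: 19, min: 1000, max: 5000},
    {sum: 9000, count: 3, min: 2500, max: 4200},
]
console.log(accumulate(points, 'max'))   // 5000
```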

&lt;p&gt;To obtain a list of metrics, use the &lt;code&gt;getMetricList&lt;/code&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MetricList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMetricList&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return an array of available namespaces in &lt;strong&gt;list.namespaces&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To get a list of the metrics available for a given namespace, pass the namespace as the first argument.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MetricList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMetricList&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will return a list of metrics in &lt;strong&gt;list.metrics&lt;/strong&gt;. Note: this returns the namespaces and metrics for any namespace that begins with the given namespace, so namespaces must be unique and no namespace should be a prefix of another.&lt;/p&gt;
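&lt;p&gt;To see why overlapping namespace names are a problem, here is a hypothetical sketch of the prefix-matching rule:&lt;/p&gt;

```javascript
// Illustrative sketch of the prefix-matching caveat: a namespace query
// matches any stored namespace that begins with the query string.
const namespaces = ['Acme/Metrics', 'Acme/MetricsBeta', 'Acme/Billing']

function matchNamespaces(query) {
    return namespaces.filter((ns) => ns.startsWith(query))
}

// Querying 'Acme/Metrics' also picks up 'Acme/MetricsBeta'
console.log(matchNamespaces('Acme/Metrics'))
```

&lt;p&gt;Choosing names such as &lt;code&gt;Acme/Metrics&lt;/code&gt; and &lt;code&gt;Acme/Beta/Metrics&lt;/code&gt; avoids this, since neither is a prefix of the other.&lt;/p&gt;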

&lt;p&gt;To get a list of the dimensions available for a metric, pass in a namespace and metric.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;MetricList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getMetricList&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Acme/Metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;speed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will also return a list of dimensions in &lt;strong&gt;list.dimensions&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Metrics Tenants
&lt;/h2&gt;

&lt;p&gt;You can scope metrics by choosing unique namespaces for different applications or services, or by using distinct dimensions per application or service. This is the preferred design pattern.&lt;/p&gt;

&lt;p&gt;You can also scope metrics by selecting a unique &lt;code&gt;owner&lt;/code&gt; property via the CustomMetrics constructor. This property is used in the primary key of metric items. The owner defaults to &lt;strong&gt;'default'&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cartMetrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CustomMetrics&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cart&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;table&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;MyTable&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;primaryKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;sortKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
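&lt;p&gt;One way to picture the effect of &lt;code&gt;owner&lt;/code&gt; is that it is folded into the item keys so that tenants never collide. The key layout below is an assumption for illustration only, not the library's actual schema:&lt;/p&gt;

```javascript
// Hypothetical key layout: the owner scopes the partition key, so
// metrics for 'cart' and 'billing' tenants occupy disjoint items.
// Illustration only; the real CustomMetrics schema may differ.
function makeKey(owner, namespace, metric) {
    return {
        pk: `metric#${owner}`,
        sk: `${namespace}#${metric}`,
    }
}

console.log(makeKey('cart', 'Acme/Metrics', 'launches'))
console.log(makeKey('billing', 'Acme/Metrics', 'launches'))
```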



&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/sensedeep/custom-metrics" rel="noopener noreferrer"&gt;CustomMetrics Repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/custom-metrics" rel="noopener noreferrer"&gt;CustomMetrics NPM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep Serverless Developer Studio&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Participate
&lt;/h3&gt;

&lt;p&gt;All feedback, discussion, contributions and bug reports are very welcome.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://github.com/sensedeep/CustomMetrics/discussions" rel="noopener noreferrer"&gt;Discussions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://github.com/sensedeep/CustomMetrics/issues" rel="noopener noreferrer"&gt;Issues&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Use Case
&lt;/h3&gt;

&lt;p&gt;We've used Custom Metrics extensively in our &lt;a href="https://www.embedthis.com/ioto/" rel="noopener noreferrer"&gt;EmbedThis Ioto&lt;/a&gt; IoT middleware service.&lt;/p&gt;

&lt;h3&gt;
  
  
  Contact
&lt;/h3&gt;

&lt;p&gt;You can contact me (Michael O'Brien) on Twitter at: &lt;a href="https://twitter.com/mobstream" rel="noopener noreferrer"&gt;@mobstream&lt;/a&gt;, and read my &lt;a href="https://www.sensedeep.com/blog" rel="noopener noreferrer"&gt;Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>metrics</category>
      <category>lambda</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to debug serverless apps</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Tue, 29 Mar 2022 03:56:52 +0000</pubDate>
      <link>https://dev.to/embedthis/how-to-debug-serverless-apps-58g1</link>
      <guid>https://dev.to/embedthis/how-to-debug-serverless-apps-58g1</guid>
      <description>&lt;h2&gt;
  
  
  Debugging serverless apps is different and difficult.
&lt;/h2&gt;

&lt;p&gt;If you have not properly prepared, debugging some requests may be impossible with your current level of instrumentation and tooling.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So what is the best way to debug serverless apps and serverless requests?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This blog post describes how we debug the serverless backend for our &lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep Serverless Developer Studio&lt;/a&gt; service and what tools we use with our NodeJS serverless apps.&lt;/p&gt;

&lt;p&gt;Some of the suggestions here may not be suitable for all sites, but I hope they give you ideas to improve your ability to debug your serverless apps.&lt;/p&gt;

&lt;p&gt;But first, some background.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Computing Changes Things
&lt;/h2&gt;

&lt;p&gt;The way enterprises design, debug, and ship applications changed forever when serverless computing arrived on the scene. Serverless lets developers build and ship much faster, allowing them to concentrate on coding rather than maintenance, auto-scaling, and server provisioning.&lt;/p&gt;

&lt;p&gt;Serverless is now mainstream and it offers many compelling advantages, but debugging serverless apps is still a tough problem as the required tools have not kept pace.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is different about serverless debugging?
&lt;/h2&gt;

&lt;p&gt;Debugging a server-based app (monolith) is well understood and assisted by a suite of refined tools that have been created over decades of evolution. Debugging can be performed using IDEs either locally or remotely over SSH and other protocols. Importantly, server state is typically persisted after requests to permit live or after-the-fact inspection and debugging.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Serverless is not so lucky&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Serverless is different. It is often more complex, with multiple interacting, loosely coupled components. Instead of a monolith, it is more like a spider's web, which makes a single request harder to follow, and there is no ability to debug live or set breakpoints to intercept code execution.&lt;/p&gt;

&lt;p&gt;Debugging serverless via remote shell or execution is not possible. Serverless requests are ephemeral and request state is not available after execution. Once a request has been completed, there is no state left behind to examine.&lt;/p&gt;

&lt;p&gt;So when a serverless request errors, it often fails silently, or any record of the failure is buried in a mountain of log data in CloudWatch. You will not be proactively notified, and if an end-user reports the problem, finding the failed request is a "needle in a haystack" problem.&lt;/p&gt;

&lt;p&gt;Furthermore, comprehensive local emulation of cloud environments is either not possible or has severe limitations. As cloud services evolve, local emulation is becoming increasingly limited in scope and application.&lt;/p&gt;

&lt;p&gt;So we need a different technique to debug serverless compared to server-based app debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Debugging Technique
&lt;/h2&gt;

&lt;p&gt;The primary means of debugging serverless apps is via detailed, intelligent request and state logging, paired with log management to correlate log data and quickly query to locate events of interest.&lt;/p&gt;

&lt;p&gt;Whether your code is under development, is being tested as part of unit or integration tests, or in production, detailed logging is the foundation for you to see and understand what is actually happening in your code for each request.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Detailed, intelligent logging is the foundation for serverless observability.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Observability
&lt;/h2&gt;

&lt;p&gt;Observability is the ability to understand your system's unknown unknowns, i.e. to diagnose not just expected errors but also unforeseeable conditions. It is not enough to just log simple, plain-text error messages.&lt;/p&gt;

&lt;p&gt;Our understanding of all the possible failure modes of serverless apps is limited by the extent of our knowledge today. Serverless apps can fail in many surprising ways that we may not yet understand and cannot foresee.&lt;/p&gt;

&lt;p&gt;So to meet the debugging needs of tomorrow, you need to log much more information and context than you might realize. Your app must emit sufficient state and information to permit debugging a wide range of unexpected conditions in the future without having to stop and redeploy code.&lt;/p&gt;

&lt;p&gt;Fortunately, this is easy to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Log? Extensively.
&lt;/h2&gt;

&lt;p&gt;Often developers add diagnostic logging as an afterthought via sprinkled &lt;code&gt;console.log&lt;/code&gt; messages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;console.log&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Oops, bad thing happened"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, using &lt;code&gt;console.log&lt;/code&gt; to emit simple messages is almost useless in achieving true Observability. Instead, what is required is the ability to emit log messages in a structured way with extra context that captures detailed request state.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every time a developer uses &lt;code&gt;console.log&lt;/code&gt; an angel cries in heaven.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In addition to the log message, log events should emit additional context information. To achieve this, the output format must be structured. JSON is an ideal format as most log viewers can effectively parse JSON and understand the context emitted.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Cannot authenticate user`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This would emit something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;timestamp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2022-03-26T06:41:59.216ZZ&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Cannot authenticate user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;auth&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;req&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;stack&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="cm"&gt;/* stack trace */&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many log libraries that are capable of emitting structured log context data. For &lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep&lt;/a&gt;, we use the &lt;a href="https://github.com/sensedeep/senselogs" rel="noopener noreferrer"&gt;SenseLogs&lt;/a&gt; library which is an exceptionally fast logging library designed for serverless. It has a flexible, simple syntax that makes adding detailed log events easy.&lt;/p&gt;

&lt;p&gt;SenseLogs emits log messages with context and stack traces in JSON. The set of log messages emitted can also be controlled dynamically at runtime without having to redeploy code.&lt;/p&gt;

&lt;p&gt;Here is a mock Lambda which initializes SenseLogs and emits various sample log messages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SenseLogs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addTraceIds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Request start&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;New user login&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Queue Stats&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;q&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;backlog&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unexpected exception with cart&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Beta&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Performance metrics for beta release features&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;metrics&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SenseLogs emits messages into channels such as &lt;code&gt;info&lt;/code&gt;, &lt;code&gt;debug&lt;/code&gt; and &lt;code&gt;error&lt;/code&gt;. These channels can be enabled or disabled via Lambda environment variables. In this way, SenseLogs can dynamically scale the volume of log data up or down.&lt;/p&gt;

&lt;h2&gt;
  
  
  Catch All Errors
&lt;/h2&gt;

&lt;p&gt;It is very important to catch all errors and ensure they are reported. Lambdas should catch all exceptions themselves and not rely on the default Lambda and language runtimes to catch errors. Employing a top-level catch ensures that the exception report includes request state and correlation IDs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamic Logging
&lt;/h2&gt;

&lt;p&gt;All log data has a cost: extensive logging can significantly impact the performance of your Lambdas, and excessive log storage can cause rude billing surprises. Logging extensive state for all requests is typically prohibitively expensive.&lt;/p&gt;

&lt;p&gt;A better approach is to emit a solid baseline of log data and to scale up and/or focus logging when required. This is called &lt;strong&gt;Dynamic Logging&lt;/strong&gt;, and it is important to be able to do this without the risk of redeploying modified code.&lt;/p&gt;

&lt;p&gt;SenseLogs implements Dynamic Logging by listening to a Lambda &lt;code&gt;LOG_FILTER&lt;/code&gt; environment variable. This variable may be set to a list of log channels to enable. By default, it is set to enable the following channels:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_FILTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fatal,error,info,metrics,warn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the Lambda &lt;code&gt;LOG_FILTER&lt;/code&gt; variable is modified, the next and subsequent Lambda invocations will use the adjusted channel filter settings. In this manner, you can scale up and down the volume and focus of your log messages without redeploying code.&lt;/p&gt;
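&lt;p&gt;Conceptually, the channel filter can be sketched as a check against the environment on each call (a simplified illustration, not the actual SenseLogs implementation):&lt;/p&gt;

```javascript
// Read the enabled channel set from the environment. Because the set is
// consulted at log time, changing LOG_FILTER takes effect on the next
// invocation without redeploying code.
function enabledChannels() {
    const filter = process.env.LOG_FILTER || 'fatal,error,info,metrics,warn'
    return new Set(filter.split(',').map(s => s.trim()).filter(Boolean))
}

// Emit a log event only if its channel is currently enabled.
function emit(channel, message, context = {}) {
    if (!enabledChannels().has(channel)) {
        return null
    }
    return JSON.stringify({channel, message, ...context})
}
```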

&lt;p&gt;You can use custom channels via &lt;code&gt;log.emit&lt;/code&gt; to provide log coverage for specific paths in your code and then enable those channels on demand.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;feature-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unexpected condition, queue is empty&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="cm"&gt;/* state objects */&lt;/span&gt; 
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SenseLogs has two other environment variables that give more control over the log messages that are emitted.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LOG_OVERRIDE provides the list of channels to enable for a limited duration of time before automatically reverting to the LOG_FILTER channel set.&lt;/li&gt;
&lt;li&gt;LOG_SAMPLE provides an additional list of channels to enable for a percentage of requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LOG_OVERRIDE is a convenient way to temporarily boost logging that automatically reverts to the base level when the duration completes. LOG_OVERRIDE is set to a Unix epoch timestamp indicating when the override will expire, followed by a list of channels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_OVERRIDE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1626409530045:data,trace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
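&lt;p&gt;Parsing an override value of this shape is straightforward (an illustrative sketch; the expiry comparison assumes the timestamp and the current time use the same epoch units):&lt;/p&gt;

```javascript
// Parse a LOG_OVERRIDE value of the form "expiry:chan1,chan2,...".
// Returns the extra channels while the override is active, otherwise
// an empty list so logging reverts to the LOG_FILTER set.
function overrideChannels(value, now) {
    if (!value) {
        return []
    }
    const [expire, channels] = value.split(':')
    if (now >= Number(expire)) {
        return []
    }
    return channels.split(',')
}
```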



&lt;p&gt;LOG_SAMPLE is set to a percentage of requests to sample, followed by a list of channels to add to the LOG_FILTER set for those sampled requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_SAMPLE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1%:trace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
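&lt;p&gt;Sampling can be sketched as a per-request random check against the configured percentage (illustrative only; the real library makes this decision internally):&lt;/p&gt;

```javascript
// Parse a LOG_SAMPLE value of the form "percent%:chan1,chan2" and decide
// whether this request should also log the sampled channels.
function sampleChannels(value, random = Math.random) {
    if (!value) {
        return []
    }
    const [rate, channels] = value.split(':')
    const percent = parseFloat(rate)    // "1%" parses as 1
    if (percent > random() * 100) {
        return channels.split(',')
    }
    return []
}
```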



&lt;p&gt;See the &lt;a href="https://github.com/sensedeep/senselogs/blob/main/README.md" rel="noopener noreferrer"&gt;SenseLogs Documentation&lt;/a&gt; for more details.&lt;/p&gt;

&lt;h2&gt;
  
  
  SenseDeep
&lt;/h2&gt;

&lt;p&gt;You can modify environment variables with the AWS console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkck63kcdgecjvgw14t1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkck63kcdgecjvgw14t1e.png" alt="AWS Console" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But our SenseDeep serverless developer studio provides a much more convenient interface to manage Lambda LOG_FILTER settings for a single Lambda or for multiple Lambdas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sensedeep.com%2F%2Fimages%2Fsensedeep%2Flambda-edit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sensedeep.com%2F%2Fimages%2Fsensedeep%2Flambda-edit.png" alt="Lambda Edit" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Base Logging
&lt;/h2&gt;

&lt;p&gt;Your serverless apps should emit a good base of request state for all requests.&lt;/p&gt;

&lt;p&gt;This should include the request parameters, request body and other high priority state information. The volume of log data should be low enough so that the cost of log ingestion and storage is not a significant burden.&lt;/p&gt;
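&lt;p&gt;For example, a compact base record might capture only the high-value request fields (the event shape below assumes an API Gateway HTTP API payload):&lt;/p&gt;

```javascript
// Build a small base log record for every request. Truncating the body
// keeps ingestion and storage costs from becoming a significant burden.
function baseLogRecord(event) {
    return {
        method: event.requestContext?.http?.method,
        path: event.rawPath,
        query: event.queryStringParameters,
        body: (event.body || '').slice(0, 1024),
    }
}
```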

&lt;h2&gt;
  
  
  Feature Logging
&lt;/h2&gt;

&lt;p&gt;For critical code modules or less mature code, we include additional logging on custom channels that correspond to each module or feature. This can then be enabled via &lt;code&gt;LOG_FILTER&lt;/code&gt; or &lt;code&gt;LOG_OVERRIDE&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_FILTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fatal,error,info,metrics,warn,feature-1

or
&lt;span class="nv"&gt;LOG_OVERRIDE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1626409530045:feature-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Sampled Logging
&lt;/h2&gt;

&lt;p&gt;For a small percentage of requests, we log the full request state so that we always have some record of all the state values.&lt;br&gt;
We use a custom SenseLogs channel called &lt;code&gt;complete&lt;/code&gt; and we sample those requests at a low percentage rate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;log.emit('complete', { /* additional state objects */ })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_SAMPLE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1%:complete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Correlation IDs
&lt;/h2&gt;

&lt;p&gt;Serverless apps often have multiple Lambda functions or services that cooperate to respond to a single client request. A request that originates with a single client may traverse many services, and these services may be in different AWS regions or accounts.&lt;/p&gt;

&lt;p&gt;A single request will often traverse multiple AWS services such as API Gateway, one or more Lambda functions, Kinesis streams, SQS queues, SNS messages and EventBridge events. The request may fan out to multiple Lambdas and the results may be combined back into a single response.&lt;/p&gt;

&lt;p&gt;To trace and debug an individual request, we add a request ID to all our log events. This way we can filter and view a complete client request as it flows through multiple Lambda services.&lt;/p&gt;

&lt;p&gt;We add the trace IDs by using the SenseLogs API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addTraceIds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SenseLogs will extract the API Gateway request ID, Lambda request ID and X-Ray trace ID. SenseLogs will map these IDs to x-correlation-NAME SenseLogs context variables suitable for logging. The following variables are automatically detected and mapped:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;x-correlation-api — API Gateway requestId&lt;/li&gt;
&lt;li&gt;x-correlation-lambda — Lambda requestId&lt;/li&gt;
&lt;li&gt;x-correlation-trace — X-Ray X-Amzn-Trace-Id header&lt;/li&gt;
&lt;li&gt;x-correlation-extended — AWS extended request ID&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SenseLogs will define a special variable 'x-correlation-id' that can be used as a stable request ID. It will be initialized to the value of the X-Correlation-ID header; if that is not defined, SenseLogs will use (in order) the API Gateway request ID or the X-Ray trace ID.&lt;/p&gt;

&lt;p&gt;SenseLogs also supports adding context state to the logger so you don't have to specify it on each logging API call. Thereafter, each log call will add the additional context to the logged event output.&lt;/p&gt;

&lt;p&gt;Finally we define a client ID that is passed from the end user so we can track a "user request" through all AWS services that are invoked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;log.addContext&lt;span class="o"&gt;({&lt;/span&gt;&lt;span class="s1"&gt;'x-correlation-client'&lt;/span&gt;: body?.options.clientId&lt;span class="o"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Log Viewing and Querying
&lt;/h2&gt;

&lt;p&gt;Once you have added detailed logging with correlation IDs to your Lambdas, you need an effective log viewer to display and query log events.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CloudWatch just won't cut it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For our logging needs, we need a log viewer that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Aggregate not just log streams, but log groups into a unified set of events.&lt;/li&gt;
&lt;li&gt;Aggregate log groups from different AWS accounts and regions.&lt;/li&gt;
&lt;li&gt;Display log events in real-time with minimal latency.&lt;/li&gt;
&lt;li&gt;Instantly query log events to isolate an individual request using correlation request IDs.&lt;/li&gt;
&lt;li&gt;Manage dynamic log control to focus and scale up or down log data as required.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our product, SenseDeep, has such a log viewer, which we use to monitor and debug the SenseDeep backend service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sensedeep.com%2F%2Fimages%2Fsensedeep%2Fviewer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sensedeep.com%2F%2Fimages%2Fsensedeep%2Fviewer.png" alt="SenseDeep Viewer" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated Alarms
&lt;/h2&gt;

&lt;p&gt;Manually monitoring logs for app errors and signs of trouble is not scalable or reliable. An automated alarm mechanism, paired with detailed logging, is the ideal platform to provide 24x7 oversight of your serverless apps.&lt;/p&gt;

&lt;p&gt;We use such an automated approach. We configure multiple SenseDeep alarms to search for specific flags in log data that indicate errors or potential issues. If an alarm is triggered, an alert will immediately notify the relevant developer with the full context of the app event.&lt;/p&gt;

&lt;p&gt;We instrument our code not only to capture and log errors, but also with "asserts" that test expected conditions and emit log data that will trip alarms if unexpected conditions arise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Triggering Alarms
&lt;/h2&gt;

&lt;p&gt;To reliably trigger alarms, it is necessary to have a unique property value that the alarm mechanism can detect.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/sensedeep/senselogs" rel="noopener noreferrer"&gt;SenseLogs&lt;/a&gt; logging library supports a flagging mechanism where log events to specific log channels can add a unique matchable "flag" to the log data. Alarms can then reliably test for the presence of such a flag in log data.&lt;/p&gt;

&lt;p&gt;We use SenseLogs to add property flags for errors and asserts. By default, SenseLogs will add a &lt;code&gt;FLAG_ERROR&lt;/code&gt; property for log events emitted via &lt;code&gt;error()&lt;/code&gt; or &lt;code&gt;fatal()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For other log channels, the flag option can be set to a map of channels, and the nominated channels will be flagged with the associated value string. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SenseLogs&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;flag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;FLAG_WARN&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;FLAG_ERROR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;custom&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;FLAG_CUSTOM&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Storm front coming&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will emit:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Storm front coming&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;FLAG_WARN&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then create a SenseDeep alarm to match &lt;code&gt;FLAG_ERROR&lt;/code&gt; and other &lt;code&gt;FLAG_*&lt;/code&gt; patterns.&lt;/p&gt;
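&lt;p&gt;The matching logic an alarm needs is simple because the flag is a stable, unique property (a sketch of the idea, not SenseDeep's actual ingestion code):&lt;/p&gt;

```javascript
// Return true if a JSON log event carries any FLAG_* property, the way an
// alarm might match flagged events as log data is ingested.
function isFlagged(eventJson) {
    let event
    try {
        event = JSON.parse(eventJson)
    } catch (err) {
        return false
    }
    return Object.keys(event).some(key => key.startsWith('FLAG_'))
}
```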

&lt;h2&gt;
  
  
  SenseDeep Alarms
&lt;/h2&gt;

&lt;p&gt;We configure alarms for generic errors and exceptions as well as specific asserts and state conditions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;CloudWatch Alarms do not have the ability to efficiently monitor log data for specific log data patterns.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Fortunately, SenseDeep was built for this purpose.&lt;/p&gt;

&lt;p&gt;SenseDeep developers create alarms to detect flagged errors and unexpected conditions as they are encountered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm17xfp8ajyrtjzhiie4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm17xfp8ajyrtjzhiie4d.png" alt="Alert Match" width="800" height="762"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As SenseDeep ingests log data, it runs configured alarms to match app errors in real-time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alert Notifications
&lt;/h2&gt;

&lt;p&gt;When an alarm triggers, it generates an alert to notify the developer via email, SMS or other notification means.&lt;/p&gt;

&lt;p&gt;The alert contains the full details of the flagged condition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36i2cyttv78c6uj6mvrc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F36i2cyttv78c6uj6mvrc.png" alt="Alert Email" width="800" height="826"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the notification link will launch SenseDeep and display full details of the alert that triggered the alarm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8l4k25kb4v3qp31gozq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8l4k25kb4v3qp31gozq.png" alt="Alert View" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To see the log entries for the invocation, click on &lt;code&gt;Goto Invocation&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9vu4upn4ybz6e14yze1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9vu4upn4ybz6e14yze1.png" alt="Lambda Alert" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to see the preceding or subsequent log entries, click &lt;code&gt;All Logs&lt;/code&gt; which will launch the log viewer homed to the invocation log data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Goals
&lt;/h2&gt;

&lt;p&gt;Using this serverless debugging pattern, we can detect problems early, typically before users are even aware of an issue.&lt;/p&gt;

&lt;p&gt;The automated monitoring offloads the burden of "baby-sitting" a service and gives developers the tools they need to ensure their apps are performing correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWatch Insights and X-Ray
&lt;/h2&gt;

&lt;p&gt;You may wonder if you can use the native AWS services, CloudWatch and X-Ray, to implement this serverless debugging pattern.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Short answer, you cannot.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;CloudWatch does not aggregate log streams or groups. It is slow and only offers primitive query capabilities.&lt;/p&gt;

&lt;p&gt;CloudWatch Insights does have the ability to search for correlation IDs in logs in a single region. But it is very, very slow and you cannot then see the request in context with the requests before or after the event. CloudWatch Insights is most useful for custom one-off queries, but not for repeated debugging efforts.&lt;/p&gt;

&lt;p&gt;CloudWatch alarms cannot trigger alerts based on log data patterns. You can pair it with CloudWatch Insights, but this is a slow one-off solution and does not scale to process all log data for matching events.&lt;/p&gt;

&lt;p&gt;Similarly, X-Ray has many good use cases, but because it only samples 1 request per second and 5% of additional requests, it cannot be used to consistently debug serverless applications. It is great for observing complete outages when an entire service or component is offline, but not for failing individual requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/sensedeep/senselogs" rel="noopener noreferrer"&gt;SenseLogs Logging Library&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/senselogs/serverless-logging.html" rel="noopener noreferrer"&gt;Fast Logging with SenseLogs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/stories/dynamic-serverless-log-control.html" rel="noopener noreferrer"&gt;Dynamic Log Control for Serverless&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>serverless</category>
      <category>logging</category>
      <category>lambda</category>
      <category>aws</category>
    </item>
    <item>
      <title>SenseDeep DynamoDB Data Browser</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Wed, 03 Nov 2021 00:30:18 +0000</pubDate>
      <link>https://dev.to/embedthis/sensedeep-dynamodb-data-browser-20gk</link>
      <guid>https://dev.to/embedthis/sensedeep-dynamodb-data-browser-20gk</guid>
      <description>&lt;p&gt;The SenseDeep Data browser is a DynamoDB data browser and editor that can be used to query, manage and modify your DynamoDB data.&lt;/p&gt;

&lt;p&gt;Developers are adopting DynamoDB single-table design patterns as the preferred design model, where all application data is packed into a single table. Combining disparate items with different attributes into one table can make browsing, navigating, organizing and viewing data obscure and difficult.&lt;/p&gt;

&lt;p&gt;Managing single-table data and performance can often feel like you are peering at Assembly Language as it is hard to decode overridden keys and attributes manually. A new generation of tools is required.&lt;/p&gt;

&lt;p&gt;SenseDeep can understand your single-table designs and make sense of your data and present items as intuitive application entities instead of raw data. The SenseDeep data browser is single-table "aware". This means SenseDeep can transform raw data items to present as application entities and fields.&lt;/p&gt;

&lt;h2&gt;
  
  
  Single-Table Data Browser
&lt;/h2&gt;

&lt;p&gt;The SenseDeep data browser can query, manage and modify your table data. It supports browsing by scan, query or by single-table entities.&lt;/p&gt;

&lt;p&gt;While the data browser can be used to browse and manage any DynamoDB table, it is turbo-charged when a schema describing your data is applied.&lt;/p&gt;

&lt;p&gt;A schema describes your application entities and their attributes. It specifies exactly how your data should be interpreted and what data is valid to store in the database table. You can import a schema from a JSON file (such as a OneTable schema) or you can define a schema using the SenseDeep single-table designer.&lt;/p&gt;

&lt;p&gt;When armed with a schema, SenseDeep is able to "understand" your data and organize and present data items as application entities instead of raw, encoded DynamoDB items. The SenseDeep data browser intelligently displays and formats data according to your single-table schema. This not only makes your encoded single-table data easier to understand, but it also guides and validates your changes and prevents schema-breaking and application-breaking data changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45ax37wqmxichpw1jyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45ax37wqmxichpw1jyq.png" alt="Table Browser" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data browser supports DynamoDB scans, native queries or queries by application entity. Queries by application entity are normal DynamoDB queries, but SenseDeep understands which attributes are required to select and filter your data items.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scanning
&lt;/h3&gt;

&lt;p&gt;When scanning, select the index to scan and provide optional additional filtering attributes. Attributes can be combined using &lt;code&gt;AND&lt;/code&gt; or &lt;code&gt;OR&lt;/code&gt; operators. SenseDeep translates these instructions into DynamoDB scan commands. You can click on the &lt;code&gt;Command&lt;/code&gt; button to see the generated DynamoDB command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Querying
&lt;/h3&gt;

&lt;p&gt;When querying, specify the index and partition key with optional sort key value. Additional filters may also be specified.&lt;/p&gt;

&lt;p&gt;You can use sort key operations to select a single matching item with "Equal", or one or more items with the other sort key operators from the pull-down menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb2h10zil6fvdxxdiwqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb2h10zil6fvdxxdiwqy.png" alt="Table Query by Query" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Query By Entity
&lt;/h3&gt;

&lt;p&gt;When querying by Entity, you select the desired entity model and SenseDeep will select the appropriate attribute filters for that model.&lt;/p&gt;

&lt;p&gt;SenseDeep understands the schema for the selected model and which attributes are required to retrieve specific entity items. SenseDeep will guide you with the list of attributes for that entity model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q9m3njnyk4zi4fomq4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q9m3njnyk4zi4fomq4j.png" alt="Table Query by Entity" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Saving Queries
&lt;/h2&gt;

&lt;p&gt;Queries may be saved to your database table, where they are persisted in the schema. The schema, stored in the table, contains the entity definitions, saved queries and modeling data. This makes your table self-describing for third-party tools.&lt;/p&gt;

&lt;p&gt;You can load and delete queries using the &lt;code&gt;Queries&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10uiju9exmry7p9emq2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10uiju9exmry7p9emq2o.png" alt="Table Query" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Visualization
&lt;/h2&gt;

&lt;p&gt;SenseDeep groups, organizes and color-codes query results for maximum clarity.&lt;/p&gt;

&lt;p&gt;Cells are color coded:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cell is green if it can be edited. Click on the cell to modify the value (inline).&lt;/li&gt;
&lt;li&gt;A cell is pink if the cell's value is derived from a computed value template using other attributes as ingredients.&lt;/li&gt;
&lt;li&gt;A cell has a wavy gray background if it is not relevant for this item (as defined by the schema).&lt;/li&gt;
&lt;li&gt;A cell has a blue underline hot-link if it is a reference to another item and can be clicked to jump to the referenced item.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrf7bdf5z1t8k08a3wi0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrf7bdf5z1t8k08a3wi0.png" alt="Table Cells" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Columns are ordered with the index partition and sort key first, followed by the schema entity type attribute (if defined). After that, columns for all items are ordered alphabetically. If an item has many attributes, click the edit icon to display the item attributes vertically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hot Linked Items
&lt;/h3&gt;

&lt;p&gt;Databases like DynamoDB have relationships between items. Just because it is a NoSQL database does not mean there are no relationships. It just means there is little support for joining tables, enforced foreign keys and data relationship integrity.&lt;/p&gt;

&lt;p&gt;SenseDeep and &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt; bridge this gap by defining relationships between entities in the schema. SenseDeep will interpret these relationship links and highlight those in the browser with blue underlining.&lt;/p&gt;

&lt;p&gt;You can quickly traverse related items by clicking on a blue hot link in the query results.&lt;/p&gt;

&lt;p&gt;SenseDeep defines relationships between entity items using the schema "Reference" field property. This property specifies that a field refers to another entity item and which attributes are required to uniquely identify the target item.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztt0290f8cyvkyztebsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztt0290f8cyvkyztebsg.png" alt="Table Design Hot Link" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Hot Link References
&lt;/h3&gt;

&lt;p&gt;The format of the "Reference" field is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Model:index:attribute-1&lt;span class="o"&gt;=&lt;/span&gt;source-attribute-1,...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, the following Reference defines a link to an Account item:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Account:primary:id&lt;span class="o"&gt;=&lt;/span&gt;accountId
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means: select the Account entity using the "primary" index and the "Account.id" attribute using the value of "accountId" from this item.&lt;/p&gt;
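&lt;p&gt;A reference string of this shape can be split into its parts like so (an illustrative parser, not SenseDeep's own code):&lt;/p&gt;

```javascript
// Parse a schema Reference of the form
// "Model:index:attr1=source1,attr2=source2" into its components.
function parseReference(ref) {
    const [model, index, pairs] = ref.split(':')
    const attributes = {}
    for (const pair of pairs.split(',')) {
        const [target, source] = pair.split('=')
        attributes[target] = source
    }
    return {model, index, attributes}
}
```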

&lt;p&gt;If you control-click (or Cmd-Click) on a hot-link, SenseDeep uses the reference to determine the relevant query to locate the item and automatically fills in the index, key and filter attributes. Queries are entered into your browser history so you can click the browser &lt;code&gt;Back&lt;/code&gt; button to jump back to the original item.&lt;/p&gt;

&lt;p&gt;You can also save these hot-linked queries using &lt;code&gt;Queries -&amp;gt; Save&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Show Templates
&lt;/h3&gt;

&lt;p&gt;An essential single-table design technique is to decouple your keys from regular data attributes. This greatly enhances your ability to evolve your DynamoDB data over time. SenseDeep and OneTable support this technique via &lt;code&gt;value templates&lt;/code&gt;, where key values are composed at runtime from the values of other attributes.&lt;/p&gt;
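As a rough sketch of how a value template composes a key at runtime, consider a template like `'account#${accountId}'`. The `expandTemplate` helper below is illustrative only, not the actual OneTable implementation.

```javascript
// Expand a OneTable-style value template such as 'account#${accountId}'
// using attribute values from an item. Illustrative sketch only.
function expandTemplate(template, item) {
  return template.replace(/\$\{(\w+)\}/g, (match, name) => item[name]);
}

const item = {accountId: 'acc-42', userId: 'usr-7'};
const pk = expandTemplate('account#${accountId}', item);  // 'account#acc-42'
const sk = expandTemplate('user#${userId}', item);        // 'user#usr-7'
```

Because the keys are computed from regular attributes, you can later change the key layout by editing the template, without touching the application data itself.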

&lt;p&gt;It is often useful to view the value templates rather than the calculated key values. The &lt;code&gt;Templates&lt;/code&gt; toggle above the table switches between displaying the templates and the actual data values.&lt;/p&gt;

&lt;p&gt;Similarly, there may be some attributes that are designated as &lt;code&gt;hidden&lt;/code&gt;. Changing the hidden toggle will display or hide these attributes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Menu
&lt;/h3&gt;

&lt;p&gt;In any data cell, you can right-click to display the context menu for additional command options.&lt;/p&gt;

&lt;p&gt;The options are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Copy&lt;/code&gt; to copy the current item to the clipboard in JSON format.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Design Schema&lt;/code&gt; will jump to the SenseDeep single-table designer for the entity on which the current item is based.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Edit&lt;/code&gt; will open the editor slide-out panel for easy editing of the whole item.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Follow Reference&lt;/code&gt; behaves the same as Cmd-Click to follow a hot link reference.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Generate Mock Data&lt;/code&gt; can be used during development to generate sample data, such as email addresses or phone numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6h8kzxdgfzfk3d8s388.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6h8kzxdgfzfk3d8s388.png" alt="Table Context" width="800" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Modifying Data
&lt;/h3&gt;

&lt;p&gt;You can modify data inline by clicking on any &lt;code&gt;green&lt;/code&gt; cell. Once modified, the &lt;code&gt;Save&lt;/code&gt; button will be displayed above the items to persist the changes to the table.&lt;/p&gt;

&lt;p&gt;To add a new item, click the &lt;code&gt;Add Item&lt;/code&gt; button. You can select the desired entity model (if a schema is present) and it will intelligently prompt you for the appropriate attributes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0oo8xfrj77zekhnf37f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0oo8xfrj77zekhnf37f.png" alt="Table Inline Edit" width="636" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Modify Panel
&lt;/h2&gt;

&lt;p&gt;You can also edit by clicking on the &lt;code&gt;Edit&lt;/code&gt; pencil icon at the start of each item. This opens a slide-out editor panel that displays only the attributes of the item, organized vertically.&lt;/p&gt;

&lt;p&gt;The red cells have their value derived from other attributes and cannot be edited. You can modify the value template in the schema via the SenseDeep single-table designer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forbhmxdo645comy0by1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forbhmxdo645comy0by1x.png" alt="Table Edit" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you change the partition key values, either directly or indirectly by changing attributes that are used in a key value template, SenseDeep will atomically remove the old item and create the new item via a DynamoDB transaction.&lt;/p&gt;

&lt;p&gt;When you click save, the changes are accepted, but you still must click the Save button on the query page to persist results to the table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Import / Export
&lt;/h2&gt;

&lt;p&gt;No man is an island and your data must be easy to export or import. SenseDeep provides an "Export to AWS Workbench" option that exports your schema and data items into a Workbench model. This model can also be imported by the Dynobase app.&lt;/p&gt;

&lt;p&gt;When exporting, you should limit the amount of data you export; Workbench models are intended for development and are designed for limited data sets. You can also export in a JSON backup format.&lt;/p&gt;

&lt;p&gt;When importing, you can import a model to an existing (empty) table, or you can dynamically create a new table to hold the imported model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c6od2fh0ma1hxbk0jay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c6od2fh0ma1hxbk0jay.png" alt="Table Import" width="574" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Gaining insight into single-table design patterns is the new frontier for DynamoDB and the SenseDeep DynamoDB Studio is the start of a new wave of tools to elevate and transform DynamoDB development.&lt;/p&gt;

&lt;p&gt;Previously, single-table design with DynamoDB was a black box and it was difficult to peer inside and see how the components of your apps are operating and interacting. Now, SenseDeep can understand your data schema and can transform raw DynamoDB data to highlight your application entities and relationships and transform your effectiveness with DynamoDB.&lt;/p&gt;

&lt;p&gt;SenseDeep includes a &lt;em&gt;table manager&lt;/em&gt;, &lt;em&gt;data item browser&lt;/em&gt;, &lt;em&gt;single-table designer&lt;/em&gt;, &lt;em&gt;provisioning planner&lt;/em&gt;, &lt;em&gt;database migration manager&lt;/em&gt; and in-depth table &lt;em&gt;metrics&lt;/em&gt; — all of which are single-table aware.&lt;/p&gt;

&lt;h2&gt;
  
  
  More?
&lt;/h2&gt;

&lt;p&gt;Try the SenseDeep DynamoDB studio with a free developer license at &lt;a href="https://app.sensedeep.com" rel="noopener noreferrer"&gt;SenseDeep App&lt;/a&gt; or learn more at &lt;a href="https://www.sensedeep.com" rel="noopener noreferrer"&gt;https://www.sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You may also like to read:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/series/dynamodb-studio/data-browser.html" rel="noopener noreferrer"&gt;SenseDeep DynamoDB Data Browser&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/series/dynamodb-studio/single-table-designer.html" rel="noopener noreferrer"&gt;SenseDeep Single Table Designer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/series/dynamodb-studio/migration-manager.html" rel="noopener noreferrer"&gt;SenseDeep Migration Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/series/dynamodb-studio/provisioning-planner.html" rel="noopener noreferrer"&gt;SenseDeep Provisioning Planner&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/series/dynamodb-studio/metrics.html" rel="noopener noreferrer"&gt;SenseDeep DynamoDB Metrics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/stories/dynamodb-studio.html" rel="noopener noreferrer"&gt;SenseDeep DynamoDB Studio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/2020/dynamodb-onetable.html" rel="noopener noreferrer"&gt;DynamoDB OneTable&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  About SenseDeep
&lt;/h2&gt;

&lt;p&gt;SenseDeep is an observability platform for AWS developers to accelerate the delivery and maintenance of serverless applications.&lt;/p&gt;

&lt;p&gt;SenseDeep helps developers through the entire lifecycle to create observable, reliable and maintainable apps via an integrated serverless developer studio that includes deep insights into how your apps are performing.&lt;/p&gt;

&lt;p&gt;To try SenseDeep, navigate your browser to: &lt;a href="https://app.sensedeep.com/" rel="noopener noreferrer"&gt;https://app.sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To learn more about SenseDeep please see: &lt;a href="https://www.sensedeep.com/product/" rel="noopener noreferrer"&gt;https://www.sensedeep.com/product&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Please let us know what you think; we thrive on feedback: &lt;a href="mailto:dev@sensedeep.com"&gt;dev@sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep Web Site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep App&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>singletable</category>
    </item>
    <item>
      <title>SenseDeep DynamoDB Studio</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Fri, 29 Oct 2021 06:20:18 +0000</pubDate>
      <link>https://dev.to/embedthis/sensedeep-dynamodb-studio-3dl4</link>
      <guid>https://dev.to/embedthis/sensedeep-dynamodb-studio-3dl4</guid>
      <description>&lt;h2&gt;
  
  
  Introducing the SenseDeep DynamoDB Studio.
&lt;/h2&gt;

&lt;p&gt;SenseDeep has a complete DynamoDB developer studio to support your DynamoDB designs and single-table development.&lt;/p&gt;

&lt;p&gt;It includes a &lt;em&gt;table manager&lt;/em&gt;, &lt;em&gt;data item browser&lt;/em&gt;, &lt;em&gt;single-table designer&lt;/em&gt;, &lt;em&gt;provisioning planner&lt;/em&gt;, &lt;em&gt;database migration manager&lt;/em&gt; and in-depth table &lt;em&gt;metrics&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;SenseDeep DynamoDB studio is a comprehensive set of DynamoDB tools that are single-table "aware". This means SenseDeep can understand your single-table designs and application entity data and can guide your design, queries and monitoring based on this deeper understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;DynamoDB best practices are evolving quickly as developers realize how to exploit the power behind the deceptively simple DynamoDB design. Designs with single-table design patterns, key overloading and composition, sparse indexes, query optimization and more powerful single-table access libraries such as &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt; are becoming commonplace.&lt;/p&gt;

&lt;p&gt;However, managing single-table data and performance can often feel like you are peering at assembly language. Packing disparate data items into a single table can make navigating, organizing and viewing data difficult. Furthermore, single-table design techniques such as prefixed and mapped attribute names exacerbate this problem and can make interpreting keys tough.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;New tools are needed that &lt;strong&gt;understand&lt;/strong&gt; the single-table schema and its relationships. These tools should support schema creation and be able to present and organize your data logically according to your application entities.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Data Browser
&lt;/h2&gt;

&lt;p&gt;The SenseDeep data browser can query, manage and modify your table data. It supports browsing by scan, query or by single-table entities.&lt;/p&gt;

&lt;p&gt;While the data browser can be used to browse and manage any DynamoDB table, it is turbo-charged when a schema describing the data is imported (such as from a OneTable schema) or defined using the SenseDeep single-table designer. When using a data schema, SenseDeep is able to "understand" your data and organize and present data items as application entities instead of raw, encoded DynamoDB items.&lt;/p&gt;

&lt;p&gt;The SenseDeep data browser intelligently displays and formats data according to your single-table schema. This not only makes your encoded single-table data easier to understand, but also guides and validates your changes and prevents schema-breaking and application-breaking data changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45ax37wqmxichpw1jyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc45ax37wqmxichpw1jyq.png" alt="Table Browse" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Queries
&lt;/h3&gt;

&lt;p&gt;You can perform DynamoDB scans, native queries or queries by application entity.&lt;/p&gt;

&lt;p&gt;When scanning, you select the index to scan and provide optional additional filtering attributes. Attributes can be combined in an &lt;code&gt;AND&lt;/code&gt; or &lt;code&gt;OR&lt;/code&gt; expression.&lt;/p&gt;

&lt;p&gt;When querying, you specify the index and partition key with optional sort key value. Additional filters may also be specified.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10uiju9exmry7p9emq2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10uiju9exmry7p9emq2o.png" alt="Table Query" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When querying by Entity, you specify the index and application entity model name that is defined in the schema. SenseDeep then intelligently prompts you for the required attributes that comprise the keys to retrieve the data items.&lt;/p&gt;

&lt;h3&gt;
  
  
  Saving Queries
&lt;/h3&gt;

&lt;p&gt;Queries may be saved to your database table where they are persisted in the schema. You can load and delete queries using the &lt;code&gt;Queries&lt;/code&gt; button.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Visualization
&lt;/h3&gt;

&lt;p&gt;SenseDeep groups, organizes and color-codes query results for maximum clarity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrf7bdf5z1t8k08a3wi0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrf7bdf5z1t8k08a3wi0.png" alt="Table Cells" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Columns are ordered with the index partition and sort key first, followed by the schema entity type attribute (if defined). After that, columns for all items are ordered alphabetically. If an item has many attributes, click the edit icon to display the item attributes vertically.&lt;/p&gt;

&lt;p&gt;Cells are color coded:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cell is green if it can be edited. Click on the cell to modify the value.&lt;/li&gt;
&lt;li&gt;A cell is pink if its value is derived from a schema value template.&lt;/li&gt;
&lt;li&gt;A cell has a wavy gray background if it is not relevant for this item (as defined by the schema).&lt;/li&gt;
&lt;li&gt;A cell has a blue underlined hot link if it is a reference to another item and can be clicked to jump to the referenced item.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Hot Linked Items
&lt;/h4&gt;

&lt;p&gt;If you control-click (or Cmd-Click) on a hot-link, you can quickly traverse related items. SenseDeep determines the relevant query to locate the item and automatically fills the index, key and filter attributes accordingly. You can save these queries using &lt;code&gt;Queries -&amp;gt; Save&lt;/code&gt;. Queries are entered into your browser history so you can click the browser &lt;code&gt;Back&lt;/code&gt; button to easily jump backwards to the original query.&lt;/p&gt;

&lt;h4&gt;
  
  
  Show Templates
&lt;/h4&gt;

&lt;p&gt;A useful single-table design technique is to compose key values using templates that combine the values of other attributes at runtime to calculate the key values. It is often useful to view the value templates vs the calculated key values. The &lt;code&gt;Templates&lt;/code&gt; toggle above the table switches between displaying the template values vs the actual data values.&lt;/p&gt;

&lt;p&gt;Similarly, there may be some attributes that are designated as &lt;code&gt;hidden&lt;/code&gt;. Changing the hidden toggle will display or hide these attributes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Context Menu
&lt;/h4&gt;

&lt;p&gt;On data cells, you can right-click to display the context menu. The options are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Copy&lt;/code&gt; to copy the current item to the clipboard in JSON format.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Design Schema&lt;/code&gt; will jump to the SenseDeep single-table designer for the entity on which the current item is based.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Edit&lt;/code&gt; will open the editor slide-out panel for easy editing of the whole item.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Follow Reference&lt;/code&gt; behaves the same as Cmd-Click to follow a hot link reference.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Generate Mock Data&lt;/code&gt; can be used during development to generate sample data, such as email addresses or phone numbers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6h8kzxdgfzfk3d8s388.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6h8kzxdgfzfk3d8s388.png" alt="Table Context" width="800" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Modifying Data
&lt;/h3&gt;

&lt;p&gt;You can modify data inline by clicking on any &lt;code&gt;green&lt;/code&gt; cell. Once modified, the &lt;code&gt;Save&lt;/code&gt; button will be displayed above the items to persist the changes to the table.&lt;/p&gt;

&lt;p&gt;To add a new item, click the &lt;code&gt;Add Item&lt;/code&gt; button. You can select the desired entity model (if a schema is present) and it will intelligently prompt you for the appropriate attributes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0oo8xfrj77zekhnf37f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn0oo8xfrj77zekhnf37f.png" alt="Table Inline" width="636" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also edit by clicking on the &lt;code&gt;Edit&lt;/code&gt; pencil icon at the start of each item. This opens a slide-out editor panel that displays only the attributes of the item, organized vertically. Click on a green cell to modify the contents. The red cells have their value derived from other attributes and cannot be edited. You can modify the value template in the schema via the SenseDeep single-table designer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forbhmxdo645comy0by1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Forbhmxdo645comy0by1x.png" alt="Table Edit" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you change the partition key values, either directly or indirectly by changing attributes that are used in a key value template, SenseDeep will atomically remove the old item and create the new item via a DynamoDB transaction.&lt;/p&gt;

&lt;p&gt;When you click save, the changes are accepted, but you still must click the Save button on the query page to persist results to the table.&lt;/p&gt;

&lt;h3&gt;
  
  
  Import / Export
&lt;/h3&gt;

&lt;p&gt;No man is an island and your data must be easy to export or import. SenseDeep provides an "Export to AWS Workbench" option that exports your schema and data items into a Workbench model. This model can also be imported by the Dynobase tool.&lt;/p&gt;

&lt;p&gt;When exporting, you should limit the amount of data you export; Workbench models are intended for development and are designed for limited data sets. You can also export in a JSON backup format.&lt;/p&gt;

&lt;p&gt;When importing, you can import a model to an existing (empty) table, or you can dynamically create a new table to hold the imported model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c6od2fh0ma1hxbk0jay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0c6od2fh0ma1hxbk0jay.png" alt="Table Import" width="574" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Single Table Designer
&lt;/h2&gt;

&lt;p&gt;The SenseDeep Single Table Designer provides an easy, intuitive interface to create, manage and modify your single-table schemas.&lt;/p&gt;

&lt;p&gt;Single-table schemas control how your application data are stored in DynamoDB tables. Schemas define your application entities, table indexes and key parameters. Via schemas, complex mapped table items can be more clearly and reliably accessed, modified and presented.&lt;/p&gt;

&lt;p&gt;The designer stores your single-table schemas in your table. In this manner, your table is self-describing as to how table data should be interpreted. You can export schemas and generate JSON, JavaScript or TypeScript data/code to import into your apps.&lt;/p&gt;

&lt;p&gt;The SenseDeep single-table designer works best with the &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt; library, but should work with any consistent single-table design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kj0vvg71hyu3b1hyxck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kj0vvg71hyu3b1hyxck.png" alt="Table Design" width="800" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Naming and Versioning
&lt;/h2&gt;

&lt;p&gt;Schemas are named and versioned. The default schema is called the &lt;code&gt;Current&lt;/code&gt; schema. Other schemas can have any name of your choosing, for example "Prototype".&lt;/p&gt;

&lt;p&gt;Schemas have version numbers so that data migrations can utilize the correct versioned schema when upgrading or downgrading data items in your table. The migration manager will select and apply the correct versioned schema when data migrations are run.&lt;/p&gt;

&lt;p&gt;Version numbers use &lt;a href="https://semver.org/" rel="noopener noreferrer"&gt;Semantic Versioning&lt;/a&gt; to indicate and control data compatibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Entity Models
&lt;/h2&gt;

&lt;p&gt;An entity model defines the valid set of attributes for an application entity. Each attribute has a defined name, type and a set of properties that may be utilized by your DynamoDB access library such as &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt; or an ORM of your choosing.&lt;/p&gt;
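As a sketch of the idea, an entity model can be pictured as a plain object mapping attribute names to their properties. The Account model below and the `missingRequired` helper are illustrative examples, not an exact SenseDeep or OneTable schema export.

```javascript
// A sketch of how an entity model might be declared in a OneTable-style
// schema. The Account model and its properties are illustrative examples.
const AccountModel = {
  pk:     {type: 'string', value: 'account#${id}'},  // composed via value template
  sk:     {type: 'string', value: 'account#'},
  id:     {type: 'string', required: true},
  name:   {type: 'string', required: true},
  status: {type: 'string', enum: ['active', 'suspended'], default: 'active'},
  email:  {type: 'string', validate: /^[^@]+@[^@]+$/},
};

// Minimal required-attribute check, as an access library might run on create
function missingRequired(model, item) {
  return Object.keys(model).filter(
    (name) => model[name].required && item[name] === undefined
  );
}

const missing = missingRequired(AccountModel, {id: 'acc-42'});  // ['name']
```

An access library uses these per-attribute properties to validate writes, apply defaults and compose keys before items ever reach the table.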

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw7dq11jchu0lh5ygcxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgw7dq11jchu0lh5ygcxo.png" alt="Table Fields" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the list of entity models, you can click on an entity to modify it, or click &lt;code&gt;Add Model&lt;/code&gt; to add a new entity model.&lt;/p&gt;

&lt;p&gt;Via the &lt;code&gt;Edit Schema&lt;/code&gt; button you can modify the schema name or version. If the schema is a non-current schema, you can also click &lt;code&gt;Apply to Current&lt;/code&gt; to apply the contents of the displayed schema to the saved &lt;code&gt;Current&lt;/code&gt; schema in the table. This overwrites the previous &lt;code&gt;Current&lt;/code&gt; schema.&lt;/p&gt;

&lt;h2&gt;
  
  
  Entity Fields
&lt;/h2&gt;

&lt;p&gt;Clicking on an entity attribute will display a slide out panel to edit the properties of that attribute.&lt;/p&gt;

&lt;p&gt;You can modify the type and any other properties of the attribute. These properties are defined in the schema and may (or may not) be implemented by your DynamoDB access library. All of these properties are implemented by OneTable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf56lseriqhamjd6dami.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf56lseriqhamjd6dami.png" alt="Table Field Edit" width="676" height="788"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The properties include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name — the name of the attribute.&lt;/li&gt;
&lt;li&gt;Default Value — a default value to set if not defined when created.&lt;/li&gt;
&lt;li&gt;Enumerated values — a list of possible values for the attribute.&lt;/li&gt;
&lt;li&gt;Encrypt — a hint that the attribute should have an extra layer of encryption.&lt;/li&gt;
&lt;li&gt;Filter — indicates the attribute can be used in filter expressions.&lt;/li&gt;
&lt;li&gt;Hidden — the attribute should be hidden in query results by default.&lt;/li&gt;
&lt;li&gt;Mapped Name — a shorter real attribute name when storing in the table.&lt;/li&gt;
&lt;li&gt;Nulls — indicates that nulls should be stored in the table vs being removed.&lt;/li&gt;
&lt;li&gt;Required — the attribute is required to be present when created.&lt;/li&gt;
&lt;li&gt;Data Type — the attribute data type.&lt;/li&gt;
&lt;li&gt;Validation Expression — a regular expression to validate all writes to the table.&lt;/li&gt;
&lt;li&gt;Value template — a JavaScript value template to compose the attribute based on other attribute values.&lt;/li&gt;
&lt;li&gt;Unique — indicates the attribute must always hold a unique value over all such items.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Export
&lt;/h2&gt;

&lt;p&gt;You can also export your schema in JavaScript, TypeScript or JSON formats via the &lt;code&gt;Export Schema&lt;/code&gt; button from the Models list. You can utilize the exported file directly in your &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt; apps to control your database interactions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning Planner
&lt;/h2&gt;

&lt;p&gt;The DynamoDB Provisioning page displays your current and projected provisioning, utilization and costs.&lt;/p&gt;

&lt;p&gt;While there are various DynamoDB calculators, they are "theoretical" and not based on actual data usage. The SenseDeep provisioning planner displays the actual costs of your current billing plan based on real, live data. It then compares this usage with the cost of an alternate plan if you were to switch your billing plan from OnDemand to Provisioned or vice versa.&lt;/p&gt;
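As a rough illustration of the comparison the planner performs, consider pricing a steady workload under both billing modes. The rates below are illustrative approximations of historical us-east-1 pricing and change over time; check current AWS pricing before drawing conclusions.

```javascript
// Compare DynamoDB on-demand vs provisioned monthly cost for a steady
// workload. Rates are illustrative approximations, not current AWS pricing.
const HOURS_PER_MONTH = 730;

function onDemandCost(readsPerMonth, writesPerMonth) {
  // ~$0.25 per million read request units, ~$1.25 per million write request units
  return (readsPerMonth / 1e6) * 0.25 + (writesPerMonth / 1e6) * 1.25;
}

function provisionedCost(rcu, wcu) {
  // ~$0.00013 per RCU-hour, ~$0.00065 per WCU-hour
  return HOURS_PER_MONTH * (rcu * 0.00013 + wcu * 0.00065);
}

// A steady 50 reads/sec and 10 writes/sec, all month:
const reads = 50 * 3600 * HOURS_PER_MONTH;   // ~131.4M reads/month
const writes = 10 * 3600 * HOURS_PER_MONTH;  // ~26.28M writes/month

const onDemand = onDemandCost(reads, writes);  // ~$65.70/month
const fixed = provisionedCost(50, 10);         // ~$9.49/month
```

For steady traffic like this, provisioned capacity is far cheaper; for spiky or idle workloads the comparison can flip, which is why basing the decision on real usage data matters.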

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraqdcubqztyuch80acvt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fraqdcubqztyuch80acvt.png" alt="Table Provisioning" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Database Migration Manager
&lt;/h2&gt;

&lt;p&gt;The SenseDeep database migration manager provides a controlled way to upgrade or downgrade your table data.&lt;/p&gt;

&lt;p&gt;It displays which migrations have been applied to your data and which are outstanding, and shows the migration version of your current data and schema. You can upgrade by running outstanding migrations or downgrade by reversing them.&lt;/p&gt;

&lt;p&gt;SenseDeep can orchestrate DynamoDB migrations controlled by the &lt;a href="https://www.npmjs.com/package/onetable-migrate" rel="noopener noreferrer"&gt;OneTable Migration&lt;/a&gt; library. You can use this library even if you are not using OneTable for your apps. Check out the sample &lt;a href="https://github.com/sensedeep/onetable-controller" rel="noopener noreferrer"&gt;OneTable Controller&lt;/a&gt; on GitHub and deploy it to host and manage your data migrations.&lt;/p&gt;

&lt;p&gt;Read &lt;a href="https://www.sensedeep.com/blog/stories/onetable-controller.html" rel="noopener noreferrer"&gt; Configuring OneTable Migrate&lt;/a&gt; for more information and setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu81xt3l3yqulfj3hfif1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu81xt3l3yqulfj3hfif1.png" alt="Table Migrate" width="800" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table Metrics
&lt;/h2&gt;

&lt;p&gt;SenseDeep provides both standard AWS DynamoDB metrics and enhanced single-table metrics.&lt;/p&gt;

&lt;p&gt;The standard DynamoDB metrics display overview metrics and operation details for a whole DynamoDB table. These metrics come from the standard AWS CloudWatch DynamoDB metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5u4qfsut2plu4p5h9bi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5u4qfsut2plu4p5h9bi.png" alt="Table Standard Metrics" width="800" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DynamoDB single-table designs are more complex and require per-entity/model performance monitoring and metrics. Traditional monitoring covers only table-level and operation-level metrics. What is missing is the ability to see single-table entities and their performance and load on the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnrs8b56rf8u1dk3skk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnrs8b56rf8u1dk3skk.png" alt="Table Single Table Metrics" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SenseDeep provides single-table metrics for apps using &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;DynamoDB OneTable&lt;/a&gt; or for any JavaScript app that utilizes &lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table Management
&lt;/h2&gt;

&lt;p&gt;SenseDeep can manage the DynamoDB tables for your enabled clouds. You can create and delete tables and indexes for your tables.&lt;/p&gt;

&lt;p&gt;SenseDeep will automatically discover your tables and will dynamically update this list as new tables are created or destroyed. The table list includes the table size, number of items, billing scheme and provisioned capacity.&lt;/p&gt;

&lt;p&gt;The SenseDeep table management is not meant to replace appropriate "infrastructure-as-code" deployment of tables to production. Rather, it intends to provide a quick and easy way to create and manage tables and indexes while developing your DynamoDB applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbq0nawmi12ypxzhzb4fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbq0nawmi12ypxzhzb4fa.png" alt="Table Management" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Gaining insight into single-table design patterns is the new frontier for DynamoDB and the SenseDeep DynamoDB Studio is the start of a new wave of tools to elevate and transform DynamoDB development.&lt;/p&gt;

&lt;p&gt;Previously, single-table design with DynamoDB was a black box and it was difficult to peer inside and see how the components of your apps are operating and interacting. Now, SenseDeep can understand your data schema and can transform raw DynamoDB data to highlight your application entities and relationships and transform your effectiveness with DynamoDB.&lt;/p&gt;

&lt;p&gt;SenseDeep includes a &lt;em&gt;table manager&lt;/em&gt;, &lt;em&gt;data item browser&lt;/em&gt;, &lt;em&gt;single-table designer&lt;/em&gt;, &lt;em&gt;provisioning planner&lt;/em&gt;, &lt;em&gt;database migration manager&lt;/em&gt; and in-depth table &lt;em&gt;metrics&lt;/em&gt; — all of which are single-table aware.&lt;/p&gt;

&lt;h2&gt;
  
  
  More?
&lt;/h2&gt;

&lt;p&gt;Try the SenseDeep DynamoDB studio with a free developer license at &lt;a href="https://app.sensedeep.com" rel="noopener noreferrer"&gt;SenseDeep App&lt;/a&gt; or learn more at &lt;a href="https://www.sensedeep.com" rel="noopener noreferrer"&gt;https://www.sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You may also like to read:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/2021/dynamodb-singletable-design.html" rel="noopener noreferrer"&gt;Data Modeling with single Table Designs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/2020/dynamodb-onetable.html" rel="noopener noreferrer"&gt;DynamoDB OneTable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/2021/dynamodb-schemas.html" rel="noopener noreferrer"&gt;DynamoDB with OneTable Schemas&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  About SenseDeep
&lt;/h2&gt;

&lt;p&gt;SenseDeep is an observability platform for AWS developers to accelerate the delivery and maintenance of serverless applications.&lt;/p&gt;

&lt;p&gt;SenseDeep helps developers through the entire lifecycle to create observable, reliable and maintainable apps via an integrated serverless developer studio that includes deep insights into how your apps are performing.&lt;/p&gt;

&lt;p&gt;To try SenseDeep, navigate your browser to: &lt;a href="https://app.sensedeep.com/" rel="noopener noreferrer"&gt;https://app.sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To learn more about SenseDeep please see: &lt;a href="https://www.sensedeep.com/product/" rel="noopener noreferrer"&gt;https://www.sensedeep.com/product&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Please let us know what you think; we thrive on feedback: &lt;a href="mailto:dev@sensedeep.com"&gt;dev@sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep Web Site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep App&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>observability</category>
    </item>
    <item>
      <title>Understanding your DynamoDB Single Table Performance</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Thu, 16 Sep 2021 07:54:03 +0000</pubDate>
      <link>https://dev.to/embedthis/understanding-your-dynamodb-single-table-performance-58ab</link>
      <guid>https://dev.to/embedthis/understanding-your-dynamodb-single-table-performance-58ab</guid>
      <description>&lt;p&gt;Best practices for DynamoDB have evolved to favor single-table design patterns where one database table serves the entire application and holds multiple different application entities.&lt;/p&gt;

&lt;p&gt;This design pattern offers greater performance by reducing the number of requests required to retrieve information and lowers operational overhead. It also greatly simplifies the changing and evolving of your DynamoDB designs by uncoupling the entity key fields and attributes from the physical table structure.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;However, managing single-table data and performance can often feel like you are peering at &lt;code&gt;Assembly Language&lt;/code&gt;. Composite keys with prefixed and mapped attribute names are single-table design techniques but they can make just reading a single-table item quite difficult.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What is needed are tools that "understand" the single-table schema and can present and organize your data logically according to your application entities.&lt;/p&gt;

&lt;p&gt;To meet this need we've created the &lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt; library which calculates and emits detailed single-table performance metrics for DynamoDB.&lt;/p&gt;

&lt;p&gt;This post looks at our &lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt; and &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt; libraries and the &lt;a href="https://www.sensedeep.com" rel="noopener noreferrer"&gt;SenseDeep&lt;/a&gt; platform, which understand your single-table design schema and can create and graphically present detailed metrics showing how your single-table designs are performing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Single Table Monitoring
&lt;/h2&gt;

&lt;p&gt;So what are the kinds of questions that &lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt; can answer?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which single-table entity/model is causing the most load and is consuming the most RCU or WCU?&lt;/li&gt;
&lt;li&gt;Which customer tenant is causing the most load and how much should they be billed?&lt;/li&gt;
&lt;li&gt;Which app or function is causing what percentage of load on DynamoDB and is consuming the most RCU or WCU?&lt;/li&gt;
&lt;li&gt;Which queries are the most inefficient (items vs scanned) and by which app or model?&lt;/li&gt;
&lt;li&gt;Which operations are being used the most?&lt;/li&gt;
&lt;li&gt;Which entity is performing scans or other operations?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions and others can be answered by using detailed metrics for DynamoDB that profile performance at an application entity/model level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnrs8b56rf8u1dk3skk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnrs8b56rf8u1dk3skk.png" alt="Single Table" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB Metrics Features
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt; library is an NPM module for Node applications that captures and emits detailed DynamoDB metrics. It has the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates detailed CloudWatch metrics for Tables, Indexes, Apps/Functions, Entities and DynamoDB operations.&lt;/li&gt;
&lt;li&gt;Emits metrics using CloudWatch EMF for zero-latency metric creation.&lt;/li&gt;
&lt;li&gt;Supports AWS V2 and V3 SDKs.&lt;/li&gt;
&lt;li&gt;Simple easy integration.&lt;/li&gt;
&lt;li&gt;Very low CPU and memory impact.&lt;/li&gt;
&lt;li&gt;Clean, readable small code base (&amp;lt;400 lines).&lt;/li&gt;
&lt;li&gt;Full TypeScript support.&lt;/li&gt;
&lt;li&gt;No dependencies.&lt;/li&gt;
&lt;li&gt;Optionally integrates with &lt;a href="https://www.npmjs.com/package/senselogs" rel="noopener noreferrer"&gt;SenseLogs the Serverless Logger&lt;/a&gt; for dynamic control of metrics.&lt;/li&gt;
&lt;li&gt;Supported by the free &lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep Developer Plan&lt;/a&gt; for graphical DynamoDB single-table monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Metrics Galore
&lt;/h2&gt;

&lt;p&gt;DynamoDB Metrics captures detailed statistics across six dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Table — Per table metrics&lt;/li&gt;
&lt;li&gt;Tenant — Per tenant metrics&lt;/li&gt;
&lt;li&gt;Source — Per application, module or function identification&lt;/li&gt;
&lt;li&gt;Index — Primary or global secondary index&lt;/li&gt;
&lt;li&gt;Model — Application single-table entity / model name&lt;/li&gt;
&lt;li&gt;Operation — DynamoDB low-level operation: GetItem, PutItem, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can drill down to see metrics aggregated by table, tenant, source, index, model or operation. This enables you to pinpoint exactly where performance issues may be lurking.&lt;/p&gt;

&lt;p&gt;For each of these dimension combinations, DynamoDB Metrics emits the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read — Read capacity units consumed&lt;/li&gt;
&lt;li&gt;write — Write capacity units consumed&lt;/li&gt;
&lt;li&gt;latency — Aggregated request latency in milliseconds&lt;/li&gt;
&lt;li&gt;count — Count of items returned&lt;/li&gt;
&lt;li&gt;scanned — Number of items scanned&lt;/li&gt;
&lt;li&gt;requests — Number of API requests issued&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these metrics, you can see precisely who is consuming read and write capacity, which requests are running long, and which requests are inefficient and scanning the table.&lt;/p&gt;
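&lt;p&gt;For instance, the &lt;code&gt;count&lt;/code&gt; and &lt;code&gt;scanned&lt;/code&gt; metrics together yield a simple efficiency ratio. This hypothetical helper is not part of the library, just an illustration of the arithmetic:&lt;/p&gt;

```javascript
// Query efficiency: the fraction of scanned items actually returned.
// A low ratio flags queries whose filters discard most of what they scan.
function queryEfficiency(count, scanned) {
    return scanned ? count / scanned : 1
}

queryEfficiency(10, 1000)   // 0.01: only 1% of scanned items were returned
```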

&lt;h2&gt;
  
  
  How to get DynamoDB Metrics
&lt;/h2&gt;

&lt;p&gt;There are two ways to get these wonderful single-table metrics for DynamoDB.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The &lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt; NPM library can be used by any Node application using DynamoDB. It is configured as AWS SDK middleware and efficiently captures request details with minimal overhead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Alternatively, you can use the &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt; library that has this support built-in and get all the other benefits of OneTable.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  DynamoDB Metrics
&lt;/h2&gt;

&lt;p&gt;To configure DynamoDB Metrics, load the library and pass your DynamoDB client instance to the Metrics constructor. The other parameters tell Metrics how to understand your index and key structure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Metrics&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dynamodb-metrics&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Metrics&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;indexes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;primary&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
    &lt;span class="na"&gt;separator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;#&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can read more about how to configure Metrics at &lt;a href="https://github.com/sensedeep/dynamodb-metrics/blob/main/README.md" rel="noopener noreferrer"&gt;DynamoDB Metrics README&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  OneTable Support
&lt;/h2&gt;

&lt;p&gt;To enable DynamoDB Metrics using OneTable, just add &lt;code&gt;metrics&lt;/code&gt; to your OneTable constructor and specify the name of your application or Lambda function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;acme:launcher&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;OneTable uses your defined OneTable schema to understand your key structure.&lt;/p&gt;
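&lt;p&gt;For example, a minimal OneTable schema declares the index key attributes and how each entity maps onto them. The attribute and model names below are illustrative, and the exact schema fields may vary by OneTable version:&lt;/p&gt;

```javascript
// Minimal OneTable schema sketch. The indexes section tells OneTable
// (and thus the metrics support) which attributes form the table keys,
// and each model defines how entity keys map onto those attributes.
const MySchema = {
    version: '0.0.1',
    indexes: {
        primary: {hash: 'pk', sort: 'sk'},
    },
    models: {
        User: {
            pk:   {type: String, value: 'user#${id}'},  // key template
            sk:   {type: String, value: 'user#'},
            id:   {type: String},
            name: {type: String},
        },
    },
}
```
&lt;p&gt;Because the key templates live in the schema rather than in your code, OneTable can attribute each request to its entity when emitting metrics.&lt;/p&gt;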

&lt;h2&gt;
  
  
  How to View DynamoDB Single Table Metrics
&lt;/h2&gt;

&lt;p&gt;You can view DynamoDB single-table metrics using CloudWatch or the SenseDeep Serverless Platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Viewing via SenseDeep
&lt;/h3&gt;

&lt;p&gt;SenseDeep has pre-configured dashboards and graphs to assist in visualizing your DynamoDB metrics. You can drill down and view metrics at the table, tenant, source, index, model or operation dimension level for any desired time period.&lt;/p&gt;

&lt;p&gt;It is easy to see which application or function is consuming read/write capacity and how your app data entities are using DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnrs8b56rf8u1dk3skk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vnrs8b56rf8u1dk3skk.png" alt="Single Table" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SenseDeep also provides intuitive capacity planning and provisioning assistance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sensedeep.com%2F%2Fimages%2Fsensedeep%2Ftable-provisioning.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.sensedeep.com%2F%2Fimages%2Fsensedeep%2Ftable-provisioning.png" alt="Provisioning Table" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWatch
&lt;/h2&gt;

&lt;p&gt;Using &lt;a href="https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#metricsV2:graph=~()" rel="noopener noreferrer"&gt;CloudWatch Metrics&lt;/a&gt;, you can see cards for the DynamoDB Metrics dimension combinations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowv9zi8wlyvqbefmv9ld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowv9zi8wlyvqbefmv9ld.png" alt="CloudWatch Dimensions" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The metrics are created under the &lt;code&gt;SingleTable/Metrics.1&lt;/code&gt; namespace. Clicking on a card provides a list of dimension combinations to graph.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3phjcuuljbf6ysnamwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz3phjcuuljbf6ysnamwv.png" alt="CloudWatch Metrics" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Under the Hood
&lt;/h2&gt;

&lt;p&gt;The DynamoDB Metrics library emits metrics using the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format_Specification.html" rel="noopener noreferrer"&gt;CloudWatch EMF&lt;/a&gt; log-based metrics format. Because EMF metrics are written as log records rather than via a blocking API call, metrics are created with zero latency and without impacting the performance of your Lambdas.&lt;/p&gt;
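&lt;p&gt;To make the EMF approach concrete, a metric record is just a JSON log line with an &lt;code&gt;_aws&lt;/code&gt; envelope that CloudWatch parses asynchronously. The record below is illustrative; the exact layout emitted by dynamodb-metrics may differ, though the namespace matches the one described above:&lt;/p&gt;

```javascript
// Illustrative CloudWatch EMF record: a single log line that CloudWatch
// converts into metrics after the fact, so the Lambda never blocks on a
// metrics API call. Field values here are hypothetical.
const emfRecord = {
    _aws: {
        Timestamp: Date.now(),
        CloudWatchMetrics: [{
            Namespace: 'SingleTable/Metrics.1',
            Dimensions: [['Table', 'Source', 'Model', 'Operation']],
            Metrics: [
                {Name: 'read', Unit: 'Count'},
                {Name: 'latency', Unit: 'Milliseconds'},
            ],
        }],
    },
    Table: 'MyTable',
    Source: 'acme:launcher',
    Model: 'User',
    Operation: 'GetItem',
    read: 2.5,
    latency: 14,
}

// Emitting the metric is just a console.log of the JSON record
console.log(JSON.stringify(emfRecord))
```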

&lt;p&gt;DynamoDB Metrics will only emit metrics for dimension combinations that are active. If you have many application entities and indexes, you may end up with a large number of metrics, and if your site uses all these dimensions actively, your CloudWatch costs may be high. AWS CloudWatch charges for each custom metric that is active, at a rate of $0.30 per metric per month (prorated hourly).&lt;/p&gt;

&lt;p&gt;If your CloudWatch costs are too high, you can minimize your charges by reducing the number of dimensions. The dimensions emitted can be modified via the &lt;code&gt;dimensions&lt;/code&gt; constructor property. Alternatively, you can dynamically enable and disable metrics via the LOG_FILTER parameter.&lt;/p&gt;

&lt;p&gt;DynamoDB Metrics are buffered and aggregated to minimize the load on your system. If a Lambda function is reclaimed by AWS Lambda, there may be a few metric requests that are not emitted before the function is reclaimed. This should be a very small percentage and should not significantly impact the quality of the metrics. You can control this buffering via the Metrics constructor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Gaining insight into single-table design patterns is the new frontier. Previously, single-table designs with DynamoDB have been a black box and it has been difficult to peer inside and see how the components of your design are operating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt; provides an easy way to instrument your code and gain these insights. SenseDeep provides a free developer plan so you can view and analyze these metrics with graphical dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  More?
&lt;/h2&gt;

&lt;p&gt;Download &lt;a href="https://www.npmjs.com/package/dynamodb-metrics" rel="noopener noreferrer"&gt;DynamoDB Metrics&lt;/a&gt; from NPM.&lt;/p&gt;

&lt;p&gt;For the most elegant way to create single-table designs, consider &lt;a href="https://www.npmjs.com/package/dynamodb-onetable" rel="noopener noreferrer"&gt;OneTable&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And you can get a free developer license for SenseDeep at &lt;a href="https://app.sensedeep.com" rel="noopener noreferrer"&gt;SenseDeep App&lt;/a&gt; or learn more at &lt;a href="https://www.sensedeep.com" rel="noopener noreferrer"&gt;https://www.sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You may also like to read:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/2021/dynamodb-singletable-design.html" rel="noopener noreferrer"&gt;Data Modeling with Single Table Designs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/2020/dynamodb-onetable.html" rel="noopener noreferrer"&gt;DynamoDB OneTable&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/blog/posts/2021/dynamodb-schemas.html" rel="noopener noreferrer"&gt;DynamoDB with OneTable Schemas&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  About SenseDeep
&lt;/h2&gt;

&lt;p&gt;SenseDeep is an observability platform for AWS developers to accelerate the delivery and maintenance of serverless applications.&lt;/p&gt;

&lt;p&gt;SenseDeep helps developers through the entire lifecycle to create observable, reliable and maintainable apps via an integrated serverless developer studio that includes deep insights into how your apps are performing.&lt;/p&gt;

&lt;p&gt;To try SenseDeep, navigate your browser to: &lt;a href="https://app.sensedeep.com/" rel="noopener noreferrer"&gt;https://app.sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To learn more about SenseDeep please see: &lt;a href="https://www.sensedeep.com/product/" rel="noopener noreferrer"&gt;https://www.sensedeep.com/product&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Please let us know what you think; we thrive on feedback: &lt;a href="mailto:dev@sensedeep.com"&gt;dev@sensedeep.com&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep Web Site&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep App&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>observability</category>
    </item>
    <item>
      <title>Dynamic Log Control for Serverless</title>
      <dc:creator>Michael O'Brien</dc:creator>
      <pubDate>Thu, 19 Aug 2021 22:48:12 +0000</pubDate>
      <link>https://dev.to/embedthis/dynamic-log-control-for-serverless-4608</link>
      <guid>https://dev.to/embedthis/dynamic-log-control-for-serverless-4608</guid>
      <description>&lt;p&gt;Serverless apps are an immensely powerful way to build highly scalable and available services.&lt;br&gt;
But, serverless apps are also uniquely difficult to monitor, debug and manage due to their distributed components, stateless nature, short execution lifespans and limited access to configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Observability" rel="noopener noreferrer"&gt;Observability&lt;/a&gt; has been popularized as the solution to this dilemma by "instrumenting your functions verbosely" and "wrapping all service boundaries with copious logging".&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;However, this can degrade critical production performance and send logging and metric costs spiraling.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So what is the solution?&lt;/p&gt;

&lt;p&gt;This post explores a simple, yet effective solution via &lt;strong&gt;Dynamic Log Control&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Observability
&lt;/h2&gt;

&lt;p&gt;Distributed serverless systems need to be easily monitored for current and future failures. To achieve this, such systems need to be "Observable".&lt;/p&gt;

&lt;p&gt;Observability aims to provide granular insights into the behavior of a system with rich context to diagnose current errors and anticipate potential failures. Observable systems integrate &lt;em&gt;telemetry&lt;/em&gt; to monitor the system via logs, metrics, events and traces.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Easy you say, just add lots of logging and metrics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, the painful tradeoff with observability is the hard cost of the telemetry versus its potential future benefit. Each line of logging costs CPU and log storage, and copious logging can lead to rude logging cost surprises.&lt;/p&gt;

&lt;p&gt;What is needed is a way to minimize the cost of telemetry until needed and to scale it up and down according to the need.&lt;/p&gt;

&lt;p&gt;Of course, you need a baseline level of logging and metrics so you can have an accurate assessment of your services. But you often need greater insight in response to certain alarms or triggers.&lt;/p&gt;
&lt;h2&gt;
  
  
  Dynamic Log Control
&lt;/h2&gt;

&lt;p&gt;Serverless apps pose challenges for the dynamic scaling up and down of logging.&lt;/p&gt;

&lt;p&gt;Serverless functions are ephemeral. They are not like an EC2 server or long lived docker applications. They often last only milliseconds before terminating. They keep little or no state and fetching state from a database may be too great an overhead to incur on every function invocation.&lt;/p&gt;

&lt;p&gt;However, there is a technique that can be adapted for dynamic log control that is well proven and understood: &lt;strong&gt;Environment Variables&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Environment Control
&lt;/h2&gt;

&lt;p&gt;When AWS Lambda loads a function, it provides a set of environment variables that can be read by the function with zero delay. Via these variables, you can provide log control instructions to vary the amount and focus of your logging.&lt;/p&gt;

&lt;p&gt;Here is the AWS console's Lambda environment configuration page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkck63kcdgecjvgw14t1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkck63kcdgecjvgw14t1e.png" alt="Console" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you change the environment variables (without redeploying or modifying your function), your next invocation will run with the new environment values.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So using special LOG environment variables, we can communicate our desired log level, filtering and sampling, and have our functions respond to these settings without modifying code or redeploying.&lt;br&gt;
Sweet!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What are the downsides? Changing environment variables will incur a cold start the next time the functions are invoked, but that is typically a short, one-off delay.&lt;/p&gt;
&lt;h2&gt;
  
  
  Log Control
&lt;/h2&gt;

&lt;p&gt;To control our logging, we propose three log environment variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LOG_FILTER&lt;/li&gt;
&lt;li&gt;LOG_OVERRIDE&lt;/li&gt;
&lt;li&gt;LOG_SAMPLE&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Depending on your log library (SenseLogs, Pino, Winston, ...) you specify your log level or channel via these variables.&lt;/p&gt;

&lt;p&gt;The LOG_FILTER defines the set of enabled log levels or channels. Enabled levels will emit output. Disabled levels will be silent.&lt;br&gt;
For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_FILTER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'error,warn'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will enable logging output in your Lambdas for &lt;code&gt;log.error()&lt;/code&gt; and &lt;code&gt;log.warn()&lt;/code&gt; but disable output for calls to &lt;code&gt;trace&lt;/code&gt; or &lt;code&gt;debug&lt;/code&gt;.&lt;/p&gt;
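&lt;p&gt;In code, honoring LOG_FILTER amounts to parsing the variable once at cold start and gating each call with a cheap set lookup. The helper below is an illustrative sketch, not the API of any particular logging library:&lt;/p&gt;

```javascript
// Parse LOG_FILTER once at module load (i.e. at cold start) into a Set,
// then gate each log call with a set lookup so disabled levels cost
// almost nothing. Names here are illustrative.
const enabled = new Set((process.env.LOG_FILTER || 'error,warn').split(','))

function emit(level, message, context = {}) {
    if (!enabled.has(level)) {
        return                          // disabled level: near-zero overhead
    }
    console.log(JSON.stringify({level, message, ...context}))
}

emit('error', 'Database request failed', {table: 'MyTable'})   // emitted
emit('debug', 'Request details')                               // silent by default
```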

&lt;p&gt;The LOG_OVERRIDE variable replaces the LOG_FILTER settings for a given period of time before reverting to them. It is prefixed with an expiry time expressed in milliseconds since the Unix epoch (Jan 1, 1970).&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_OVERRIDE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'1629806612164:debug,trace'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
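&lt;p&gt;A value like this is easy to construct programmatically. For example, a sketch that builds an override expiring one hour from now:&lt;/p&gt;

```javascript
//  Sketch: build a LOG_OVERRIDE value that expires one hour from now.
const expires = Date.now() + 3600 * 1000        // epoch milliseconds
const override = `${expires}:debug,trace`       // e.g. '1629806612164:debug,trace'
```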



&lt;p&gt;The LOG_SAMPLE variable is a sampling filter that applies to a given percentage of requests. The list of levels is prefixed with a percentage.&lt;/p&gt;

&lt;p&gt;For example, this will trace 5% of requests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;LOG_SAMPLE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'5%:trace'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
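&lt;p&gt;The sampling logic itself is simple. Here is a sketch of how a &lt;code&gt;'5%:trace'&lt;/code&gt; value could be applied; the &lt;code&gt;sampled&lt;/code&gt; helper is hypothetical:&lt;/p&gt;

```javascript
//  Sketch: apply a LOG_SAMPLE value of the form '5%:trace'.
const [rate, levels] = (process.env.LOG_SAMPLE || '5%:trace').split(':')
const sampleRate = parseInt(rate)               // parseInt('5%') -> 5

let count = 0
function sampled() {
    //  Every (100 / rate)-th invocation gets the extra levels
    return sampleRate > 0 && (count++ % (100 / sampleRate)) === 0
}

//  With a 5% rate, 100 invocations yield 5 sampled ones
let hits = 0
for (let i = 0; i < 100; i++) {
    if (sampled()) hits++
}
// hits === 5
```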



&lt;h2&gt;
  
  
  Logging Libraries
&lt;/h2&gt;

&lt;p&gt;This technique will work with Node, Python and Java Lambda functions. It is easy to configure with most popular log libraries including: &lt;a href="https://www.npmjs.com/package/bunyan" rel="noopener noreferrer"&gt;Bunyan&lt;/a&gt;, &lt;a href="https://www.npmjs.com/package/debug" rel="noopener noreferrer"&gt;Debug&lt;/a&gt;, &lt;a href="https://www.npmjs.com/package/pino" rel="noopener noreferrer"&gt;Pino&lt;/a&gt;, &lt;a href="https://www.npmjs.com/package/senselogs" rel="noopener noreferrer"&gt;SenseLogs&lt;/a&gt; or &lt;a href="https://www.npmjs.com/package/winston" rel="noopener noreferrer"&gt;Winston&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;An ideal logging library will permit custom levels so that latent logging code can be embedded in your function without incurring a run-time overhead for most invocations. When required, you can add that custom level to your LOG_FILTER or LOG_OVERRIDE and turn on that custom log output. You should be able to enable and disable custom levels without impacting other levels.&lt;/p&gt;

&lt;p&gt;It is very important that your logging library has extremely low overhead for disabled log levels. Otherwise, the benefit of dynamically scaling is lost.&lt;/p&gt;
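&lt;p&gt;One way a library can keep disabled levels cheap is to defer expensive message construction until the level is known to be enabled. A hypothetical sketch:&lt;/p&gt;

```javascript
//  Sketch: accepting a closure defers expensive serialization until
//  the level is actually enabled. Hypothetical helper, not library code.
const enabledLevels = new Set(['error', 'warn'])

let builds = 0
function debug(makeMessage) {
    if (enabledLevels.has('debug')) {
        console.log(makeMessage())      // message only built when enabled
    }
}

debug(() => { builds++; return JSON.stringify({big: 'payload'}) })
// builds === 0: the closure was never invoked
```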

&lt;h2&gt;
  
  
  Implementations
&lt;/h2&gt;

&lt;p&gt;I've included below some examples for NodeJS using the SenseLogs and Pino logging libraries.&lt;/p&gt;

&lt;p&gt;To see code samples for &lt;a href="https://www.npmjs.com/package/bunyan" rel="noopener noreferrer"&gt;Bunyan&lt;/a&gt;, &lt;a href="https://www.npmjs.com/package/debug" rel="noopener noreferrer"&gt;Debug&lt;/a&gt;, &lt;a href="https://www.npmjs.com/package/winston" rel="noopener noreferrer"&gt;Winston&lt;/a&gt; or Python, please check out our &lt;a href="https://www.sensedeep.com/blog/posts/samples/dynamic-log-control-with-log-libraries.html" rel="noopener noreferrer"&gt;Dynamic Logging Samples&lt;/a&gt;, which has detailed code samples using these techniques for each library and for Python.&lt;/p&gt;

&lt;p&gt;Also available via GitHub gists at: &lt;a href="https://gist.github.com/mobsense/479e4053e39c7f81d1d1a075e33de81e" rel="noopener noreferrer"&gt;Dynamic Log Control Gist&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SenseLogs&lt;/li&gt;
&lt;li&gt;Pino&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Controlling SenseLogs
&lt;/h2&gt;

&lt;p&gt;Here is a sample of how to use this technique with &lt;a href="https://www.npmjs.com/package/senselogs" rel="noopener noreferrer"&gt;SenseLogs&lt;/a&gt;, which has built-in support for the LOG_FILTER, LOG_OVERRIDE and LOG_SAMPLE environment variables. SenseLogs is an extremely fast serverless logger that supports custom log channels and has almost zero cost for disabled log levels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;SenseLogs&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;senselogs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;log&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SenseLogs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;//  This will be emitted if LOG_FILTER contains 'debug' as a log level&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello world&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;//  EMF metrics can also be dynamically controlled&lt;/span&gt;
    &lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;trace&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AcmeRockets/CriticalLaunchFailure&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;explosion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Learn more at &lt;a href="https://www.sensedeep.com/blog/posts/senselogs/serverless-logging.html" rel="noopener noreferrer"&gt;Serverless Logging with SenseLogs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Controlling Pino
&lt;/h2&gt;

&lt;p&gt;Here is a sample of how to use the popular &lt;a href="https://www.npmjs.com/package/pino" rel="noopener noreferrer"&gt;Pino&lt;/a&gt; general purpose logger.&lt;/p&gt;

&lt;p&gt;This sample uses a small amount of code to parse the LOG environment variables and set up the Pino logger.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Pino&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pino&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;LOG_FILTER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LOG_OVERRIDE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LOG_SAMPLE&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;LOG_OVERRIDE&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;expire&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;level&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;LOG_OVERRIDE&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;level&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;expire&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;LOG_FILTER&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;level&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;sample&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sampleRate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sampleLevels&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;LOG_SAMPLE&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;sampleRate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;sampleLevel&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;LOG_SAMPLE&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pino&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Pino&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pino&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LOG_FILTER&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sampleRate&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sample&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;sampleRate&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;//  Apply the sample levels&lt;/span&gt;
        &lt;span class="nx"&gt;pino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;sampleLevel&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;pino&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Debug message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing Log Configuration
&lt;/h2&gt;

&lt;p&gt;When you have an important issue to diagnose, you can now scale up your logging by setting the LOG_OVERRIDE environment variable to increase the log level for a period of time. Depending on your log library, you may also be able to focus your logging on a specific module by using custom levels.&lt;/p&gt;

&lt;p&gt;You can modify your environment configuration via API, the AWS Console, the AWS SDK or using the &lt;a href="https://www.sensedeep.com/" rel="noopener noreferrer"&gt;SenseDeep&lt;/a&gt; Developer Studio.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CLI
&lt;/h3&gt;

&lt;p&gt;The AWS CLI has support to update the function configuration using &lt;a href="https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-configuration.html" rel="noopener noreferrer"&gt;update-function-configuration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;EXPIRES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;$((&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%s&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;3600&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="m"&gt;1000&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;
aws lambda update-function-configuration &lt;span class="nt"&gt;--function-name&lt;/span&gt; MyFunction &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--environment&lt;/span&gt; &lt;span class="s2"&gt;"Variables={LOG_OVERRIDE=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;EXPIRES&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:error,info,debug,trace,commerce}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This calculates an expiry time and overrides the log levels for one hour to include the error, info, debug and trace levels and the custom &lt;code&gt;commerce&lt;/code&gt; level.&lt;/p&gt;
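&lt;p&gt;One caution: &lt;code&gt;update-function-configuration&lt;/code&gt; replaces the entire set of environment variables. If your function has other variables, fetch and merge them first. Here is a sketch using &lt;code&gt;jq&lt;/code&gt; (assumes the AWS CLI and jq are installed; MyFunction is a placeholder):&lt;/p&gt;

```shell
# Merge LOG_OVERRIDE into the existing variables rather than replacing them
EXPIRES=$((($(date +%s) + 3600) * 1000))
VARS=$(aws lambda get-function-configuration --function-name MyFunction \
    --query 'Environment.Variables' --output json)
MERGED=$(echo "$VARS" | jq -c --arg o "${EXPIRES}:debug,trace" '. + {LOG_OVERRIDE: $o}')
aws lambda update-function-configuration --function-name MyFunction \
    --environment "{\"Variables\": $MERGED}"
```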

&lt;h3&gt;
  
  
  AWS Console
&lt;/h3&gt;

&lt;p&gt;You can also change these environment values via the AWS console.&lt;/p&gt;

&lt;p&gt;Simply navigate to your function, select &lt;code&gt;Configuration&lt;/code&gt;, then &lt;code&gt;Environment Variables&lt;/code&gt;, then &lt;code&gt;Edit&lt;/code&gt;, modify the values and save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkck63kcdgecjvgw14t1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkck63kcdgecjvgw14t1e.png" alt="Console" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  SenseDeep Serverless Studio
&lt;/h3&gt;

&lt;p&gt;Better still, the &lt;a href="https://www.sensedeep.com" rel="noopener noreferrer"&gt;SenseDeep Serverless Developer Studio&lt;/a&gt; provides an integrated, high-level way to manage these filter settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwgmo6vfykvhmeh2xopn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwgmo6vfykvhmeh2xopn.png" alt="Console" width="800" height="755"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Using this technique, you can easily and quickly scale up and down and focus your logging as your needs vary. You can keep a good base level of logging and metrics without breaking the bank, and then increase logging on-demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  More?
&lt;/h2&gt;

&lt;p&gt;The logging library code samples are available at &lt;a href="https://www.sensedeep.com/blog/posts/samples/dynamic-log-control-with-log-libraries.html" rel="noopener noreferrer"&gt;Log Control Samples&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;SenseLogs is available from &lt;a href="https://github.com/sensedeep/senselogs" rel="noopener noreferrer"&gt;GitHub SenseLogs&lt;/a&gt; or &lt;a href="https://www.npmjs.com/package/senselogs" rel="noopener noreferrer"&gt;NPM SenseLogs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And you can get a free developer license for SenseDeep at &lt;a href="https://app.sensedeep.com" rel="noopener noreferrer"&gt;SenseDeep App&lt;/a&gt; or learn more at &lt;a href="https://www.sensedeep.com" rel="noopener noreferrer"&gt;https://www.sensedeep.com&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>serverless</category>
      <category>aws</category>
      <category>cloud</category>
      <category>lambda</category>
    </item>
  </channel>
</rss>
