<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Brennon Loveless</title>
    <description>The latest articles on DEV Community by Brennon Loveless (@bloveless).</description>
    <link>https://dev.to/bloveless</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F395336%2F9608215b-d9b9-41ab-b3d6-963a0b49e308.jpeg</url>
      <title>DEV Community: Brennon Loveless</title>
      <link>https://dev.to/bloveless</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bloveless"/>
    <language>en</language>
    <item>
      <title>Test-Driven Development With The oclif Testing Library: Part Two</title>
      <dc:creator>Brennon Loveless</dc:creator>
      <pubDate>Thu, 11 Nov 2021 05:34:51 +0000</pubDate>
      <link>https://dev.to/bloveless/test-driven-development-with-the-oclif-testing-library-part-two-3aab</link>
      <guid>https://dev.to/bloveless/test-driven-development-with-the-oclif-testing-library-part-two-3aab</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/salesforcedevs/test-driven-development-with-the-oclif-testing-library-part-one-25h9"&gt;Part One&lt;/a&gt; of this series on the oclif testing library, we used a test-driven development approach to building our &lt;code&gt;time-tracker&lt;/code&gt; CLI. We talked about the &lt;a href="https://oclif.io/"&gt;oclif framework&lt;/a&gt;, which helps developers dispense with the setup and boilerplate so that they can get to writing the meat of their CLI applications. We also talked about &lt;a href="https://github.com/oclif/test"&gt;@oclif/test&lt;/a&gt; and &lt;a href="https://github.com/oclif/fancy-test"&gt;@oclif/fancy-test&lt;/a&gt;, which take care of the repetitive setup and teardown so that developers can focus on writing their Mocha tests.&lt;/p&gt;

&lt;p&gt;Our &lt;code&gt;time-tracker&lt;/code&gt; application is a &lt;a href="https://oclif.io/docs/multi"&gt;multi-command CLI&lt;/a&gt;. We’ve already written tests and implemented our first command for adding a new project to our tracker. Next, we’re going to write tests and implement our “start timer” command.&lt;/p&gt;

&lt;p&gt;Just as a reminder, the final application is posted on &lt;a href="https://github.com/bloveless/oclif-time-tracker"&gt;GitHub&lt;/a&gt; as a reference in case you hit a roadblock.&lt;/p&gt;

&lt;h2&gt;First Test for the Start Timer Command&lt;/h2&gt;

&lt;p&gt;Now that we can add a new project to our time tracker, we need to be able to start the timer for that project. The command usage would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time-tracker start-timer project-one
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we’re taking a TDD approach, we’ll start by writing the test. For our happy path test, "project-one" already exists, and we can simply start the timer for it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// PATH: test/commands/start-timer.test.js

const {expect, test} = require('@oclif/test')
const StartTimerCommand = require('../../src/commands/start-timer')
const MemoryStorage = require('../../src/storage/memory')
const {generateDb} = require('../test-helpers')

const someDate = 1631943984467

describe('start timer', () =&amp;gt; {
  test
  .stdout()
  .stub(StartTimerCommand, 'storage', new MemoryStorage(generateDb('project-one')))
  .stub(Date, 'now', () =&amp;gt; someDate)
  .command(['start-timer', 'project-one'])
  .it('should start a timer for "project-one"', async ctx =&amp;gt; {
    expect(await StartTimerCommand.storage.load()).to.eql({
      activeProject: 'project-one',
      projects: {
        'project-one': {
          activeEntry: 0,
          entries: [
            {
              startTime: new Date(someDate),
              endTime: null,
            },
          ],
        },
      },
    })
    expect(ctx.stdout).to.contain('Started a new time entry on "project-one"')
  })
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a lot of similarity between this test and the first test of our “add project” command. One difference, however, is the additional &lt;code&gt;stub()&lt;/code&gt; call. Since we will start the timer with &lt;code&gt;new Date(Date.now())&lt;/code&gt;, our test code stubs out &lt;code&gt;Date.now()&lt;/code&gt; to return &lt;code&gt;someDate&lt;/code&gt;. We don’t care what the value of &lt;code&gt;someDate&lt;/code&gt; is; what matters is that it is fixed, so our assertions are deterministic.&lt;/p&gt;

&lt;p&gt;When we run our test, we get the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Cannot find module '../../src/commands/start-timer'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s time to write some implementation code!&lt;/p&gt;

&lt;h2&gt;Beginning to Implement the Start Timer Command&lt;/h2&gt;

&lt;p&gt;We need to create a file for our &lt;code&gt;start-timer&lt;/code&gt; command. We duplicate the &lt;code&gt;add-project.js&lt;/code&gt; file and rename it as &lt;code&gt;start-timer.js&lt;/code&gt;. We clear out most of the &lt;code&gt;run&lt;/code&gt; method, and we rename the command class to &lt;code&gt;StartTimerCommand&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {Command, flags} = require('@oclif/command')
const FilesystemStorage = require('../storage/filesystem')

class StartTimerCommand extends Command {
  async run() {
    const {args} = this.parse(StartTimerCommand)
    const db = await StartTimerCommand.storage.load()

    await StartTimerCommand.storage.save(db)
  }
}

StartTimerCommand.storage = new FilesystemStorage()

// Declare the positional argument so parse() exposes args.projectName
StartTimerCommand.args = [{name: 'projectName'}]

StartTimerCommand.description = `Start a new timer for a project`

StartTimerCommand.flags = {
  name: flags.string({char: 'n', description: 'name to print'}),
}

module.exports = StartTimerCommand
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when we run the test again, we see that the &lt;code&gt;db&lt;/code&gt; has not been updated as we had expected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1) start timer
       should start a timer for "project-one":

      AssertionError: expected { Object (activeProject, projects) } to deeply equal { Object (activeProject, projects) }
      + expected - actual

       {
      -  "activeProject": [null]
      +  "activeProject": "project-one"
         "projects": {
           "project-one": {
      -      "activeEntry": [null]
      -      "entries": []
      +      "activeEntry": 0
      +      "entries": [
      +        {
      +          "endTime": [null]
      +          "startTime": [Date: 2021-09-18T05:46:24.467Z]
      +        }
      +      ]
           }
         }
       }

      at Context.&amp;lt;anonymous&amp;gt; (test/commands/start-timer.test.js:16:55)
      at async Object.run (node_modules/fancy-test/lib/base.js:44:29)
      at async Context.run (node_modules/fancy-test/lib/base.js:68:25)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While we’re at it, we also know that we should be logging something to tell the user what just happened. So let's update the run method with code to do that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {args} = this.parse(StartTimerCommand)
const db = await StartTimerCommand.storage.load()

if (db.projects &amp;amp;&amp;amp; db.projects[args.projectName]) {
    db.activeProject = args.projectName
    // Set the active entry before we push so we can take advantage of the fact
    // that the current length is the index of the next insert
    db.projects[args.projectName].activeEntry = db.projects[args.projectName].entries.length
    db.projects[args.projectName].entries.push({startTime: new Date(Date.now()), endTime: null})
}

this.log(`Started a new time entry on "${args.projectName}"`)

await StartTimerCommand.storage.save(db)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the test again, we see that our tests are all passing!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;add project
    ✓ should add a new project
    ✓ should return an error if the project already exists (59ms)

start timer
    ✓ should start a timer for "project-one"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Sad Path: Starting a Timer on a Non-Existent Project&lt;/h2&gt;

&lt;p&gt;Next, we should notify the user if they attempt to start a timer on a project that doesn't exist. Let's start by writing a test for this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test
  .stdout()
  .stub(StartTimerCommand, 'storage', new MemoryStorage(generateDb('project-one')))
  .stub(Date, 'now', () =&amp;gt; someDate)
  .command(['start-timer', 'project-does-not-exist'])
  .catch('Project "project-does-not-exist" does not exist')
  .it('should return an error if the user attempts to start a timer on a project that doesn\'t exist', async _ =&amp;gt; {
    // Expect that the storage is unchanged
    expect(await StartTimerCommand.storage.load()).to.eql({
      activeProject: null,
      projects: {
        'project-one': {
          activeEntry: null,
          entries: [],
        },
      },
    })
  })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And, we are failing again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1 failing

  1) start timer
       should return an error if the user attempts to start a timer on a project that doesn't exist:
     Error: expected error to be thrown
      at Object.run (node_modules/fancy-test/lib/catch.js:8:19)
      at Context.run (node_modules/fancy-test/lib/base.js:68:36)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's write some code to fix that error. We add the following snippet of code to the beginning of the &lt;code&gt;run&lt;/code&gt; method, right after we load the &lt;code&gt;db&lt;/code&gt; from storage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (!db.projects?.[args.projectName]) {
    this.error(`Project "${args.projectName}" does not exist`)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We run the tests again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;add project
    ✓ should add a new project (47ms)
    ✓ should return an error if the project already exists (75ms)

start timer
    ✓ should start a timer for "project-one"
    ✓ should return an error if the user attempts to start a timer on a project that doesn't exist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nailed it! Of course, there is one more thing this command should do. Let's imagine that we've already started a timer on &lt;code&gt;project-one&lt;/code&gt; and we want to quickly switch to &lt;code&gt;project-two&lt;/code&gt;. We'd expect the running timer on &lt;code&gt;project-one&lt;/code&gt; to stop and a new timer on &lt;code&gt;project-two&lt;/code&gt; to begin.&lt;/p&gt;

&lt;h2&gt;Stop One Timer, Start Another&lt;/h2&gt;

&lt;p&gt;We repeat our TDD red-green cycle by first writing a test to represent the missing functionality.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test
  .stdout()
  .stub(StartTimerCommand, 'storage', new MemoryStorage({
    activeProject: 'project-one',
    projects: {
      'project-one': {
        activeEntry: 0,
        entries: [
          {
            startTime: new Date(someStartDate),
            endTime: null,
          },
        ],
      },
      'project-two': {
        activeEntry: null,
        entries: [],
      },
    },
  }))
  .stub(Date, 'now', () =&amp;gt; someDate)
  .command(['start-timer', 'project-two'])
  .it('should end the running timer from another project before starting a timer on the requested one', async ctx =&amp;gt; {
    // Expect that the timer on project-one was ended and a new one started on project-two
    expect(await StartTimerCommand.storage.load()).to.eql({
      activeProject: 'project-two',
      projects: {
        'project-one': {
          activeEntry: null,
          entries: [
            {
              startTime: new Date(someStartDate),
              endTime: new Date(someDate),
            },
          ],
        },
        'project-two': {
          activeEntry: 0,
          entries: [
            {
              startTime: new Date(someDate),
              endTime: null,
            },
          ],
        },
      },
    })

    expect(ctx.stdout).to.contain('Started a new time entry on "project-two"')
  })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This test requires another timestamp, which we call &lt;code&gt;someStartDate&lt;/code&gt;. We add that near the top of our &lt;code&gt;start-timer.test.js&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
const someStartDate = 1631936940178
const someDate = 1631943984467
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This test is longer than the other tests, but that’s because we needed a very specific &lt;code&gt;db&lt;/code&gt; initialized within MemoryStorage to represent this test case. You can see that, initially, we have an entry with a &lt;code&gt;startTime&lt;/code&gt; and no &lt;code&gt;endTime&lt;/code&gt; in &lt;code&gt;project-one&lt;/code&gt;. In the assertion, you'll notice that the &lt;code&gt;endTime&lt;/code&gt; in &lt;code&gt;project-one&lt;/code&gt; is populated, and there is a new active entry in &lt;code&gt;project-two&lt;/code&gt; with a &lt;code&gt;startTime&lt;/code&gt; and no &lt;code&gt;endTime&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When we run our test suite, we see the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1) start timer
       should end the running timer from another project before starting a timer on the requested one:

      AssertionError: expected { Object (activeProject, projects) } to deeply equal { Object (activeProject, projects) }
      + expected - actual

       {
         "activeProject": "project-two"
         "projects": {
           "project-one": {
      -      "activeEntry": 0
      +      "activeEntry": [null]
             "entries": [
               {
      -          "endTime": [null]
      +          "endTime": [Date: 2021-09-18T05:46:24.467Z]
                 "startTime": [Date: 2021-09-18T03:49:00.178Z]
               }
             ]
           }

      at Context.&amp;lt;anonymous&amp;gt; (test/commands/start-timer.test.js:76:55)
      at async Object.run (node_modules/fancy-test/lib/base.js:44:29)
      at async Context.run (node_modules/fancy-test/lib/base.js:68:25)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This error tells us that our CLI correctly created a new entry in &lt;code&gt;project-two&lt;/code&gt;, but it didn't first end the timer on &lt;code&gt;project-one&lt;/code&gt;. Our application also didn't change the &lt;code&gt;activeEntry&lt;/code&gt; from &lt;code&gt;0&lt;/code&gt; to &lt;code&gt;null&lt;/code&gt; in &lt;code&gt;project-one&lt;/code&gt; as we expected.&lt;/p&gt;

&lt;p&gt;Let's fix up the code to solve this issue. Right after we check that the requested project exists, we add the following block, which ends a running timer on another project and unsets that project's &lt;code&gt;activeEntry&lt;/code&gt;, all before we create a new timer on the requested project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Check to see if there is a timer running on another project and end it
if (db.activeProject &amp;amp;&amp;amp; db.activeProject !== args.projectName) {
    db.projects[db.activeProject].entries[db.projects[db.activeProject].activeEntry].endTime = new Date(Date.now())
    db.projects[db.activeProject].activeEntry = null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And there we have it! All our tests are passing once again!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;add project
    ✓ should add a new project (47ms)
    ✓ should return an error if the project already exists (72ms)

  start timer
    ✓ should start a timer for "project-one"
    ✓ should return an error if the user attempts to start a timer on a project that doesn't exist
    ✓ should end the running timer from another project before starting a timer on the requested one
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;If you’ve been following along with our CLI development over Part One and Part Two of this oclif testing series, you’ll see that we’ve covered the &lt;code&gt;add-project&lt;/code&gt; and &lt;code&gt;start-timer&lt;/code&gt; commands. We’ve demonstrated how easy it is to use TDD to build these commands with &lt;code&gt;oclif&lt;/code&gt; and &lt;code&gt;@oclif/test&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Because the &lt;code&gt;end-timer&lt;/code&gt; and &lt;code&gt;list-projects&lt;/code&gt; commands are so similar to what we’ve already walked through, we’ll leave their development using TDD as an exercise for the reader. The &lt;a href="https://github.com/bloveless/oclif-time-tracker"&gt;project repository&lt;/a&gt; has those commands implemented as well as the tests used to validate the implementation.&lt;/p&gt;
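&lt;p&gt;As a starting point for that exercise, here is a rough sketch of the logic an &lt;code&gt;end-timer&lt;/code&gt; command might run. It reuses the &lt;code&gt;db&lt;/code&gt; shape from the examples above, but it is only an illustration; the &lt;code&gt;endTimer&lt;/code&gt; helper is hypothetical, and the repository's actual implementation may differ:&lt;/p&gt;

```javascript
// Hypothetical sketch of the logic an end-timer command might run.
// The db shape mirrors the start-timer examples in this article;
// the real implementation lives in the project repository.
async function endTimer(db, log) {
  if (db.activeProject === null) {
    throw new Error('No timer is currently running')
  }

  const project = db.projects[db.activeProject]

  // End the active entry and clear the active markers
  project.entries[project.activeEntry].endTime = new Date(Date.now())
  project.activeEntry = null

  log(`Ended the time entry on "${db.activeProject}"`)
  db.activeProject = null

  return db
}
```

A TDD pass over this command would follow the same red-green cycle shown above: assert on the saved &lt;code&gt;db&lt;/code&gt; with a stubbed &lt;code&gt;Date.now&lt;/code&gt;, and add a sad-path test for ending a timer when none is running.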

&lt;p&gt;In summary, we laid out plans for using TDD to build a CLI application using the oclif framework. We spent some time getting to know the &lt;code&gt;@oclif/test&lt;/code&gt; package and some of the helpers provided by that library. Specifically, we talked about: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using the &lt;code&gt;command&lt;/code&gt; method for calling our command and passing it arguments&lt;/li&gt;
&lt;li&gt;Methods provided by &lt;code&gt;@oclif/fancy-test&lt;/code&gt; for stubbing parts of our application, catching errors, mocking stdout and stderr, and asserting on those results&lt;/li&gt;
&lt;li&gt;Using TDD to build out a large portion of a CLI using a red-green cycle by writing tests first and then writing the minimal amount of code to get our tests to pass&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just like that… you've got another tool in your dev belt—this time, for writing and testing your own CLIs!&lt;/p&gt;

</description>
      <category>oclif</category>
      <category>javascript</category>
      <category>tdd</category>
      <category>cli</category>
    </item>
    <item>
      <title>Running Concurrent Requests with async/await and Promise.all</title>
      <dc:creator>Brennon Loveless</dc:creator>
      <pubDate>Tue, 18 May 2021 02:55:31 +0000</pubDate>
      <link>https://dev.to/bloveless/running-concurrent-requests-with-async-await-and-promise-all-4gb1</link>
      <guid>https://dev.to/bloveless/running-concurrent-requests-with-async-await-and-promise-all-4gb1</guid>
      <description>&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;In this article I’d like to touch on async, await, and Promise.all in JavaScript. First, I’ll talk about concurrency vs. parallelism and why we will be targeting parallelism in this article. Then, I’ll talk about how to use async and await to implement the algorithm serially, and how to make it run in parallel by using Promise.all. Finally, I’ll create an example project using Salesforce’s Lightning Web Components, where I will build an art gallery using the Harvard Art Museums API.&lt;/p&gt;

&lt;h2&gt;Concurrency vs. Parallelism&lt;/h2&gt;

&lt;p&gt;I want to quickly touch on the difference between concurrency and parallelism. You can relate concurrency to how a single-threaded CPU processes multiple tasks. Single-threaded CPUs emulate parallelism by switching between processes quickly enough that it seems like multiple things are happening at the same time. Parallelism is when a CPU has multiple cores and can actually run two tasks at the exact same time. Another great example is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Concurrency is two lines of customers ordering from a single cashier (lines take turns ordering); Parallelism is two lines of customers ordering from two cashiers (each line gets its own cashier). [1]&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Knowing this difference helps us consider what options we have from an algorithmic standpoint. Our goal is to make these HTTP requests in parallel. Due to some limitations in JavaScript implementations and browser variability, we don’t actually get to determine whether our algorithm will run concurrently or in parallel. Luckily, we don’t need to change our algorithm at all. The underlying JavaScript event loop will make it seem like the code is running in parallel, which is good enough for this article!&lt;/p&gt;

&lt;h2&gt;Async/Await in Serial&lt;/h2&gt;

&lt;p&gt;In order to understand this &lt;em&gt;parallel&lt;/em&gt; algorithm, I’ll first use async and await to build a &lt;em&gt;serial&lt;/em&gt; algorithm. If you write this code in an IDE, you’ll likely get a notification saying that using await in a loop is a missed optimization opportunity — and your IDE would be correct.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(async () =&amp;gt; {
    const urls = [
        "https://example.com/posts/1/",
        "https://example.com/posts/1/tags/",
    ];

    const data = [];
  for (url of urls) {
    await fetch(url)
      .then((response) =&amp;gt; response.json())
      .then((jsonResponse) =&amp;gt; data.push(jsonResponse));
  }

  console.log(data);
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One reason that you might implement an algorithm like this is if you need to get the data from two different URLs, then blend that data together to create your final object. In the code above, you can imagine that we are gathering some data about a post, then grabbing the data about the post's tags, and finally merging that data into the object you’d actually use later on.&lt;/p&gt;
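&lt;p&gt;As a concrete illustration of that blending step, here is a small sketch. The post and tags shapes are made up for this example; a real API would return richer objects:&lt;/p&gt;

```javascript
// Hypothetical illustration of the blending step described above.
// The post and tags shapes are made up; a real API would return richer objects.
function blendPostWithTags(data) {
  const post = data[0] // response from the post URL
  const tags = data[1] // response from the tags URL
  return {...post, tags}
}

const blended = blendPostWithTags([{id: 1, title: 'Hello'}, ['js', 'async']])
// blended now carries the post fields plus a tags array
```

The blending itself is cheap; it's the two network round trips feeding it that dominate the running time, which is what the next section addresses.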

&lt;p&gt;While this code will work, you might notice that we &lt;code&gt;await&lt;/code&gt; on each fetch. You'll see something like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Start to fetch post one&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for fetch post one to complete&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get post one response&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start to fetch post one tags&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for post one tags to complete&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Get post one tags response&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem is we’re waiting serially for each network request to complete before starting the next request. There’s no need for this: Computers are perfectly capable of executing more than one network request at the same time.&lt;/p&gt;

&lt;p&gt;So how can we make this algorithm better?&lt;/p&gt;

&lt;h2&gt;Async/Await in Parallel&lt;/h2&gt;

&lt;p&gt;The easiest way to make this algorithm faster is to remove the &lt;code&gt;await&lt;/code&gt; keyword before the &lt;code&gt;fetch&lt;/code&gt; call. This tells JavaScript to start all of the requests without waiting on each one in turn. But in order to pause execution until all of the promises have resolved, we still need to await on something. We'll use &lt;code&gt;Promise.all&lt;/code&gt; to do just that.&lt;/p&gt;

&lt;p&gt;When we use &lt;code&gt;await Promise.all&lt;/code&gt;, JavaScript will wait for the entire array of promises passed to &lt;code&gt;Promise.all&lt;/code&gt; to resolve. Only then will it return all the results at the same time. A rewrite looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(async () =&amp;gt; {
    const urls = [
        "https://example.com/posts/1/",
        "https://example.com/posts/1/tags/",
    ];

    const promises = urls.map((url) =&amp;gt;
        fetch(url).then((response) =&amp;gt; response.json())
    );

    const data = await Promise.all(promises);

    console.log(data);
})();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code maps each URL into a &lt;code&gt;promise&lt;/code&gt; and then &lt;code&gt;await&lt;/code&gt;s for all of those promises to complete. Once execution moves past the &lt;code&gt;await Promise.all&lt;/code&gt; line, we can be sure that both fetch requests have resolved and that the responses are in the data array in the same order as the URLs. So &lt;code&gt;data[0]&lt;/code&gt; will be our post data and &lt;code&gt;data[1]&lt;/code&gt; will be our tags data.&lt;/p&gt;
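&lt;p&gt;Because the ordering is guaranteed, the results can also be destructured directly. Here is a small self-contained sketch that simulates the two fetches with already-resolved promises, so it runs without a network:&lt;/p&gt;

```javascript
// Simulates the two fetches with already-resolved promises so the snippet
// runs without a network; Promise.all preserves the input order.
async function loadPostAndTags() {
  const promises = [
    Promise.resolve({id: 1, title: 'Hello'}), // stands in for the post fetch
    Promise.resolve(['js', 'async']),         // stands in for the tags fetch
  ]

  // data[0] is the post and data[1] is the tags, so destructure directly
  const [post, tags] = await Promise.all(promises)
  return {post, tags}
}

loadPostAndTags().then(({post, tags}) => {
  console.log(post.title)  // Hello
  console.log(tags.length) // 2
})
```

One caveat worth knowing: &lt;code&gt;Promise.all&lt;/code&gt; rejects as soon as any one promise rejects, so error handling for the whole batch happens in a single catch.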

&lt;h2&gt;An Example&lt;/h2&gt;

&lt;p&gt;Now that we have all the necessary building blocks to implement our pre-fetched image gallery, let’s build it.&lt;/p&gt;

&lt;p&gt;Below is a screenshot of the app I built for this article, and here is a link to the &lt;a href="https://github.com/harvardartmuseums/api-docs" rel="noopener noreferrer"&gt;Harvard Art Museums API documentation&lt;/a&gt; [2]. You’ll need to apply for your own API key if you’d like to follow along. The process is quick: you fill out a Google Form and receive your API key in your email almost instantly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3752%2F0%2AGy4It1GP29PVgWjQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F3752%2F0%2AGy4It1GP29PVgWjQ.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It doesn’t look like much, but as you navigate through the gallery, it pre-fetches the next pages of data automatically. That way, the user viewing the gallery shouldn’t see any loading time for the actual data. The images are only loaded when they are displayed on the page. And while those do load after the fact, the actual data for the page is loaded instantly since it is cached in the component. Finally, as a challenge to myself, I’m using Salesforce’s Lightning Web Components for this project — a completely new technology to me. Let’s get into building the component.&lt;/p&gt;

&lt;p&gt;Here are some of the resources that I used while learning about Lightning Web Components. If you’d like to follow along, then you’ll at least need to set up your local dev environment and create a “hello world” Lightning Web Component.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trailhead.salesforce.com/content/learn/projects/quick-start-lightning-web-components/set-up-salesforce-dx" rel="noopener noreferrer"&gt;Setup A Local Development Environment&lt;/a&gt; [3]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trailhead.salesforce.com/content/learn/projects/quick-start-lightning-web-components/create-a-hello-world-lightning-web-component" rel="noopener noreferrer"&gt;Create a Hello World Lightning Web Component&lt;/a&gt; [4]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://trailhead.salesforce.com/sample-gallery" rel="noopener noreferrer"&gt;LWC Sample Gallery&lt;/a&gt; [5]&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.salesforce.com/docs/component-library/overview/components" rel="noopener noreferrer"&gt;LWC Component Reference&lt;/a&gt; [6]&lt;/p&gt;

&lt;p&gt;Alright, now that your environment is set up and you’ve created your first LWC, let’s get started. By the way, all the code for this article can be found at &lt;a href="https://github.com/bloveless/AsyncAwaitPromiseAllLWC" rel="noopener noreferrer"&gt;my GitHub repo&lt;/a&gt; [7].&lt;/p&gt;

&lt;p&gt;A quick aside: Lightning Web Components are a little more limited than the components you might be used to if you are coming from a React background. For example, you can’t use JavaScript expressions in component properties (e.g., the image src in the following example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;template for:each={records} for:item="record"&amp;gt;
    &amp;lt;img src={record.images[0].baseimageurl}&amp;gt;
&amp;lt;/template&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The reason is that forcing all of your logic into the JavaScript files, rather than the HTML templates, makes your code much easier to test. So let’s chalk this up to “it’s better for testing” and move on with our lives.&lt;/p&gt;

&lt;p&gt;In order to create this gallery, we’ll need to build two components. The first component is for displaying each gallery image, and the second component is for pre-fetching and pagination.&lt;/p&gt;

&lt;p&gt;The first component is the simpler of the two. In VSCode, execute the command &lt;code&gt;SFDX: Create Lightning Web Component&lt;/code&gt; and name the component &lt;code&gt;harvardArtMuseumGalleryItem&lt;/code&gt;. This will create three files for us: an HTML, JavaScript, and XML file. This component will not need any changes to the XML file since the item itself isn't visible in any Salesforce admin pages.&lt;/p&gt;

&lt;p&gt;Next, change the contents of the HTML file to the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# force-app/main/default/lwc/harvardArtMuseumGalleryItem/harvardArtMuseumGalleryItem.html

&amp;lt;template&amp;gt;
    &amp;lt;div class="gallery-item" style={backgroundStyle}&amp;gt;&amp;lt;/div&amp;gt;
    {title}
&amp;lt;/template&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that in this HTML file, the style property is set to &lt;code&gt;{backgroundStyle}&lt;/code&gt;, which is a getter in our JavaScript file, so let's work on that one.&lt;/p&gt;

&lt;p&gt;Change the contents of the JS file to the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# force-app/main/default/lwc/harvardArtMuseumGalleryItem/harvardArtMuseumGalleryItem.js

import { LightningElement, api } from 'lwc';

export default class HarvardArtMuseumGalleryItem extends LightningElement {
    @api
    record;

    get image() {
        if (this.record.images &amp;amp;&amp;amp; this.record.images.length &amp;gt; 0) {
            return this.record.images[0].baseimageurl;
        }

        return "";
    }

    get title() {
        return this.record.title;
    }

    get backgroundStyle() {
        return `background-image:url('${this.image}');`
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There are a few things to notice here. First, the record property is decorated with &lt;code&gt;@api&lt;/code&gt;, which allows us to assign to this property from other components. Keep an eye out for this record property on the main gallery component. Also, since we can't have JavaScript expressions in our HTML files, I've moved the background-image inline CSS into the JavaScript file, which lets me use string interpolation with the image URL. The &lt;code&gt;image&lt;/code&gt; getter is nothing special; it's just an easy way to get the first image URL from the record we received from the Harvard Art Museums API.&lt;/p&gt;

&lt;p&gt;Our final step of this component is to add a CSS file that wasn’t created for us automatically. So create &lt;code&gt;harvardArtMuseumGalleryItem.css&lt;/code&gt; in the harvardArtMuseumGalleryItem directory. You don't need to tell the application to use this file as it is included automatically just by its existence.&lt;/p&gt;

&lt;p&gt;Change the contents of your newly created CSS file to the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# force-app/main/default/lwc/harvardArtMuseumGalleryItem/harvardArtMuseumGalleryItem.css

.gallery-item {
    height: 150px;
    width: 100%;
    background-size: cover;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that our busy work is out of the way, we can get to the actual gallery.&lt;/p&gt;

&lt;p&gt;Run &lt;code&gt;SFDX: Create Lightning Web Component&lt;/code&gt; in VSCode again and name the component &lt;code&gt;harvardArtMuseumGallery&lt;/code&gt;. This will, once again, generate our HTML, JavaScript, and XML files. We need to pay close attention to the XML file this time. The XML file is what tells Salesforce where our component is allowed to be located as well as how we will store our API key in the component.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# force-app/main/default/lwc/harvardArtMuseumGallery/harvardArtMuseumGallery.js-meta.xml

&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;LightningComponentBundle xmlns="http://soap.sforce.com/2006/04/metadata"&amp;gt;
    &amp;lt;apiVersion&amp;gt;51.0&amp;lt;/apiVersion&amp;gt;
    &amp;lt;isExposed&amp;gt;true&amp;lt;/isExposed&amp;gt;
    &amp;lt;targets&amp;gt;
        &amp;lt;target&amp;gt;lightning__HomePage&amp;lt;/target&amp;gt;
    &amp;lt;/targets&amp;gt;
    &amp;lt;targetConfigs&amp;gt;
        &amp;lt;targetConfig targets="lightning__HomePage"&amp;gt;
            &amp;lt;property name="harvardApiKey" type="String" default=""&amp;gt;&amp;lt;/property&amp;gt;
        &amp;lt;/targetConfig&amp;gt;
    &amp;lt;/targetConfigs&amp;gt;
&amp;lt;/LightningComponentBundle&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There are three key things to pay attention to in this XML file. The first is &lt;code&gt;isExposed&lt;/code&gt;, which allows our component to be found in the Salesforce admin. The second is the &lt;code&gt;target&lt;/code&gt;, which specifies the areas of the Salesforce site where our component can be used; this one says we are allowing our component to be displayed on HomePage-type pages. Finally, the &lt;code&gt;targetConfigs&lt;/code&gt; section defines a &lt;code&gt;harvardApiKey&lt;/code&gt; property, which renders as a text box when adding the component. There, we can paste our API key (as seen in the following screenshot). You can find more information about this XML file &lt;a href="https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_lightningcomponentbundle.htm" rel="noopener noreferrer"&gt;here&lt;/a&gt; [8].&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4572%2F0%2A2a1gPSa3HtiMcm_0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn-images-1.medium.com%2Fmax%2F4572%2F0%2A2a1gPSa3HtiMcm_0.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let’s take care of the HTML and CSS files.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# force-app/main/default/lwc/harvardArtMuseumGallery/harvardArtMuseumGallery.html

&amp;lt;template&amp;gt;
    &amp;lt;lightning-card title="HelloWorld" icon-name="custom:custom14"&amp;gt;
        &amp;lt;div class="slds-m-around_medium"&amp;gt;
          &amp;lt;h1&amp;gt;Harvard Gallery&amp;lt;/h1&amp;gt;
          &amp;lt;div class="gallery-container"&amp;gt;
            &amp;lt;template for:each={records} for:item="record"&amp;gt;
              &amp;lt;div key={record.index} class="row"&amp;gt;
                &amp;lt;template for:each={record.value} for:item="item"&amp;gt;
                  &amp;lt;c-harvard-art-museum-gallery-item if:true={item} key={item.id} record={item}&amp;gt;&amp;lt;/c-harvard-art-museum-gallery-item&amp;gt;
                &amp;lt;/template&amp;gt;
              &amp;lt;/div&amp;gt;
            &amp;lt;/template&amp;gt;
          &amp;lt;/div&amp;gt;
          &amp;lt;div class="pagination-container"&amp;gt;
            &amp;lt;button type="button" onclick={previousPage}&amp;gt;&amp;amp;lt;&amp;lt;/button&amp;gt;
            &amp;lt;span class="current-page"&amp;gt;
              {currentPage}
            &amp;lt;/span&amp;gt;
            &amp;lt;button type="button" onclick={nextPage}&amp;gt;&amp;amp;gt;&amp;lt;/button&amp;gt;
          &amp;lt;/div&amp;gt;
        &amp;lt;/div&amp;gt;
      &amp;lt;/lightning-card&amp;gt;
&amp;lt;/template&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Most of this is standard HTML with some custom components. The line I want you to pay attention to most is the &lt;code&gt;&amp;lt;c-harvard-art-museum-gallery-item&amp;gt;&lt;/code&gt; tag and its record property. You’ll remember that this is the property we decorated with &lt;code&gt;@api&lt;/code&gt; in the gallery item JavaScript file. The &lt;code&gt;@api&lt;/code&gt; decorator allows us to pass in the record through this property.&lt;/p&gt;

&lt;p&gt;Next, onto the CSS file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# force-app/main/default/lwc/harvardArtMuseumGallery/harvardArtMuseumGallery.css

h1 {
  font-size: 2em;
  font-weight: bolder;
  margin-bottom: .5em;
}

.gallery-container .row {
  display: flex;
}

c-harvard-art-museum-gallery-item {
  margin: 1em;
  flex-grow: 1;
  width: calc(25% - 2em);
}

.pagination-container {
  text-align: center;
}

.pagination-container .current-page {
  display: inline-block;
  margin: 0 .5em;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I’ve saved the most interesting for last! The JavaScript file includes our pre-fetching logic and page-rolling algorithm.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# force-app/main/default/lwc/harvardArtMuseumGallery/harvardArtMuseumGallery.js

import { LightningElement, api } from "lwc";

const BASE_URL =
  "https://api.harvardartmuseums.org/object?apikey=$1&amp;amp;size=8&amp;amp;hasimage=1&amp;amp;page=$2";

export default class HarvardArtMuseumGallery extends LightningElement {
  @api harvardApiKey;

  error;
  records;
  currentPage = 1;
  pagesCache = [];

  chunkArray(array, size) {
    let result = [];
    for (let value of array) {
      let lastArray = result[result.length - 1];
      if (!lastArray || lastArray.length === size) {
        result.push([value]);
      } else {
        lastArray.push(value);
      }
    }

    return result.map((item, index) =&amp;gt; ({ value: item, index: index }));
  }

  nextPage() {
    this.currentPage++;
    this.changePage(this.currentPage);
  }

  previousPage() {
    if (this.currentPage &amp;gt; 1) {
      this.currentPage--;
      this.changePage(this.currentPage);
    }
  }

  connectedCallback() {
    this.changePage(1);
  }

  async changePage(page) {
    const lowerBound = Math.max(1, page - 3); // API pages are 1-based
    const upperBound = page + 3;

    // Cache the extra pages
    const promises = [];
    for (let i = lowerBound; i &amp;lt;= upperBound; i++) {
      promises.push(this.getRecords(i));
    }

    Promise.all(promises).then(() =&amp;gt; console.log('finished caching pages'));

    // Now this.pages has all the data for the current page and the next/previous pages
    // The idea is that we will start the previous promises in order to prefetch the pages
    // and here we will wait for the current page to either be delivered from the cache or
    // the api call
    this.records = await this.getRecords(page);
  }

  async getRecords(page) {
    if (page in this.pagesCache) {
      return Promise.resolve(this.pagesCache[page]);
    }

    const url = BASE_URL.replace("$1", this.harvardApiKey).replace("$2", page);
    return fetch(url)
      .then((response) =&amp;gt; {
        if (!response.ok) {
          this.error = response;
        }

        return response.json();
      })
      .then((responseJson) =&amp;gt; {
        this.pagesCache[page] = this.chunkArray(responseJson.records, 4);
        return this.pagesCache[page];
      })
      .catch((errorResponse) =&amp;gt; {
        this.error = errorResponse;
      });
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice that we are decorating the harvardApiKey with &lt;code&gt;@api&lt;/code&gt;. This is how the &lt;code&gt;targetConfig&lt;/code&gt; property from our XML file will be injected into our component. Most of the code in this file facilitates changing pages and chunking the response so that we get rows of four gallery items. Pay attention to &lt;code&gt;changePage&lt;/code&gt; as well as &lt;code&gt;getRecords&lt;/code&gt;: this is where the magic happens. First, notice that &lt;code&gt;changePage&lt;/code&gt; calculates a range of pages around whatever the currently requested page is. If the requested page is five, then we will cache all pages from two through eight. We then loop over the pages and create a promise for each page.&lt;/p&gt;

&lt;p&gt;Originally, I was thinking that we’d need to &lt;code&gt;await&lt;/code&gt; the &lt;code&gt;Promise.all&lt;/code&gt; in order to avoid loading a page twice. But then I realized an occasional duplicate request is a small cost to pay for not waiting on every page to come back from the API. So the current algorithm is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;User requests page five.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bounds are calculated as page two through page eight, and promises are created for those requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since we aren’t waiting for the promises to return, we will again request page five and make an extra API request (but this only happens for pages that aren’t in the cache).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;So let's say that the user progresses to page six.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bounds are calculated as pages three through nine, and promises are created for those requests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Since we already have pages two through eight in the cache, and since we didn’t await on those promises, page six will immediately load from the cache while the promise for page nine is being fulfilled (since it is the only page missing from the cache).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
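
&lt;p&gt;To see that caching behavior outside of LWC, here is a minimal sketch of the same flow in plain JavaScript. The &lt;code&gt;fetchPage&lt;/code&gt; stub stands in for the real API call, and the counter exists only to make the duplicate-request behavior visible.&lt;/p&gt;

```javascript
// Minimal sketch of the prefetch-without-await strategy.
// fetchPage is a stand-in for the real Harvard API request.
const pagesCache = {};
let fetchCount = 0;

async function fetchPage(page) {
  fetchCount++; // counts actual "API" requests
  return `records for page ${page}`;
}

async function getRecords(page) {
  if (page in pagesCache) return pagesCache[page];
  const records = await fetchPage(page);
  pagesCache[page] = records;
  return records;
}

async function changePage(page) {
  const lowerBound = Math.max(1, page - 3);
  const upperBound = page + 3;

  // Start prefetching the surrounding pages, but do NOT await them:
  // the current page resolves from the cache or its own request.
  const prefetch = [];
  for (let i = lowerBound; i <= upperBound; i++) prefetch.push(getRecords(i));
  Promise.all(prefetch).catch(() => {});

  return getRecords(page);
}
```

&lt;p&gt;Requesting page five fires eight requests (pages two through eight, plus one duplicate for page five because its prefetch hasn’t settled yet); moving to page six afterwards only fetches page nine.&lt;/p&gt;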

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;And there you have it! We’ve explored concurrency and parallelism. We learned how to build an async/await flow in serial (which you should never do). We then upgraded our serial flow to be in parallel and learned how to wait for all the promises to resolve before continuing. Finally, we’ve built a Lightning Web Component for the Harvard Art Museum using async/await and &lt;code&gt;Promise.all&lt;/code&gt;. (Although in this case, we didn't need the &lt;code&gt;Promise.all&lt;/code&gt; since the algorithm works better if we don't wait for all the promises to resolve before continuing on.)&lt;/p&gt;

&lt;p&gt;Thanks for reading and feel free to leave any comments and questions below.&lt;/p&gt;

&lt;p&gt;Citations:&lt;/p&gt;

&lt;p&gt;[1] &lt;a href="https://stackoverflow.com/questions/1050222/what-is-the-difference-between-concurrency-and-parallelism" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/1050222/what-is-the-difference-between-concurrency-and-parallelism&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2] &lt;a href="https://github.com/harvardartmuseums/api-docs" rel="noopener noreferrer"&gt;https://github.com/harvardartmuseums/api-docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3] &lt;a href="https://trailhead.salesforce.com/content/learn/projects/quick-start-lightning-web-components/set-up-salesforce-dx" rel="noopener noreferrer"&gt;https://trailhead.salesforce.com/content/learn/projects/quick-start-lightning-web-components/set-up-salesforce-dx&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[4] &lt;a href="https://trailhead.salesforce.com/content/learn/projects/quick-start-lightning-web-components/create-a-hello-world-lightning-web-component" rel="noopener noreferrer"&gt;https://trailhead.salesforce.com/content/learn/projects/quick-start-lightning-web-components/create-a-hello-world-lightning-web-component&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[5] &lt;a href="https://trailhead.salesforce.com/sample-gallery" rel="noopener noreferrer"&gt;https://trailhead.salesforce.com/sample-gallery&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[6] &lt;a href="https://developer.salesforce.com/docs/component-library/overview/components" rel="noopener noreferrer"&gt;https://developer.salesforce.com/docs/component-library/overview/components&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[7] &lt;a href="https://github.com/bloveless/AsyncAwaitPromiseAllLWC" rel="noopener noreferrer"&gt;https://github.com/bloveless/AsyncAwaitPromiseAllLWC&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[8] &lt;a href="https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_lightningcomponentbundle.htm" rel="noopener noreferrer"&gt;https://developer.salesforce.com/docs/atlas.en-us.api_meta.meta/api_meta/meta_lightningcomponentbundle.htm&lt;/a&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>concurrency</category>
      <category>lightningwebcomponents</category>
    </item>
    <item>
      <title>Logging Best Practices: Part 2</title>
      <dc:creator>Brennon Loveless</dc:creator>
      <pubDate>Wed, 03 Feb 2021 13:00:48 +0000</pubDate>
      <link>https://dev.to/bloveless/logging-best-practices-part-2-3916</link>
      <guid>https://dev.to/bloveless/logging-best-practices-part-2-3916</guid>
      <description>&lt;h2&gt;
  
  
  Best Practices for Logging
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://dev.to/bloveless/logging-v-monitoring-part-1-47lk"&gt;part one&lt;/a&gt; I discussed why monitoring matters and some ways to implement that. Now let’s talk about some best practices we can implement to make monitoring easier. Let’s start with some best practices for logging — formatting, context, and level.&lt;/p&gt;

&lt;p&gt;First, be sure you &lt;a href="https://www.loomsystems.com/blog/single-post/2017/01/26/9-logging-best-practices-based-on-hands-on-experience"&gt;&lt;strong&gt;“log a lot and then log some more.&lt;/strong&gt;”&lt;/a&gt; Log everything you might need in both the happy path and error path since you’ll only be armed with these logs when another error occurs in the future.&lt;/p&gt;

&lt;p&gt;Until recently, I didn’t think I needed as many logs in the happy path. Meanwhile, my error path is full of helpful logging messages. Here is one example that just happened to me this week. I had some code that would read messages from a Kafka topic, validate them, and then pass them off to the DB to be persisted. Well, I forgot to actually push the message into the validated-messages array, which resulted in it always being empty. My point here is that everything was part of the happy path, so there weren’t any error logs for me to check. It took me a full day of adding logging and enabling debugging in production to find my mistake (that I forgot to push to the array). If I had messages like “Validating 1000 messages” and “Found 0 valid messages to be persisted,” it would have been immediately obvious that none of my messages were making it through. I could have solved it in an hour if I had “logged a lot and then logged some more.”&lt;/p&gt;

&lt;h2&gt;
  
  
  Formatting
&lt;/h2&gt;

&lt;p&gt;This is another logging tip that I had taken for granted until recently. The format of your log messages matters, and it matters a lot.&lt;/p&gt;

&lt;p&gt;People use &lt;a href="https://hackernoon.com/log-everything-as-json-hmq32ax"&gt;JSON-formatted logs&lt;/a&gt; more and more these days and I’m starting to lean into it myself. After all, there are many benefits to using JSON as your logging format. That said, if you pick a different log format, stick to it across all your systems and services. One of the major JSON-format benefits is that it is super easy to have generic error messages and then add additional data/context. For example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "message": "Validating messages", 
  "message_count": 1000
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;or&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "message": "Persisting messages", 
  "message_count": 0
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These messages are harder for humans to read, but easy for machines to group, filter, and read. In the end, we want to push as much processing onto the machine as possible anyway!&lt;/p&gt;

&lt;p&gt;Another tip about your actual log message: In many cases, you’ll be looking to find similar events that occurred. Maybe you found an error and you want to know how many times it occurred over the last seven days. If the error message is something like “System X failed because Z &amp;gt; Y” — where X, Y, and Z are all changing between each error message — then it will be difficult to classify those errors as the same.&lt;/p&gt;

&lt;p&gt;To solve this, use a general message for the actual log message so you can search by the exact error wording. For example: “This system failed because there are more users than there are slots available.” Within the context of the log message, you can attach all the variables specific to this current failure.&lt;/p&gt;

&lt;p&gt;This does require you to have an advanced-enough logging framework to attach context. But if you are using JSON for your log messages, then you could have the “message” field be the same string for every event; any other context would appear as additional fields in the JSON blob. This way, grouping messages is easy, and specific error data is still logged. If you are using a JSON format, then I’d suggest you have both a “message” field for grouping and a “display” field for a human-readable rendering. That way, you get the best of both worlds.&lt;/p&gt;
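
&lt;p&gt;As a quick illustration of the grouping payoff, here is a sketch in plain JavaScript. The field names and log entries are invented for the example.&lt;/p&gt;

```javascript
// Each entry keeps a stable "message" for grouping plus a
// human-readable "display" with the event-specific values.
const logs = [
  { message: 'Not enough slots', display: 'System A failed because 12 > 10', users: 12, slots: 10 },
  { message: 'Not enough slots', display: 'System B failed because 7 > 5', users: 7, slots: 5 },
  { message: 'Validating messages', message_count: 1000 },
];

// Grouping by the stable message field is now trivial.
function countByMessage(entries) {
  const counts = {};
  for (const entry of entries) {
    counts[entry.message] = (counts[entry.message] || 0) + 1;
  }
  return counts;
}
```

&lt;p&gt;Counting "how many times did this error occur over the last seven days" becomes a simple aggregation on one field, no matter how the variables changed between events.&lt;/p&gt;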

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;Rarely does a single log message paint the entire picture; including additional context with it will pay off. There is nothing more frustrating than when you get an alert saying “All your base are belong to us” and you have no idea what bases are missing or who “us” is referencing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ydDgEC7s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2484/0%2AifKkc7eVEApwn9lr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ydDgEC7s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2484/0%2AifKkc7eVEApwn9lr.jpg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whenever you are writing a log message, imagine receiving it at 1am. Include all the relevant information your sleepy self would need to look into the issue as quickly as possible. You may also choose to log a transaction ID as part of your context. We’ll chat about those later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Level
&lt;/h2&gt;

&lt;p&gt;Always use the correct level when writing your log messages. Ideally, your team will have different uses for the different log levels. Make sure you and your team are logging at the agreed-upon level when writing messages.&lt;/p&gt;

&lt;p&gt;Some examples: INFO for general system state and probably happy-path code, ERROR for exceptions and non-happy-path code, WARN for things that might cause errors later or are approaching a limit, and DEBUG for everything else. Obviously, this is just how I use some of the log levels. Try to lay out a log-level strategy with your team and stick to it.&lt;/p&gt;

&lt;p&gt;Also, ensure that whatever logging aggregator you use allows for filtering by specific log levels or groups of log levels. When you view the state of your system, you probably don’t care about DEBUG level logs and want to just search for everything INFO and above, for example.&lt;/p&gt;
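
&lt;p&gt;A level threshold is easy to sketch. This is an illustrative toy, not a real logging framework; any real library gives you the same filtering out of the box.&lt;/p&gt;

```javascript
// Numeric weights let us compare levels for filtering.
const LEVELS = { DEBUG: 10, INFO: 20, WARN: 30, ERROR: 40 };

// sink receives only entries at or above the threshold.
function makeLogger(threshold, sink) {
  return (level, message, context = {}) => {
    if (LEVELS[level] < LEVELS[threshold]) return; // filtered out
    sink({ level, message, ...context });
  };
}
```

&lt;p&gt;With the threshold set to INFO, DEBUG messages never reach the sink, which is exactly the "everything INFO and above" search described above.&lt;/p&gt;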

&lt;h2&gt;
  
  
  Log Storage
&lt;/h2&gt;

&lt;p&gt;In order for your logs to be accessible, you’ll need to store them somewhere. These days, it is unlikely that you’ll have a single log file that represents your entire system. Even if you have a monolithic application, you likely host it on more than one server. As such, you’ll need a system that can aggregate all these log files.&lt;/p&gt;

&lt;p&gt;I prefer to store my logs in Elasticsearch, but if you are in another ecosystem like &lt;a href="https://www.heroku.com"&gt;Heroku&lt;/a&gt;, then you can use one of the provided &lt;a href="https://elements.heroku.com/addons/categories/logging"&gt;logging add-ons&lt;/a&gt;. There are even some free ones to get you started.&lt;/p&gt;

&lt;p&gt;You may also prefer third-party logging services like Splunk or Datadog to ship your logs and monitor, analyze, and alert from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Filtering
&lt;/h2&gt;

&lt;p&gt;If you have logged all your messages at the correct levels and have used easily group-able log messages, then filtering becomes simple in any system configuration. Writing a query in Elasticsearch will be so much simpler when you’ve planned your log messages with this in mind.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transaction IDs
&lt;/h2&gt;

&lt;p&gt;Let’s face it: Gone are the days when a single service handled the full request path. Only in rare cases or demo projects will your services be completely isolated from other services. Even something as simple as a front-end and a separate backend API can benefit from having transaction IDs. The idea is that you generate a transaction ID (which can be as simple as a UUID) as early as possible in your request path. That transaction ID gets passed through every request and stored with the data in whichever systems store it. This way, when there is an error four of five levels deep in your system, you can trace that request back to when the user first clicked the button. Using transaction IDs makes it easier to bridge the gap between systems. If you see an error in InfluxDB, then you can use the transaction ID to find any related messages in Elasticsearch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other interesting metrics
&lt;/h2&gt;

&lt;p&gt;Just recording log messages probably won’t provide the whole picture of your system. Here are a few more metrics that may interest you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Throughput
&lt;/h2&gt;

&lt;p&gt;Keeping track of how quickly your system processes a batch of messages — or finishes some job — can easily illuminate subtler errors. Throughput monitoring can also reveal errors or slowness in your downstream systems: maybe a database is acting slower than usual, or switched to an inefficient query plan.&lt;/p&gt;
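
&lt;p&gt;The metric itself is just a count over a time window; the numbers below are invented:&lt;/p&gt;

```javascript
// Messages processed per second for a batch.
function throughput(startMs, endMs, processed) {
  const seconds = (endMs - startMs) / 1000;
  return processed / seconds;
}

// A batch of 1000 messages over 2 seconds is 500 msg/s;
// a sudden drop in this number is the signal to investigate.
```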

&lt;h2&gt;
  
  
  Success vs Error
&lt;/h2&gt;

&lt;p&gt;Of course, no system will ever have a 100% success rate. Maybe you expect your system to return a success response code at least 95% of the time. Logging your response codes will help you gauge whether your success rates are dropping below that expectation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Response Times
&lt;/h2&gt;

&lt;p&gt;The last interesting metric I’ll discuss is response times. Especially when you’ve got a bunch of developers all pushing to a single code base, it is difficult to realize when you’ve impacted the response times of another endpoint. Capturing the overall response time of every request may give you the insight necessary to realize when response times increase. If you catch it early enough, it may not be hard to identify the commit that caused the issue.&lt;/p&gt;
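
&lt;p&gt;One lightweight way to capture this is to wrap each handler with a timer; the &lt;code&gt;timed&lt;/code&gt; helper below is a sketch, not part of any particular framework.&lt;/p&gt;

```javascript
// Wrap an async handler so every call records its elapsed milliseconds,
// whether the handler succeeds or throws.
function timed(handler, record) {
  return async (...args) => {
    const start = Date.now();
    try {
      return await handler(...args);
    } finally {
      record(Date.now() - start);
    }
  };
}
```

&lt;p&gt;Shipping those recorded durations to a time-series store like InfluxDB gives you the per-endpoint trend lines that make a regression obvious.&lt;/p&gt;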

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, I’ve talked about the differences between logging and monitoring and why they are both necessary in a robust system. I’ve talked about some monitoring practices as well as some monitoring tools I like using. We experimented with a system and learned how to install and set up some monitoring tools for that system. Finally, I talked about some logging best practices that will make your life much easier and how better logging will make your monitoring tools much more useful.&lt;/p&gt;

&lt;p&gt;If you have any questions, comments, or suggestions please leave them in the comments below and together we can all implement better monitors and build more reliable systems!&lt;/p&gt;

</description>
      <category>logging</category>
      <category>bestpractices</category>
      <category>monitoring</category>
      <category>tipsandtricks</category>
    </item>
    <item>
      <title>Logging V Monitoring: Part 1</title>
      <dc:creator>Brennon Loveless</dc:creator>
      <pubDate>Wed, 03 Feb 2021 12:56:27 +0000</pubDate>
      <link>https://dev.to/bloveless/logging-v-monitoring-part-1-47lk</link>
      <guid>https://dev.to/bloveless/logging-v-monitoring-part-1-47lk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;What do you do when your application is down? Better yet: How can you &lt;em&gt;predict&lt;/em&gt; when your application may go down? How do you begin an investigation in the most efficient way possible and resolve issues quickly?&lt;/p&gt;

&lt;p&gt;Understanding the difference between logging and monitoring is critical, and can make all the difference in your ability to trace issues back to their root cause. If you confuse the two or use one without the other, you’re setting yourself up for long nights and weekends debugging your app.&lt;/p&gt;

&lt;p&gt;In this article, we’ll look at how to effectively log and monitor your systems. I’ll tell you about a few good practices that I’ve learned over the years and some interesting metrics that you may want to monitor in your systems. Finally, I’ll show you a small web application that had no monitoring, alerting, or logging. I’ll demonstrate how I fixed the logging and how I’ve implemented monitoring and alerting around those logs.&lt;/p&gt;

&lt;p&gt;Everyone has some sort of logging in their applications, even if it’s just writing to a file to review later. By the end of this article, I hope to convince you that logging without monitoring is about as good as no logging at all. Along the way, we can review some best practices for becoming a better logger.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging vs Monitoring
&lt;/h2&gt;

&lt;p&gt;For a while, I conflated logging and monitoring. At least, I thought they were two sides of the same coin. I hadn’t considered how uniquely necessary they each were, and how they supported each other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logging&lt;/strong&gt; tells you &lt;em&gt;what&lt;/em&gt; happened, and gives you the raw data to track down the issue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; tells you &lt;em&gt;how&lt;/em&gt; your application is behaving and can alert you when there are issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Can’t Have One Without the Other
&lt;/h2&gt;

&lt;p&gt;Let’s consider a system that has fantastic logging but no monitoring. It’s obvious why this doesn’t work. No matter how good our logs are, I guarantee that nobody actively reads them — especially when our logs get verbose or use formatting like JSON. It is impractical to assume that someone will comb all those logs and look for errors. Maybe when we have a small set of beta users, we can expect them to report every error so we can go back and look at what happened. But what if we have a million users? We can’t expect every one of those users to report each error they encounter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XVH0WIuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4240/0%2AkRs3ZGiGshMrMJYE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XVH0WIuR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4240/0%2AkRs3ZGiGshMrMJYE.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where monitoring comes in. We need to put the systems in place that can do the looking up and coordinating for us. We need a system that will let us know when an error happens and, if it is good enough, why that error occurred.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring
&lt;/h2&gt;

&lt;p&gt;Let’s begin by talking about monitoring goals and what makes a great monitoring system. First, our system must be able to notify us when it detects errors. Second, we should be able to create alerts based on the needs of our system.&lt;/p&gt;

&lt;p&gt;We want to lay out the specific types of events that will determine if our system is performing correctly or not. You may want to be alerted about every error that gets logged. Alternatively, you may be more interested in how fast your system responds in certain cases. Or, you might be focused on whether your error rates are normal or increasing. You may also be interested in security monitoring and which solution suits your case. For some additional examples of things to monitor, I’d suggest you check out a great article written by Heroku &lt;a href="https://devcenter.heroku.com/articles/logging-best-practices-guide?preview=1#example-logging-use-cases"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One final thing to consider is how our monitoring system can point us toward solutions. This will vary greatly depending on your application; still, it is something to consider when picking your tools.&lt;/p&gt;

&lt;p&gt;Speaking of tools, here are some of my favorite tools to use when I’m monitoring an application. I’m sure there are more specific ones out there. If you’ve got some tools that you really love, then feel free to leave them in the comments!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Elasticsearch&lt;/strong&gt;: This is where I store my logs. It lets me set up monitors and alerts in Grafana based on log messages. With Elasticsearch, I can also do full-text searches when I’m trying to find an error’s cause.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kibana&lt;/strong&gt;: This lets me easily perform live queries against Elasticsearch to assist in debugging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt;: Here, I create dashboards that provide high-level overviews of my applications. I also use Grafana for its alerting system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4OVAloHr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AovRTA5ZglVH4dcmM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4OVAloHr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/0%2AovRTA5ZglVH4dcmM.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;InfluxDB&lt;/strong&gt;: This time-series database records things like response times, response codes, and any interesting point-in-time data (like success vs error messages within a batch).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pushover&lt;/strong&gt;: When working as a single engineer on a project, Pushover gives me a simple and cheap notification interface. It pushes a notification directly to my phone whenever an alert is triggered. Grafana also has native support for Pushover, so I only have to put in a few API keys and I am ready to go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PagerDuty&lt;/strong&gt;: If you are working on a larger project or with a team, then I would suggest &lt;a href="https://www.pagerduty.com"&gt;PagerDuty&lt;/a&gt;. With it, you can schedule specific times when different people (like individuals on your team) receive notifications. You can also create escalation policies in case someone can’t respond quickly enough. Again, Grafana offers native support for PagerDuty.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Heroku&lt;/strong&gt;: There are other monitoring best practices in this &lt;a href="https://devcenter.heroku.com/articles/logging-best-practices-guide?preview=1"&gt;article from Heroku&lt;/a&gt;. If you are within the Heroku ecosystem, then you can look at their &lt;a href="https://elements.heroku.com/addons#logging"&gt;logging add-ons&lt;/a&gt; (most of which include alerting).&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring Example Project
&lt;/h2&gt;

&lt;p&gt;Let’s look at an example project: a Kubernetes-powered web application behind an NGINX proxy, whose log output and response codes/times we want to monitor. If you aren’t interested in the implementation of these tools, feel free to skip to the next section.&lt;/p&gt;

&lt;p&gt;Kubernetes automatically captures everything a container writes to stdout and stderr and stores it in files on the node’s file system. We can monitor these logs easily, so long as our application correctly writes its logs to those streams. As an aside, it is also possible to send your logs directly to Elasticsearch from your application, but for our example project we want the lowest barrier to entry.&lt;/p&gt;
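
&lt;p&gt;For example, if the application were a Node.js service (Node is just for illustration here), writing one JSON object per line to stdout is all it takes for the log pipeline below to pick it up:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Emit one JSON log entry per line on stdout. Kubernetes captures this
// stream into /var/log/containers, where Filebeat can collect it.
function logLine(level, message, fields) {
  const entry = Object.assign({
    level: level,
    message: message,
    timestamp: new Date().toISOString(),
  }, fields);
  console.log(JSON.stringify(entry));
  return entry;
}

// The batchId field here is a made-up example.
logLine("error", "failed to parse message", { batchId: "hypothetical-123" });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;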

&lt;p&gt;Now that our application is writing logs to the correct locations, let’s set up Elasticsearch, Kibana, and Filebeat to collect the output from the container. Additional and more up-to-date information can be found on the &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html"&gt;Elastic Cloud Quickstart page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, we &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html"&gt;deploy the Elastic Cloud Operator&lt;/a&gt; and RBAC rules.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://download.elastic.co/downloads/eck/1.3.1/all-in-one.yaml

# Monitor the output from the operator
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let’s actually &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html"&gt;deploy the Elasticsearch cluster&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF

# Wait for the cluster to go green
kubectl get elasticsearch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that we have an Elasticsearch cluster, let’s &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html"&gt;deploy Kibana&lt;/a&gt; so we can visually query Elasticsearch.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF

# Get information about the kibana deployment
kubectl get kibana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Review &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html"&gt;this page&lt;/a&gt; for more information about accessing Kibana.&lt;/p&gt;

&lt;p&gt;Finally, we’ll add Filebeat, &lt;a href="https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-quickstart.html"&gt;using this guide&lt;/a&gt;, to monitor the Kubernetes logs and ship them to Elasticsearch.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 7.10.2
  elasticsearchRef:
    name: quickstart
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
EOF

# Wait for the beat to go green
kubectl get beat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Since our application uses NGINX as a proxy, we can use &lt;a href="https://github.com/influxdata/nginx-influxdb-module"&gt;this wonderful module&lt;/a&gt; to write the response codes and times to InfluxDB.&lt;/p&gt;

&lt;p&gt;Next, you can follow &lt;a href="https://github.com/grafana/helm-charts/blob/main/charts/grafana/README.md"&gt;this guide&lt;/a&gt; to get Grafana running in your Kubernetes cluster. After that, &lt;a href="https://grafana.com/docs/grafana/latest/datasources/"&gt;set up the two data sources&lt;/a&gt; we are using: InfluxDB and Elasticsearch.&lt;/p&gt;

&lt;p&gt;Finally, set up whatever &lt;a href="https://grafana.com/docs/grafana/latest/alerting/notifications/"&gt;alert channel notifiers&lt;/a&gt; you wish to use. In my case, I’d use Pushover since I’m just one developer. You may be more interested in something like &lt;a href="https://www.pagerduty.com/"&gt;PagerDuty&lt;/a&gt; if you need a fully-featured notification channel.&lt;/p&gt;

&lt;p&gt;And there you have it! We now have an application that we can build dashboards and alerts for using Grafana.&lt;/p&gt;

&lt;p&gt;This setup can notify us about all sorts of issues. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;ERROR-level messages showed up in the logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Our system is returning too many error response codes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Our application is responding more slowly than usual.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We did all this without making many changes to our application, and yet we now have a lot of tools available to us. We can instrument our code to record interesting points in time using InfluxDB. For example, if we received a batch of 500 messages and 39 of them could not be parsed, we can write a point to InfluxDB recording that we received 461 valid messages and 39 invalid messages. We can then set up an alert in Grafana to let us know if that ratio of valid to invalid messages spikes.&lt;/p&gt;
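
&lt;p&gt;As a sketch of what such a point could look like (the measurement and tag names here are invented for illustration), InfluxDB’s line protocol makes this a one-line write:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Build an InfluxDB line-protocol point: measurement,tags fields timestamp.
// Nanosecond timestamps overflow JavaScript numbers, so pass them as strings.
function batchPoint(app, valid, invalid, timestampNs) {
  const fields = `valid=${valid}i,invalid=${invalid}i`;
  return `batch_results,app=${app} ${fields} ${timestampNs}`;
}

// 500 messages received, 39 of which failed to parse:
const line = batchPoint("message-consumer", 461, 39, "1257894000000000000");
// line is "batch_results,app=message-consumer valid=461i,invalid=39i 1257894000000000000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;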

&lt;p&gt;Essentially, anything interesting enough to code is interesting enough to monitor, and we now have the tools necessary to monitor anything interesting in our application.&lt;/p&gt;

&lt;p&gt;As a small bonus, here is a Pushover alert that I received from a setup similar to the one described above. I accidentally took down my father’s website during an experiment and this was the result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SZSg8jZN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2AmFFZkyHz_xp4uQ79.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SZSg8jZN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/5200/0%2AmFFZkyHz_xp4uQ79.jpeg" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, I’ll give you a break to digest everything I’ve talked about. In &lt;a href="https://dev.to/bloveless/logging-best-practices-part-2-3916"&gt;part two&lt;/a&gt; I’ll be discussing some logging best practices.&lt;/p&gt;

</description>
      <category>logging</category>
      <category>monitoring</category>
      <category>grafana</category>
      <category>alerts</category>
    </item>
    <item>
      <title>Adding IoT to my home office desk: Part 2</title>
      <dc:creator>Brennon Loveless</dc:creator>
      <pubDate>Thu, 19 Nov 2020 22:13:32 +0000</pubDate>
      <link>https://dev.to/bloveless/adding-iot-to-my-home-office-desk-part-2-2fcd</link>
      <guid>https://dev.to/bloveless/adding-iot-to-my-home-office-desk-part-2-2fcd</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/bloveless/adding-iot-to-my-home-office-desk-part-1-28nc"&gt;part one&lt;/a&gt; I discussed the first version/Bluetooth version of my desk upgrade.&lt;/p&gt;

&lt;p&gt;In this article, I’ll discuss upgrading the desk to use Google Smart Home so I can control my desk with my voice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WiFi and Google Smart Home&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Adding WiFi to the desk was actually pretty simple. I swapped out the microcontroller from the Nordic NRF52 to an ESP32 since the ESP32 has WiFi built in. Most of the control software was portable since it was written in C++, and both devices could be programmed with &lt;a href="http://platform.io/"&gt;Platform.IO&lt;/a&gt; and the Arduino libraries, including my own &lt;a href="https://github.com/bloveless/tfmini-s"&gt;tfmini-s&lt;/a&gt; library that I wrote to measure the current height of the desk.&lt;/p&gt;

&lt;p&gt;Here is the necessary system architecture to get my desk to talk to Google. Let’s first talk about the interaction between myself and Google.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Crmn0n7Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2mgba2vmy3ox8fxslne5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Crmn0n7Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2mgba2vmy3ox8fxslne5.jpg" alt="Full architecture/technology diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Full architecture/technology diagram&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With the desk now WiFi-enabled, it was time to figure out how to interface with Google Smart Home. Google Smart Home is controlled through &lt;a href="https://developers.google.com/assistant/smarthome/develop/create"&gt;Smart Home Actions&lt;/a&gt;. What is interesting about Smart Home Actions is that your service acts as the OAuth2 server, not as a client. Most of the work that I put into the server was implementing the OAuth2 Node.js Express app, which would eventually find its way up to Heroku and act as the proxy between Google and my desk.&lt;/p&gt;

&lt;p&gt;Luckily, a decent server implementation already exists across two libraries. The first is the underlying server implementation, called node-oauth2-server and found &lt;a href="https://oauth2-server.readthedocs.io/en/latest/"&gt;here&lt;/a&gt;. The second is the adapter that hooks the library up to Express, called express-oauth-server and found &lt;a href="https://github.com/oauthjs/express-oauth-server"&gt;here&lt;/a&gt;. The example in the GitHub repo for the adapter left a lot to be desired and didn’t really work, and it took me a while to reverse engineer how to use the two libraries together. I now have a decent model that supports registering accounts, refreshing tokens, and validating tokens. The following code snippet shows all the functions that are necessary for the OAuth2 server, but you can see the full file &lt;a href="https://github.com/bloveless/esp32-iot-desk-server/blob/main/model.js"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { Pool } = require("pg");
const crypto = require("crypto");
const pool = new Pool({
   connectionString: process.env.DATABASE_URL
});

module.exports.pool = pool;
module.exports.getAccessToken = (bearerToken) =&amp;gt; {...};
module.exports.getClient = (clientId, clientSecret) =&amp;gt; {...};
module.exports.getRefreshToken = (bearerToken) =&amp;gt; {...};
module.exports.getUser = (email, password) =&amp;gt; {...};
module.exports.getUserFromAccessToken = (token) =&amp;gt; {...};
module.exports.getDevicesFromUserId = (userId) =&amp;gt; {...};
module.exports.getDevicesByUserIdAndIds = (userId, deviceIds) =&amp;gt; {...};
module.exports.setDeviceHeight = (userId, deviceId, newCurrentHeight) =&amp;gt; {...};
module.exports.createUser = (email, password) =&amp;gt; {...};
module.exports.saveToken = (token, client, user) =&amp;gt; {...};
module.exports.saveAuthorizationCode = (code, client, user) =&amp;gt; {...};
module.exports.getAuthorizationCode = (code) =&amp;gt; {...};
module.exports.revokeAuthorizationCode = (code) =&amp;gt; {...};
module.exports.revokeToken = (code) =&amp;gt; {...};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next is setting up the actual Express app. Below are the endpoints necessary for the OAuth2 server, but you can read the full file &lt;a href="https://github.com/bloveless/esp32-iot-desk-server/blob/main/index.js"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require("express");
const OAuth2Server = require("express-oauth-server");
const bodyParser = require("body-parser");
const cookieParser = require("cookie-parser");
const flash = require("express-flash-2");
const session = require("express-session");
const pgSession = require("connect-pg-simple")(session);
const morgan = require("morgan");

const { google_actions_app } = require("./google_actions");
const model = require("./model");
const { getVariablesForAuthorization, getQueryStringForLogin } = require("./util");
const port = process.env.PORT || 3000;

// Create an Express application.
const app = express();
app.set("view engine", "pug");
app.use(morgan("dev"));

// Add OAuth server.
app.oauth = new OAuth2Server({
   model,
   debug: true,
});

// Add body parser.
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use(express.static("public"));

// initialize cookie-parser to allow us access the cookies stored in the browser.
app.use(cookieParser(process.env.APP_KEY));

// initialize express-session to allow us track the logged-in user across sessions.
app.use(session({...}));

app.use(flash());

// This middleware will check if user's cookie is still saved in browser and user is not set, then automatically log the user out.
// This usually happens when you stop your express server after login, your cookie still remains saved in the browser.
app.use((req, res, next) =&amp;gt; {...});

// Post token.
app.post("/oauth/token", app.oauth.token());

// Get authorization.
app.get("/oauth/authorize", (req, res, next) =&amp;gt; {...}, app.oauth.authorize({...}));

// Post authorization.
app.post("/oauth/authorize", function (req, res) {...});
app.get("/log-in", (req, res) =&amp;gt; {...});
app.post("/log-in", async (req, res) =&amp;gt; {...});
app.get("/log-out", (req, res) =&amp;gt; {...});
app.get("/sign-up", async (req, res) =&amp;gt; {...});
app.post("/sign-up", async (req, res) =&amp;gt; {...});
app.post("/gaction/fulfillment", app.oauth.authenticate(), google_actions_app);
app.get('/healthz', ((req, res) =&amp;gt; {...}));
app.listen(port, () =&amp;gt; {
   console.log(`Example app listening at port ${port}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is quite a bit of code there, but I’ll explain the highlights. The two routes used by the OAuth2 server are /oauth/token and /oauth/authorize; these handle issuing new tokens and refreshing expired ones. Next is getting the server to respond to the actual Google Action via the /gaction/fulfillment endpoint. Finally, there is a /healthz endpoint that I’ll talk about at the end of this article.&lt;/p&gt;

&lt;p&gt;The /gaction/fulfillment endpoint uses a middleware called app.oauth.authenticate() and all of my hard work getting the OAuth2 server working was so that this middleware would work. This middleware validates that the Bearer token that Google provides us references a valid user and has not expired. Next, the route sends the request and response to a &lt;code&gt;google_actions_app&lt;/code&gt; object. Google sends requests to your server in a specific format and provides a library to help you parse and process those requests. Below are the functions necessary to communicate with Google but you can view the entire file &lt;a href="https://github.com/bloveless/esp32-iot-desk-server/blob/main/google_actions.js"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { smarthome } = require('actions-on-google');
const mqtt = require('mqtt');
const mqtt_client = mqtt.connect(process.env.CLOUDMQTT_URL);

const model = require('./model');
const { getTokenFromHeader } = require('./util');

mqtt_client.on('connect', () =&amp;gt; {
   console.log('Connected to mqtt');
});

const updateHeight = {
   "preset one": (deviceId) =&amp;gt; {
       mqtt_client.publish(`/esp32_iot_desk/${deviceId}/command`, "1");
   },
   "preset two": (deviceId) =&amp;gt; {
       mqtt_client.publish(`/esp32_iot_desk/${deviceId}/command`, "2");
   },
   "preset three": (deviceId) =&amp;gt; {
       mqtt_client.publish(`/esp32_iot_desk/${deviceId}/command`, "3");
   },
};

const google_actions_app = smarthome({...});
google_actions_app.onSync(async (body, headers) =&amp;gt; {...});
google_actions_app.onQuery(async (body, headers) =&amp;gt; {...});
google_actions_app.onExecute(async (body, headers) =&amp;gt; {...});
module.exports = { google_actions_app };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you first add the Smart Home Action to your Google account, Google sends a sync request to learn which devices your account has access to, followed by a query request to determine each device’s current state. After that, Google sends execute requests, which is how it tells your devices to actually do things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Smart Home Device Traits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google controls your devices through device traits, which it uses to provide UI elements on your Google devices as well as to build communication patterns for voice control. Some of the traits include ColorSetting, Modes, OnOff, and StartStop. It took me a while to decide which trait would work best for my application, but I eventually settled on Modes.&lt;/p&gt;

&lt;p&gt;You can think of a mode as a drop-down where you select one of N predefined values: height presets, in my case. I called my Mode "height" and the possible values are "preset one", "preset two", and "preset three". This allows me to control my desk by saying "Hey Google, set my desk height to preset one," and Google will send the appropriate execute request to my system. You can read more about Google device traits &lt;a href="https://developers.google.com/assistant/smarthome/traits"&gt;here&lt;/a&gt;.&lt;/p&gt;
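
&lt;p&gt;In the SYNC response, the Modes trait is declared through an availableModes attribute on the device. Below is a trimmed sketch of what a desk device might report; the id, type, and display names are illustrative, while the trait and attribute keys follow Google’s Modes trait documentation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch of a device object for a SYNC response. The id, type, and
// names are placeholders; the Modes attribute shape follows
// action.devices.traits.Modes.
const deskDevice = {
  id: "desk-1",
  type: "action.devices.types.SWITCH", // placeholder device type
  traits: ["action.devices.traits.Modes"],
  name: { name: "Office desk" },
  willReportState: false,
  attributes: {
    availableModes: [{
      name: "height",
      name_values: [{ name_synonym: ["height"], lang: "en" }],
      settings: ["preset one", "preset two", "preset three"].map(function (p) {
        return {
          setting_name: p,
          setting_values: [{ setting_synonym: [p], lang: "en" }],
        };
      }),
      ordered: true,
    }],
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;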

&lt;p&gt;&lt;strong&gt;Off To Production&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, Google Smart Home and my computer were communicating. Up until this point, I had been using &lt;a href="https://ngrok.com/"&gt;ngrok&lt;/a&gt; to run my Express server locally. Now that I finally had my server working well enough, it was time to make it accessible to Google at all times. This meant using Heroku to host my app. &lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt; is a PaaS provider that makes it easy to deploy and manage applications.&lt;/p&gt;

&lt;p&gt;One of the major benefits of using Heroku is the add-ons. Heroku add-ons made it super simple to add a CloudMQTT and Postgres server to my application. Another benefit of using Heroku is how simple it is to build and deploy. Heroku automatically detects what kind of code you are pushing and builds and deploys it for you; you can find more information by reading about &lt;a href="https://devcenter.heroku.com/articles/buildpacks"&gt;Heroku Buildpacks&lt;/a&gt;. In my case, whenever I push code to the Heroku git remote, it installs all of my packages, strips out any development dependencies, and deploys my application, all from a simple "git push heroku main".&lt;/p&gt;

&lt;p&gt;With a few clicks I had CloudMQTT and Postgres available to my app, and I only needed a few environment variables to integrate those services with my application. Everything I’ve done on Heroku so far has been free. However, CloudMQTT is a third-party add-on and costs $5/month.&lt;/p&gt;

&lt;p&gt;I believe the need for Postgres is self explanatory but CloudMQTT deserves a little more explanation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From The Internet to a Private Network, The Hard Way&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are a few ways to expose an application, or in my case an IoT device, to the internet. The first is to open a port in my home network to expose the device to the internet. In this case, my Heroku Express app would post a request to my device by using a public IP address. This would require me to have a public static IP address as well as a static IP address for my ESP32. My ESP32 would also have to act as an HTTP server and be listening all the time for instructions from Heroku. This is a lot of overhead for a device that will only receive instructions a few times a day.&lt;/p&gt;

&lt;p&gt;The second way is called "hole-punching". This is how you can use a third-party external server to expose a device to the internet without having to use port-forwarding. Your device basically connects to the server, which establishes an open port. Then, the other service can connect directly to your internal device by retrieving the open port from the external server. Finally, it connects directly to the device using that open port. (This may or may not be entirely correct since I only read part of a paper about it.)&lt;/p&gt;

&lt;p&gt;A lot goes into "hole-punching" and I don’t fully understand it. However, if you are curious, there are some interesting articles that explain it more. These are the two articles that I read to better understand "hole-punching": &lt;a href="https://en.wikipedia.org/wiki/Hole_punching_(networking)"&gt;Wikipedia&lt;/a&gt; and a &lt;a href="https://bford.info/pub/net/p2pnat/"&gt;paper from MIT written by Bryan Ford et al&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From The Internet to a Private Network, The IoT Way&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wasn’t very happy with either of those solutions. I’ve added many smart devices to my home and I’ve never had to open a port on my router, so port forwarding was out. Hole-punching seemed far more difficult than what I was looking to implement and is better suited to P2P networks. Through further research I discovered MQTT and found that it is the de facto protocol for IoT. It has some benefits like low power usage, configurable resiliency, and no need for port forwarding. MQTT is a publish/subscribe protocol, which means that the desk subscribes to a specific topic and the Heroku app publishes to that same topic.&lt;/p&gt;

&lt;p&gt;So Google communicates with Heroku, and that request is parsed to determine the requested device and its new state/mode. Then, the Heroku app publishes a message to the CloudMQTT server, deployed as an add-on on Heroku, telling the desk to go to a new preset. Finally, the desk, subscribed to that topic, receives the message the Heroku app published and adjusts its height to match the request! You’ll notice in the google_actions_app file that there is an updateHeight function, which publishes a single number to an MQTT topic for a specific device ID. This is how the Heroku app asks the desk to move over MQTT.&lt;/p&gt;
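
&lt;p&gt;The topic and payload convention from the updateHeight function can be captured as a couple of pure helpers (a sketch in Node for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// The MQTT topic and payload convention used between the Heroku app
// and the desk, extracted as pure helpers.
const PRESETS = { "preset one": "1", "preset two": "2", "preset three": "3" };

function commandTopic(deviceId) {
  return `/esp32_iot_desk/${deviceId}/command`;
}

function presetPayload(presetName) {
  return PRESETS[presetName];
}

// Asking device "abc123" for "preset two" publishes "2"
// on "/esp32_iot_desk/abc123/command".
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;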

&lt;p&gt;The final step is to receive the message on the ESP32 and move the desk. I’ll show some highlights of the desk code below but the full source code is &lt;a href="https://github.com/bloveless/esp32-iot-desk-mqtt/blob/master/src/main.cpp"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void setup()
{
 Serial.begin(115200);
...
 tfminis.begin(&amp;amp;Serial2);
 tfminis.setFrameRate(0);

...

 state_machine = new StateMachine();
 state_machine-&amp;gt;begin(*t_desk_height, UP_PWM_CHANNEL, DOWN_PWM_CHANNEL);

 setup_wifi();

 client.setServer(MQTT_SERVER_DOMAIN, MQTT_SERVER_PORT);
 client.setCallback(callback);
...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the desk is booted up we first begin communication between the TFMini-S, which is a distance sensor, to get the current desk height. We then set up a state machine for the actual desk movement. The state machine receives commands through MQTT and then is responsible for aligning the user’s request with the actual height of the desk read from the distance sensor. Finally, we connect to the WiFi network, connect to the MQTT server, and configure the callback for any data we receive on the MQTT topic we are subscribed to. I’ll show the callback function next.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void callback(char *topic, byte *message, unsigned int length)
{
 ...

 String messageTemp;

 for (int i = 0; i &amp;lt; length; i++)
 {
   messageTemp += (char)message[i];
 }

 if (messageTemp == "1") {
   state_machine-&amp;gt;requestStateChange(ADJUST_TO_PRESET_1_HEIGHT_STATE);
 }

 if (messageTemp == "2") {
   state_machine-&amp;gt;requestStateChange(ADJUST_TO_PRESET_2_HEIGHT_STATE);
 }

 if (messageTemp == "3") {
   state_machine-&amp;gt;requestStateChange(ADJUST_TO_PRESET_3_HEIGHT_STATE);
 }
...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The state machine registers a state change received on the MQTT topic. Then, the state machine in the main loop processes the new state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void loop()
{
 if (!client.connected())
 {
   reconnect();
 }
 client.loop();
 state_machine-&amp;gt;processCurrentState();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main loop does a few things: First, it reconnects to the MQTT server if it wasn’t already connected. Then, it processes any data it received on the subscribed MQTT topic. Finally, it works to put the desk into the correct location according to the state requested over the MQTT topic.&lt;/p&gt;

&lt;p&gt;There you have it! My desk is completely voice-controlled and communicating with Google to receive commands!&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/auTg7ZkHjBM"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Notes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The last endpoint that I didn’t discuss is the /healthz endpoint. This is because Google expects you to respond rather quickly and booting up a Heroku application upon every request wasn’t going to work for me. I set up a ping service to ping the /healthz endpoint every minute to keep the service alive and ready to respond on Heroku. If you plan on doing something like this, then remember that this will use up all of your available free dyno hours. This is fine for now since this is the only app I'm running on Heroku. Alternatively, for $7/month, you can upgrade to &lt;a href="https://www.heroku.com/pricing"&gt;Heroku’s Hobby plan&lt;/a&gt;, which keeps the app "alive".&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Building an IoT device involves a lot of overhead just to get started. I constructed the actual hardware, built the control circuitry, set up an MQTT server, built an Express OAuth2 server, and learned to interface with Google Smart Home through Actions. That initial overhead was substantial, but I feel like I accomplished a lot as well! Not to mention that the MQTT server, Express OAuth2 app server, and Google Smart Home Actions are all reusable. I’m really interested in the Smart Home space, and I may try to expand my IoT device repertoire to include some sensors that monitor various things around my house and report back over MQTT. Soil moisture sensors, temperature sensors, and light sensors would be a lot of fun to monitor and analyze.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where to Go Next
&lt;/h1&gt;

&lt;p&gt;The way I’m measuring the height of the desk right now is flaky at best. I’m using an IR distance sensor called a TFMini-S, which mostly works, but I’ve noticed that the measured height of the desk drifts a little throughout the day as the ambient lighting of the room changes. I’ve ordered a rotary encoder so I can count the number of turns the rod running through the desk actually makes, which should give me much more accurate movements at any time of day. I also have access to a server hosted in a basement somewhere, where I might run my own Mosquitto MQTT server, &lt;a href="https://nodered.org/"&gt;Node-RED&lt;/a&gt; server, and Express OAuth2 app if I’m feeling up to hosting something myself. Finally, right now all the electronics are just out in the open on my desk. I plan to enclose them so everything looks nice and tidy!&lt;/p&gt;

&lt;p&gt;Thanks for reading! Here are all the links from above for easy reference.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sensorsone.com/force-and-length-to-torque-calculator/"&gt;Torque Calculator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ebay.com/itm/90-Right-Angle-Gearbox-Speed-Reducer-Transmission-Ratio-1-1-Shaft-8mm-DIY-Part/383629813206?ssPageName=STRK%3AMEBIDX%3AIT&amp;amp;var=652041748087&amp;amp;_trksid=p2060353.m2749.l2649"&gt;90 degree right angle gear box&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://apps.apple.com/us/app/ble-terminal-bluetooth-tools/id1511543453"&gt;BLE Terminal&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://platform.io/"&gt;Platform.IO&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bloveless/tfmini-s"&gt;TFMini-S Arduino Driver&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.google.com/assistant/smarthome/develop/create"&gt;Google Smart Home Actions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://oauth2-server.readthedocs.io/en/latest/"&gt;Node OAuth2 Server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/oauthjs/express-oauth-server"&gt;Express OAuth2 Server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bloveless/esp32-iot-desk-server/blob/main/model.js"&gt;ESP32 IoT Desk Server model.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bloveless/esp32-iot-desk-server/blob/main/index.js"&gt;ESP32 IoT Desk Server index.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bloveless/esp32-iot-desk-server/blob/main/google_actions.js"&gt;ESP32 IoT Desk Server google_actions.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.google.com/assistant/smarthome/traits"&gt;Google Smart Home Device Traits&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ngrok.com/"&gt;NGROK&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/bloveless/esp32-iot-desk-mqtt/blob/master/src/main.cpp"&gt;ESP32 IoT Desk Firmware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nodered.org/"&gt;Node-RED&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.heroku.com/pricing"&gt;Heroku Hobby Plan&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://devcenter.heroku.com/articles/buildpacks"&gt;Heroku Buildpacks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Hole_punching_(networking)"&gt;Wikipedia Hole Punching&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pdos.csail.mit.edu/papers/p2pnat.pdf"&gt;MIT Paper on Hole Punching by Bryan Ford et al.&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>heroku</category>
      <category>iot</category>
      <category>esp32</category>
      <category>smarthome</category>
    </item>
    <item>
      <title>Adding IoT to my home office desk: Part 1</title>
      <dc:creator>Brennon Loveless</dc:creator>
      <pubDate>Thu, 19 Nov 2020 21:55:36 +0000</pubDate>
      <link>https://dev.to/bloveless/adding-iot-to-my-home-office-desk-part-1-28nc</link>
      <guid>https://dev.to/bloveless/adding-iot-to-my-home-office-desk-part-1-28nc</guid>
      <description>&lt;h1&gt;Introduction&lt;/h1&gt;

&lt;p&gt;In this article, I will show you how I converted my manual hand-crank desk into an automated, IoT-connected desk. I’ll talk about how to size and pick motors, and how to connect your custom IoT devices to Google using Heroku as a public interface.&lt;/p&gt;

&lt;p&gt;In short, there are two sides to this tech project: the first is getting from Google to Heroku using voice commands, and the second is getting from Heroku to the desk using MQTT. MQTT is the de facto protocol of IoT, and I’ll explain why it’s a good fit for IoT as well as some hurdles it will help you overcome.&lt;/p&gt;

&lt;p&gt;First and foremost, I’m doing this just for fun! I’m completely open to suggestions and I’m more than happy to learn something new from you, so feel free to leave me any suggestions. Hopefully you’ll find something entertaining in this article that motivates you to get out there and build something!&lt;/p&gt;

&lt;p&gt;With that being said, let’s get started!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3sTgGbN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/48efgyxo0ls5k3965i2r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3sTgGbN9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/48efgyxo0ls5k3965i2r.jpg" alt="The original hand crank for the desk"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The original hand crank for the desk&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;Hardware&lt;/h1&gt;

&lt;p&gt;The first, and arguably the most difficult, part was modifying the desk. In its past life, the desk had a removable hand crank that sat at the edge of the desk. Initially I thought about attaching something to the hand crank port so that the desk would remain unaltered, and I purchased various gears to figure out how to attach a motor that way, but to no avail. Then I had an idea: there is a rod running the length of the desk that connects the two legs so they can be raised and lowered at the same time. If I fastened a gear around that rod, I could use a belt to connect the rod to a motor, and I would still be able to motorize the desk without altering it all that much.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Torque Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I ordered the necessary gears and belt, and then searched Amazon for “High Torque Motor”. Lo and behold, I found a multitude of motors that matched my specific needs — or so I thought. I bought a small “high torque” motor and waited nearly a month for the gears to ship from China. I was so excited when they arrived! I couldn’t wait for the weekend to put it all together and finally have my motorized desk.&lt;/p&gt;

&lt;p&gt;Things did not go according to plan. I spent the day cutting a hole in a metal shield on the desk to run the belt through. At the time, I only had manual tools, so it took longer than I’m willing to admit. As it got closer to the end of the day, I finally finished putting everything together and was ready to try out the desk.&lt;/p&gt;

&lt;p&gt;I plugged in the motor, turned the voltage up on my bench power supply, and… nothing happened. A few moments later, the motor started spinning and grinding the teeth off the belt I had purchased. I learned two important lessons from this: first, the belt was obviously not up to the challenge, and a “high torque” motor doesn’t mean “I can lift anything in the world”. Second, look at how small that motor is compared to my fingers. It’s tiny!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NhYtyDkZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/her51fc9in3z9bxs55e7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NhYtyDkZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/her51fc9in3z9bxs55e7.jpg" alt="The photo on the left is the motor and the belt. Top right is the gear attached to the desk (you’ll see later more of what is going on here). Bottom right is the motor in position on the desk."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The photo on the left is the motor and the belt. Top right is the gear attached to the desk (you’ll see later more of what is going on here). Bottom right is the motor in position on the desk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;An appropriate motor&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I needed to do some math to calculate how much torque was required to lift the desk so I could select the right motor. Off to Google for this one!&lt;/p&gt;

&lt;p&gt;I was surprised to discover how simple it was to calculate the torque necessary.&lt;/p&gt;

&lt;p&gt;T = F * r&lt;/p&gt;

&lt;p&gt;That is, torque equals force multiplied by the length of the lever arm.&lt;/p&gt;

&lt;p&gt;Well, I had a lever arm (the hand crank); now I just needed a way to measure the force required to easily turn it. I loaded up my desk, tied a milk jug to the handle, and gradually added water until the lever arm spun. Then I rotated the handle to the top with the filled milk jug and made sure that the weight could easily turn the handle. I found that the lever arm was 11cm and the force required was 4 lbs. I used &lt;a href="https://www.sensorsone.com/force-and-length-to-torque-calculator/"&gt;this calculator&lt;/a&gt; to figure out that I needed a motor capable of providing at least 19.95 kg·cm of torque. Let the shopping begin!&lt;/p&gt;
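&lt;p&gt;As a sanity check, the same calculation can be sketched in a few lines of Python (the only conversion needed is pounds to kilograms, since hobby motors are rated in kg·cm):&lt;/p&gt;

```python
def torque_kg_cm(force_lbs, lever_arm_cm):
    """Torque (kg·cm) = force x lever arm, with pounds converted to kilograms."""
    KG_PER_LB = 0.453592  # standard conversion factor
    return force_lbs * KG_PER_LB * lever_arm_cm

# 4 lbs on the 11 cm crank arm works out to roughly 19.96 kg·cm,
# which is why a 20 kg·cm motor is only just barely sufficient.
print(round(torque_kg_cm(4, 11), 2))  # 19.96
```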

&lt;p&gt;This time, I decided I was willing to make irreversible changes to the desk. I knew the rod that ran through the middle of the desk was hollow, so I searched for a double-shaft motor, planning to cut the rod into pieces and reassemble it with the motor in the middle. I bought two 20 kg·cm motors to ensure I had more than enough torque to lift the desk.&lt;/p&gt;

&lt;p&gt;Another beautiful Saturday rolled around and I hacked my desk to pieces. I split the rod in four places and filed down the shafts of the motors so they could be used to connect the rod back together again. I cut more holes in the metal shield for the two new motors to fit in. There was no belt this time, and the motors connected directly to the rod so these holes were quite large. As the evening approached I put all the pieces back together and loaded my desk with my office equipment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Naj7IJyl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lq68i98aopymkzhd3u1b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Naj7IJyl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lq68i98aopymkzhd3u1b.jpg" alt="The top two photos are the motors completely installed in the desk. The bottom photo is the rod that runs the length of the desk with the motors integrated with it."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The top two photos are the motors completely installed in the desk. The bottom photo is the rod that runs the length of the desk with the motors integrated with it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I wired up the motors and connected them to my bench power supply. Then, I turned it on and… THE DESK MOVED! I was more confident this time since I sized the motors appropriately. I had also doubled up on the motors just to be sure, but seeing it move was an awesome feeling.&lt;/p&gt;

&lt;p&gt;Let me tell you though, the desk was slow. Like really slow. I took a video to show one of my friends that it worked and had to use the time-lapse feature of my iPhone so he didn’t have to watch a five-minute video of my desk going from a sitting to a standing position. It was slow enough that I could start the desk movement, go grab a cup of coffee, come back, and still have to wait a minute before it reached the standing position.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Bc_MixJuHwI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RPM Matters, Final Version&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finally, what I learned boils down to two things: RPM and torque. I had nailed down the torque; now I just needed to find a motor with a high enough RPM while maintaining the torque necessary to lift the desk.&lt;/p&gt;

&lt;p&gt;This wasn’t too hard to do, but I couldn’t find a double shaft motor like I had used previously so I had to find a &lt;a href="https://www.ebay.com/itm/90-Right-Angle-Gearbox-Speed-Reducer-Transmission-Ratio-1-1-Shaft-8mm-DIY-Part/383629813206?ssPageName=STRK%3AMEBIDX%3AIT&amp;amp;var=652041748087&amp;amp;_trksid=p2060353.m2749.l2649"&gt;90 degree 1:1 gear box&lt;/a&gt; that could convert the motor into a double shaft motor.&lt;/p&gt;

&lt;p&gt;Long story short, after another month of waiting for the perfect gear box to show up from China, and another Saturday, I had the desk moving at the speed I wanted!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s3Odh0zj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4njllqf8h34upyaa3q0d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s3Odh0zj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4njllqf8h34upyaa3q0d.jpg" alt="My latest high torque motor on the left. Installed on the right."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;My latest high torque motor on the left. Installed on the right.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;A little more hardware and a lot more software&lt;/h1&gt;

&lt;p&gt;I wasn’t satisfied with having a huge bench power supply on my desk at all times just to adjust its height. I was also swapping the power leads of the supply to reverse the direction of the desk. Not a big deal, but the goal of the project was to control the desk with up and down buttons, as well as several presets that would tell the desk to move to my preferred heights.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bluetooth&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My first foray was to add Bluetooth to the desk. After all, it seems that nearly every device has Bluetooth these days. The phone also seems like a great interface for controlling something like this.&lt;/p&gt;

&lt;p&gt;I purchased a motor controller board, a Nordic NRF52 dev board (I eventually switched the Bluetooth board to an ESP32), and some distance-measuring sensors, and began mucking around with some control firmware.&lt;/p&gt;

&lt;p&gt;I’ll post links at the end of the article for all the software and firmware that I wrote for this project. Feel free to comment on that code as well since I’m in no way a firmware engineer and would love some pointers!&lt;/p&gt;

&lt;p&gt;As a quick introduction, the ESP32 firmware is written in C++ using Arduino libraries to communicate with the BLE Terminal app on the phone. Setting up and configuring BLE is pretty involved. First, you create a characteristic for each value you’d like to control over BLE. Think of a characteristic as a single variable in your code: BLE wraps that variable in a set of handlers for retrieving and setting its value.&lt;/p&gt;

&lt;p&gt;Then, your characteristics are packaged up in a service with a custom UUID that you provide to make your service unique and identifiable from the app. Finally, you must add this service to the advertisement payload in order for your service to be discoverable by other devices. When a remote device connects to your service and sends data via a characteristic, the desk will recognize that a user wants the desk to adjust to another preset height and will begin its work.&lt;/p&gt;

&lt;p&gt;In order to adjust the height, the desk has a TFMini-S LiDAR sensor mounted to the bottom to determine the current height. This sensor is funny because it is called LiDAR but doesn’t actually use a laser: it uses an LED and optics to determine the time-of-flight of the IR light. The sensor determines the current height of the desk; the control board then computes the difference between the current height and the requested height and activates the motor to spin in the necessary direction. Some of the code highlights are below, but you can read the entire file &lt;a href="https://github.com/bloveless/esp32-iot-desk-ble/blob/master/src/main.cpp"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void setup()
{
   Serial.begin(115200);
   Serial2.begin(TFMINIS_BAUDRATE);
   EEPROM.begin(3); // used for saving the height presets between reboots

   tfminis.begin(&amp;amp;Serial2);
   tfminis.setFrameRate(0);

   ledcSetup(UP_PWM_CHANNEL, PWM_FREQUENCY, PWM_RESOLUTION);
   ledcAttachPin(UP_PWM_PIN, UP_PWM_CHANNEL);

   ledcSetup(DOWN_PWM_CHANNEL, PWM_FREQUENCY, PWM_RESOLUTION);
   ledcAttachPin(DOWN_PWM_PIN, DOWN_PWM_CHANNEL);

   state_machine = new StateMachine();
   state_machine-&amp;gt;begin(*t_desk_height, UP_PWM_CHANNEL, DOWN_PWM_CHANNEL);

   BLEDevice::init("ESP32_Desk");
  ...

   BLEServer *p_server = BLEDevice::createServer();
   BLEService *p_service = p_server-&amp;gt;createService(BLEUUID(SERVICE_UUID), 20);

   /* ------------------- SET HEIGHT TO PRESET CHARACTERISTIC -------------------------------------- */
   BLECharacteristic *p_set_height_to_preset_characteristic = p_service-&amp;gt;createCharacteristic(...);
   p_set_height_to_preset_characteristic-&amp;gt;setCallbacks(new SetHeightToPresetCallbacks());
   /* ------------------- MOVE DESK UP CHARACTERISTIC ---------------------------------------------- */
   BLECharacteristic *p_move_desk_up_characteristic = p_service-&amp;gt;createCharacteristic(...);
   p_move_desk_up_characteristic-&amp;gt;setCallbacks(new MoveDeskUpCallbacks());
   /* ------------------- MOVE DESK DOWN CHARACTERISTIC -------------------------------------------- */
   BLECharacteristic *p_move_desk_down_characteristic = p_service-&amp;gt;createCharacteristic(...);
   p_move_desk_down_characteristic-&amp;gt;setCallbacks(new MoveDeskDownCallbacks());
   /* ------------------- GET/SET HEIGHT 1 CHARACTERISTIC ------------------------------------------ */
   BLECharacteristic *p_get_height_1_characteristic = p_service-&amp;gt;createCharacteristic(...);
   p_get_height_1_characteristic-&amp;gt;setValue(state_machine-&amp;gt;getHeightPreset1(), 1);
   BLECharacteristic *p_save_current_height_as_height_1_characteristic = p_service-&amp;gt;createCharacteristic(...);
   p_save_current_height_as_height_1_characteristic-&amp;gt;setCallbacks(new SaveCurrentHeightAsHeight1Callbacks());
   /* ------------------- GET/SET HEIGHT 2 CHARACTERISTIC ------------------------------------------ */
  ...
   /* ------------------- GET/SET HEIGHT 3 CHARACTERISTIC ------------------------------------------ */
  ...
   /* ------------------- END CHARACTERISTIC DEFINITIONS ------------------------------------------ */
   p_service-&amp;gt;start();

   BLEAdvertising *p_advertising = p_server-&amp;gt;getAdvertising();
   p_advertising-&amp;gt;start();

   xTaskCreate(
       updateDeskHeight,     // Function that should be called
       "Update Desk Height", // Name of the task (for debugging)
       1024,                 // Stack size
       NULL,                 // Parameter to pass
       5,                    // Task priority
       NULL                  // Task handle
   );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a lot more going on in the main file but this code has enough context for us to see what is happening. You’ll notice that we are creating and configuring all the BLE callbacks for all the characteristics, including moving the desk manually, setting/retrieving the preset values, and most importantly, adjusting the desk to a specific preset.&lt;/p&gt;

&lt;p&gt;The image below shows me interacting with that characteristic to adjust the desk height. The last piece of the puzzle is the state machine, which knows both the current height of the desk and the height requested by the user, and works to bring the two into alignment.&lt;/p&gt;
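&lt;p&gt;The real state machine lives in the firmware linked above; as a rough, hypothetical illustration of the control idea (the function name, deadband value, and return labels here are made up for the sketch, not taken from the actual code):&lt;/p&gt;

```python
DEADBAND_CM = 1.0  # hypothetical tolerance so the motors don't hunt around the target

def motor_action(current_height_cm, target_height_cm):
    """Decide which PWM channel to drive based on the height error."""
    diff = target_height_cm - current_height_cm
    if abs(diff) > DEADBAND_CM:
        if diff > 0:
            return "up"    # energize the UP PWM channel
        return "down"      # energize the DOWN PWM channel
    return "stop"          # close enough: stop both channels

print(motor_action(70.0, 110.0))  # sitting at 70 cm, asked for 110 cm: "up"
```

The firmware runs logic like this repeatedly (via the FreeRTOS task created in `setup()`), re-reading the sensor each pass until the desk settles at the requested height.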

&lt;p&gt;So I finally had the desk doing everything I wanted. I could save heights into presets and recall those presets to move the desk into my favorite positions. I was using a &lt;a href="https://apps.apple.com/us/app/ble-terminal-bluetooth-tools/id1511543453"&gt;BLE Terminal&lt;/a&gt; app on my phone and computer to send raw messages to the desk to control its position. This worked, but I knew the battle with BLE was just beginning: to have a seamless interface with my desk, I would also need to learn how to write an iOS app so I didn’t have to remember the hex codes for saving a preset or recalling a position.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k5_ONKkN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mr1dzuffjqgo4gp2di07.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k5_ONKkN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mr1dzuffjqgo4gp2di07.jpg" alt="The raw bluetooth interface… all that was left at this point was to learn how to program an iOS app…"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The raw bluetooth interface… all that was left at this point was to learn how to program an iOS app…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then my wife said something that would change everything… "What if you could control it with your voice?"&lt;/p&gt;

&lt;p&gt;In addition to being way cooler, and adding to our growing list of Google Assistant devices in our house, I wouldn’t have to write an iOS app to control it. I also wouldn’t have to take my phone out of my pocket to set the desk height. The little wins!&lt;/p&gt;

&lt;p&gt;This article is getting a little long, so I’ll cover adding Google Smart Home integration in a second article &lt;a href="https://dev.to/bloveless/adding-iot-to-my-home-office-desk-part-2-2fcd"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>heroku</category>
      <category>iot</category>
      <category>smarthome</category>
      <category>esp32</category>
    </item>
  </channel>
</rss>
