<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chris Chinchilla</title>
    <description>The latest articles on DEV Community by Chris Chinchilla (@chrischinchilla).</description>
    <link>https://dev.to/chrischinchilla</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F40784%2Fb95f098f-185d-4ae2-beb7-b626909276b8.jpeg</url>
      <title>DEV Community: Chris Chinchilla</title>
      <link>https://dev.to/chrischinchilla</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chrischinchilla"/>
    <language>en</language>
    <item>
      <title>How I set up a RaspberryPi to share my files and media</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Wed, 24 Nov 2021 16:58:24 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/how-i-set-up-a-raspberrypi-to-share-my-files-and-media-4mif</link>
      <guid>https://dev.to/chrischinchilla/how-i-set-up-a-raspberrypi-to-share-my-files-and-media-4mif</guid>
      <description>&lt;p&gt;Over the past months, I've been slowly assembling a suite of self-hosted tools and services on a shiny new RaspberryPi 400, and finally, I think I am finished and ready to write up my experiences. At the least, it will help remind me what I have, but I hope it might also help others taking similar journeys.&lt;/p&gt;

&lt;h2&gt;Disclaimer&lt;/h2&gt;

&lt;p&gt;Blogs take time to write, and I hope this helps you. Some of the product links here are affiliate links. If you don't like affiliate links but would still like to say thanks, subscribe to my &lt;a href="https://www.youtube.com/channel/UCgnrx8qi4qhmN6sBebdDrmg"&gt;YouTube&lt;/a&gt; or &lt;a href="https://www.twitch.tv/chrischinchilla"&gt;Twitch&lt;/a&gt; channel, or find &lt;a href="https://chrischinchilla.com/support/"&gt;other ways to support me on my website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Hardware - RaspberryPi 400&lt;/h2&gt;

&lt;p&gt;I chose a RaspberryPi 400 because of the built-in keyboard. I remember my experiments with a more traditional RaspberryPi in the past, when the short-term need for a keyboard was annoying. Though I admit I have probably used the keyboard (and the mouse included in the package) for less than 5 minutes, am now quite happy with VNC and SSH connections for management, and am considering swapping it for another RaspberryPi with a smaller footprint. Those concerns and changes of opinion aside, the 400 has been flawless: I've never had any memory or performance issues, and the only time I had any trouble was when a cat sitter accidentally unplugged it.&lt;/p&gt;

&lt;p&gt;I connected the 400 directly to my router via Ethernet, a &lt;a href="https://en.avm.de/products/fritzbox/fritzbox-6490-cable/"&gt;Fritz!Box 6490&lt;/a&gt;, which is feature-full and quite fantastic, but largely unavailable outside Europe. Plugged into the Pi is a &lt;a href="https://www.amazon.com/gp/product/B07CRGSR16/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B07CRGSR16&amp;amp;linkId=1d4e8f90af6e656f102d088ed29b34ba"&gt;4TB Seagate Backup Plus&lt;/a&gt; for archive data I don't access that much, and &lt;a href="https://www.amazon.com/gp/product/B015CH1PJU/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B015CH1PJU&amp;amp;linkId=bb51abb12c16b056c820448c4dd1f6fc"&gt;a SanDisk Flash drive&lt;/a&gt; for regularly accessed data, i.e., Nextcloud data (more on that later).&lt;/p&gt;

&lt;h2&gt;Basic Raspbian setup&lt;/h2&gt;

&lt;p&gt;I didn't change much from the default &lt;a href="http://www.raspbian.org/"&gt;Raspbian&lt;/a&gt; settings, but here are a few small things I changed after installation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uninstalled packages I would never need, such as LibreOffice, etc.&lt;/li&gt;
&lt;li&gt;Enabled SSH and VNC, and disabled just about everything else I wasn't going to need, such as WiFi, Bluetooth, and audio.&lt;/li&gt;
&lt;li&gt;Installed &lt;a href="https://cockpit-project.org"&gt;Cockpit&lt;/a&gt;, a convenient browser-based interface for managing services, logs, updates, and more on a Linux machine.&lt;/li&gt;
&lt;/ul&gt;
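
&lt;p&gt;As a rough sketch, the changes above boil down to commands like these (the package names and the non-interactive &lt;code&gt;raspi-config&lt;/code&gt; flags are from memory, so double-check them on your version; Cockpit may need backports on older Raspbian releases):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Remove packages I never need
sudo apt purge libreoffice*
sudo apt autoremove
# Enable SSH and VNC without the GUI (0 means enable)
sudo raspi-config nonint do_ssh 0
sudo raspi-config nonint do_vnc 0
# Install Cockpit, then browse to port 9090 on the Pi's address
sudo apt install cockpit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;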

&lt;h2&gt;Allowing access across the internet&lt;/h2&gt;

&lt;p&gt;To access the various services I step through in this post from outside your network, you need to expose ports to the internet. How you do this depends a lot on your setup, and you should be aware of the prerequisites and security implications of doing so.&lt;/p&gt;

&lt;p&gt;My Fritzbox lets me forward individual ports to devices and configure a public address (an unmemorable one, which I guess makes it more secure 🤷‍♂️). If you want, it also lets you set up an SSL certificate (though not in the most flexible way), a VPN, a Dynamic DNS service, and more. I can't really show you what I did without giving away a bit too much detail about my setup, but exposing the ports was all I needed to do. Your setup may be more or less complex if you have a different router or need to route traffic through another service.&lt;/p&gt;

&lt;h2&gt;Self-hosted cloud with Nextcloud&lt;/h2&gt;

&lt;p&gt;My initial intention with the RaspberryPi was to attempt to reduce my personal dependence on cloud-based file storage such as Dropbox or Google Drive. I'll sadly always need access to some of these for collaboration or some apps that only sync with them (&lt;a href="https://scrivener.tenderapp.com/help/kb/cloud-syncing/using-scrivener-with-cloud-sync-services"&gt;Looking at you Scrivener&lt;/a&gt;!), but I want to reduce it as much as possible.&lt;/p&gt;

&lt;p&gt;I can't remember how I discovered it, but I ended up using the wonderful &lt;a href="https://ownyourbits.com/nextcloudpi/"&gt;NextCloudPi from Own your bits&lt;/a&gt;. Finding the correct and/or best instructions to follow is a little confusing, and I feel like I chanced upon a random post somewhere that ended up being the easiest option. Maybe my memory fails me now, but if you want to take a more organised approach, &lt;a href="https://docs.nextcloudpi.com/en/how-to-install-nextcloudpi/"&gt;follow the official documentation&lt;/a&gt;. I don't recall having any major issues; the installer took a little time to download, install, and set up all the dependencies that Nextcloud needs. Once it's installed, access the configuration interface via one of &lt;a href="https://docs.nextcloudpi.com/en/how-to-access-nextcloudpi/"&gt;the methods mentioned in the documentation&lt;/a&gt;; which option suits you best will vary.&lt;/p&gt;

&lt;p&gt;NextcloudPi handled a lot of the configuration for me, but there are a couple of specific settings worth highlighting or changing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I set up the data directory for Nextcloud on the USB drive, formatted as BTRFS.&lt;/li&gt;
&lt;li&gt;I automount all USB drives. I am not sure if I need this enabled, but just in case.&lt;/li&gt;
&lt;li&gt;I enabled the Web UI. I like UIs 😁.&lt;/li&gt;
&lt;li&gt;I forced HTTPS. I think this is essential when making the instance public.&lt;/li&gt;
&lt;li&gt;I activated "pretty URLs", &lt;em&gt;index.php&lt;/em&gt; in URLs is so early 2000s.&lt;/li&gt;
&lt;li&gt;I have the RaspberryPi on a static IP address, so I enabled that in NextcloudPi too.&lt;/li&gt;
&lt;li&gt;I have most of the autoupdate features enabled.&lt;/li&gt;
&lt;/ul&gt;
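
&lt;p&gt;I changed all of these through the NextcloudPi configuration interface. If you prefer a terminal over a browser, the same options are available through NextcloudPi's TUI (the command and default panel port below are from NextcloudPi's docs, so adjust for your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Terminal-based configuration tool
sudo ncp-config
# Or use the web panel, by default on port 4443:
# https://nextcloudpi.local:4443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;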

&lt;h2&gt;SSL Certificates&lt;/h2&gt;

&lt;p&gt;So that I can access the domain over HTTPS across the internet, I initially used the configuration option provided by NextcloudPi. But as I also needed an SSL certificate for other subdomains on the domain provided by the Fritzbox, I ended up using &lt;a href="https://certbot.eff.org/instructions"&gt;Let's Encrypt with certbot&lt;/a&gt;, following the instructions in their documentation.&lt;/p&gt;

&lt;p&gt;If you only need SSL for Nextcloud, then the built-in NextcloudPi feature is probably enough for you.&lt;/p&gt;
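
&lt;p&gt;For reference, the certbot flow ends up being only a few commands. This is a sketch with a placeholder domain; NextcloudPi serves Nextcloud with Apache, so I've assumed the Apache plugin here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo apt install certbot python3-certbot-apache
# Request and install a certificate for the domain
sudo certbot --apache -d {YOUR_DOMAIN}
# Check that automatic renewal works
sudo certbot renew --dry-run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;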

&lt;h2&gt;File sharing over the internet&lt;/h2&gt;

&lt;p&gt;I wanted to share two other file locations across my internal network as well as the internet. The first is a large archive of miscellaneous data from across my years of computer use; the second is a couple of large &lt;a href="https://calibre-ebook.com/"&gt;Calibre&lt;/a&gt; libraries that contain things like RPG books and comics, each several GBs in size.&lt;/p&gt;

&lt;p&gt;I wanted a solution that was relatively easy to access on a local network and elsewhere, preferably the same solution used the same way in both cases. This requirement ruled out &lt;a href="https://www.samba.org/"&gt;Samba&lt;/a&gt; shares, as Samba isn't designed for sharing across the internet. I looked at &lt;a href="https://en.wikipedia.org/wiki/Network_File_System_(protocol)"&gt;NFS&lt;/a&gt;, but encountered speed and reliability issues, and recent macOS support is poor, with the documented workarounds to enable version 4 seemingly no longer working.&lt;/p&gt;

&lt;p&gt;So far I am most happy with the curious &lt;a href="https://en.wikipedia.org/wiki/SSHFS"&gt;SSHFS&lt;/a&gt;, which lets you mount directories across networks as an extension to the SSH protocol. It needs no extra components on the server side, and I have found it performant and stable. On the client side, it needs some extra components, depending on your operating system. Unfortunately, on macOS this means installing the closed-source &lt;a href="https://osxfuse.github.io"&gt;macFUSE&lt;/a&gt;; it used to be open source, and &lt;a href="https://github.com/osxfuse/osxfuse/issues/616"&gt;the recent decision to close the source has attracted much discussion&lt;/a&gt;. This license change has also caused issues for projects that used to bundle macFUSE, including installing it with Homebrew.&lt;/p&gt;

&lt;p&gt;Anyway, I digress. These issues aside, SSHFS works really well, and I have had next to no issues with speed or stability. If you're happy mounting and unmounting drives from the command line, that is. If you are reading this post, I assume you are, but I have been pondering ways my not-so-happy-with-the-command-line partner could mount and unmount the drives. I haven't figured that out yet, so watch this space for updates. For more details on the commands to use, I have so far (bizarrely) found &lt;a href="https://igppwiki.ucsd.edu/display/igppwiki/Mounting+Network+Shares+with+SSHFS+on+macOS"&gt;this wiki page&lt;/a&gt; from &lt;a href="https://www.igpp.ucsd.edu"&gt;the Institute of Geophysics and Planetary Physics&lt;/a&gt; the most useful guide, but a quick search finds many more.&lt;/p&gt;

&lt;p&gt;So I don't need to keep entering a password, I set up password-less login using key pairs, &lt;a href="https://www.redhat.com/sysadmin/passwordless-ssh"&gt;following these instructions&lt;/a&gt;, which means I can now mount drives with commands such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sshfs pi@&lt;span class="o"&gt;{&lt;/span&gt;RASPBERRYPI_ADDRESS&lt;span class="o"&gt;}&lt;/span&gt;:/media/Data/Calibre /Volumes/Calibre &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;volname&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Calibre
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And unmount with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;umount /Volumes/Calibre
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
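
&lt;p&gt;The password-less login mentioned above boils down to generating a key pair on the client and copying the public key to the Pi; see the linked instructions for the details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# On the client machine, generate a key pair
ssh-keygen -t ed25519
# Copy the public key to the Pi (prompts for the password one last time)
ssh-copy-id pi@{RASPBERRYPI_ADDRESS}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;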



&lt;h2&gt;Sharing eBook libraries over the internet&lt;/h2&gt;

&lt;p&gt;Calibre is one of those tools with many issues; every few months you look for an alternative, can't find one, and carry on making do with its esoteric ways. Walled-garden ecosystem tools aside, it's basically the only option for eBook management, editing, serving, and more. I have cultivated multiple libraries containing gigabytes of eBooks, PDFs, CBZs, and mobi files over the years, and they're precious to me. I wanted to find a way to offload these files to network storage, give my partner the ability to access them in their copy of Calibre, and be able to access them in reader applications (or elsewhere) via the &lt;a href="https://opds.io"&gt;OPDS&lt;/a&gt; protocol.&lt;/p&gt;

&lt;p&gt;I weighed up a variety of Calibre- and OPDS-aligned tools but eventually settled on just getting the &lt;a href="https://manual.calibre-ebook.com/server.html"&gt;Calibre server&lt;/a&gt; component to work well for my aims. If you &lt;a href="https://packages.debian.org/buster/calibre"&gt;install Calibre from the Debian repositories&lt;/a&gt;, it's two versions behind, and &lt;a href="https://calibre-ebook.com/download_linux"&gt;the official method involves running a shell command&lt;/a&gt;. For reasons I can't remember now, I had issues following the official steps (the ARM processor, maybe?) and stuck with the outdated version, experiencing no compatibility issues so far. I'll upgrade it at a later date.&lt;/p&gt;

&lt;p&gt;Another caveat: once you start searching for options for sharing Calibre libraries, you generally encounter harsh warnings discouraging you from doing so, accompanied by workarounds to get it to kinda work. In the setup I describe, it's mostly just me accessing the libraries, and while I intend my partner to access them too, it's unlikely we will ever access them at the same time and cause the issues Calibre warns against. As always, your mileage may vary, and the workarounds may work fine for you if you're careful.&lt;/p&gt;

&lt;p&gt;Before I get into the solution, let me summarise the requirements, as that explains some of the decisions I made. When I refer to a "library", I mean the Calibre library file(s) as well as the actual folders and files, as this is where some of the complexity came into play.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One library is my "current reads". I wanted this hosted in my personal Nextcloud sync folder so I could sync it to my Mac, as well as have it available via OPDS for reader applications.&lt;/li&gt;
&lt;li&gt;Three other libraries host comics, RPG books, and an archive of books we've read but want to keep somewhere. These need to be accessible in Calibre (network-mounted storage is fine) and available via OPDS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'll start with the three archive libraries, as they are the simplest. They are spread across the various storage attached to the RaspberryPi, based on how frequently I access each (the USB drive for comics and RPGs, the HDD for the book archive). There are a bunch of CLI and GUI ways to add and manage libraries with Calibre, but because I am used to the GUI version, I created all the libraries there and pointed them at the various folder locations.&lt;/p&gt;

&lt;p&gt;The "current reads" library proved more problematic to get working as Nextcloud expects certain permissions on any storage it accesses. This includes those you set as "external storage", which is how I set up the folder so it could live outside of the main Nextcloud folder, and Calibre and OPDS could access it.&lt;/p&gt;

&lt;p&gt;I followed &lt;a href="https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage/local.html"&gt;the local external storage documentation&lt;/a&gt;, but then ran into issues with the permissions Nextcloud mentions, as Calibre was then unable to write metadata. I took the drastic step of giving full read and write permissions to make every application and service happy. I haven't faced any issues with this approach, but there are probably good reasons not to do it. I am not completely sure what other solution there might have been.&lt;/p&gt;

&lt;p&gt;For the Calibre server to resume after restarts, it's best to add it as a service. There are a lot of different instructions for this online, but I found the following steps best for up-to-date versions of Raspbian.&lt;/p&gt;

&lt;p&gt;Create a systemd service in &lt;em&gt;/etc/systemd/system/calibre-server.service&lt;/em&gt; with something like the following depending on what setup you want:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=calibre content server
After=network.target

[Service]
Type=simple
User={USER_NAME}
Group={GROUP_NAME}
ExecStart=calibre-server \
--port=8090 --enable-use-bonjour

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of this I based on the instructions in &lt;a href="https://manual.calibre-ebook.com/server.html#id13"&gt;the Calibre Server docs&lt;/a&gt;, including the steps for enabling and starting the service. I couldn't find a full list of arguments for the &lt;code&gt;calibre-server&lt;/code&gt; command, but using &lt;code&gt;--help&lt;/code&gt; gets you started. I didn't specify a library path, which means the server loads all the libraries I defined with the GUI. I changed the port to something that doesn't conflict with other services and enabled &lt;a href="https://developer.apple.com/bonjour/"&gt;bonjour&lt;/a&gt;. I am not completely sure if bonjour works, but I enabled it anyway. There's more I want to configure over time, &lt;a href="https://manual.calibre-ebook.com/server.html#id9"&gt;especially enabling user accounts&lt;/a&gt; and/or SSL support.&lt;/p&gt;
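
&lt;p&gt;The enabling and starting steps mentioned above follow the usual systemd pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Pick up the new unit file
sudo systemctl daemon-reload
# Start on boot, and start now
sudo systemctl enable calibre-server
sudo systemctl start calibre-server
# Check it's running
systemctl status calibre-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;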

&lt;p&gt;That took care of the OPDS share, so what about when I want to connect to the libraries on my Mac? For this, I mount the relevant drive hosted on the Pi with SSHFS and set up the library in my local copy of Calibre. When I am done, I unmount the drive. Depending on the file size, transfer speeds are reasonably fast, and I haven't experienced any disconnection issues so far.&lt;/p&gt;

&lt;h2&gt;Sharing movies and TV shows&lt;/h2&gt;

&lt;p&gt;I'm old, so I have an archive of ripped DVDs from when they were a thing. I don't watch them much, but it's nice to have them around, especially as a lot of movies and TV shows are hard to find (legally) on streaming services. In the past I had a Plex server set up, but it was a lot of overhead for something I didn't use much: it would frequently reset the library, or I would change network setups and the library wouldn't work anymore. I also found Plex's push for commercial versions overwhelming, and my previous experiments with XBMC found it overly complex for non-technical users.&lt;/p&gt;

&lt;p&gt;You can install VLC or other apps (for example, our TV has one built in) that can access media shared via &lt;a href="https://www.lifewire.com/what-is-dlna-1847363"&gt;DLNA&lt;/a&gt; or &lt;a href="https://nordvpn.com/blog/what-is-upnp/"&gt;UPnP&lt;/a&gt; on pretty much any device, so this time I kept it simple and used minidlna, &lt;a href="https://pimylifeup.com/raspberrypi-minidlna/"&gt;following these instructions&lt;/a&gt;. DLNA and UPnP (in particular) can open up security risks, but it's only for local network usage, so I was mostly OK with using it. If I want to watch something stored on the RaspberryPi when not at home, I instead connect to the relevant network drive, download the file, and watch it locally. I tested connecting to the DLNA share from Android, our TV, and macOS, and all worked well.&lt;/p&gt;
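
&lt;p&gt;For reference, the relevant parts of my &lt;em&gt;/etc/minidlna.conf&lt;/em&gt; look something like the following (the paths and name here are examples, not my actual values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Media folders; the V prefix limits scanning to video files
media_dir=V,/media/Data/Movies
media_dir=V,/media/Data/TV
# The name devices see on the network
friendly_name=RaspberryPi
# Rescan automatically when files change
inotify=yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After editing the file, restart the service with &lt;code&gt;sudo systemctl restart minidlna&lt;/code&gt;.&lt;/p&gt;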

&lt;h2&gt;Pi punch&lt;/h2&gt;

&lt;p&gt;And that's it for now. It's taken a lot of time and experimentation to get to this point, and now that I review the post, it doesn't seem like much at all, but that's frequently the way with technology. I've tested the setup(s) at home, in Berlin, and when travelling, and have so far had no issues, variable speeds aside. I am still figuring out the best ways to make everything accessible to my partner, and that's something for future posts. That said, I have been using Nextcloud to send files to clients and other external parties, and once I solved the SSL issue, that has also worked well.&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>nextcloud</category>
      <category>ebooks</category>
      <category>cloud</category>
    </item>
    <item>
      <title>My hardware and software for audio and video production</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Fri, 24 Sep 2021 09:27:07 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/my-hardware-and-software-for-audio-and-video-production-ff5</link>
      <guid>https://dev.to/chrischinchilla/my-hardware-and-software-for-audio-and-video-production-ff5</guid>
      <description>&lt;p&gt;I’ve been running podcasts for years, and while I worked on some video courses in the past, over the past year I have invested more time in my audio and video setup, primarily for live-streaming. After months and months of getting It to a point where I am “kind of” happy with it, I thought it was high time I documented it. Partly so others can learn from my setup, and partly so I can keep tabs on it myself.&lt;/p&gt;

&lt;h2&gt;Disclaimer&lt;/h2&gt;

&lt;p&gt;I spend a lot of time making videos, and some of the product links here are affiliate links, to cover at least some of the time and money I have invested. If you don’t like affiliate links, but would still like to say thanks, subscribe to my &lt;a href="https://www.youtube.com/channel/UCgnrx8qi4qhmN6sBebdDrmg"&gt;YouTube&lt;/a&gt; or &lt;a href="https://www.twitch.tv/chrischinchilla"&gt;Twitch&lt;/a&gt; channel, or find &lt;a href="https://chrischinchilla.com/support/"&gt;other ways to support me on my website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;A vs B setup&lt;/h2&gt;

&lt;p&gt;I am lucky enough to have my own small office/studio, and naturally this is where I do most of my work. But I also sometimes need to work at home (often interviews with people in the USA), when it’s too late to be in the office and I’d rather be at home. I have conducted interviews while travelling before, and while I’m doing less of that right now, I still like to have the option(s) available.&lt;/p&gt;

&lt;p&gt;I call these my “A” and “B” setups. Or the setup that largely stays fixed in the office (A), and the setup that I use at home, or travelling (B). There are some overlaps, but primarily with software.&lt;/p&gt;

&lt;h2&gt;Audio&lt;/h2&gt;

&lt;p&gt;The audio part is somewhat easier; it has been mostly the same hardware for a while, configured in different ways.&lt;/p&gt;

&lt;h3&gt;A setup&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZRfUthLX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_103429-800-8f875f4c3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZRfUthLX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_103429-800-8f875f4c3.jpeg" alt="A Blue Yeti mic"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---f97W-QY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_103456-800-1e1aa68c4.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---f97W-QY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_103456-800-1e1aa68c4.jpeg" alt="Boom arm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.amazon.com/gp/product/B00N1YPXW2/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B00N1YPXW2&amp;amp;linkId=b3be1d0a2ece335db24846409e9db7e0"&gt;A Blue Yeti&lt;/a&gt;, with a foam wind/pop filter. It’s mounted on &lt;a href="https://www.amazon.com/gp/product/B089SJGQBH/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B089SJGQBH&amp;amp;linkId=8fdc9b8317dee12c5c72ee9d4e310552"&gt;a boom arm&lt;/a&gt; most of the time, but I do also have a straight mic stand with a Justin (home brand of “&lt;a href="http://justmusic.de"&gt;Just Music&lt;/a&gt;”, a German music store) reflection filter for more precise pure audio recording to counter the echoing high ceilings of many Berlin rooms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5AM90504--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_103526-800-bde22f8d8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5AM90504--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_103526-800-bde22f8d8.jpeg" alt="Reflection filter"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;B setup&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NUNMJyYA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210830_071150-800-238cff380.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NUNMJyYA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210830_071150-800-238cff380.jpeg" alt="Exjoy camera mic combo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Depending on my mood I sometimes use an &lt;a href="https://www.amazon.com/gp/product/B074VF5ZLL/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B074VF5ZLL&amp;amp;linkId=112fc73d273299a778527842f760430a"&gt;iRig HD 2&lt;/a&gt; mounted on a pretty crumby boom arm if I am not using an external camera, or this odd phone holder (more on that later) &lt;a href="https://www.amazon.com/gp/product/B07Z7X4B24/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B07Z7X4B24&amp;amp;linkId=6af51f1c6f701531015420ce0cec3af6"&gt;mic stand combo thing from Exjoy&lt;/a&gt;. My other mic option is a &lt;a href="https://www.amazon.com/gp/product/B016V3663Y/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B016V3663Y&amp;amp;linkId=6e93e9024f32ccdbf50d1ce202126864"&gt;lav mic from iRig&lt;/a&gt;. It’s analogue, so needs a headphone jack, and curiously also needs headphones plugged into its connector, so even though I am often using it with my OnePlus buds for calls, I typically have an audio cable also plugged into it, basically doing nothing.&lt;/p&gt;

&lt;h3&gt;Software&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4RQbIEd---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/audition-800-30df79fda.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4RQbIEd---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/audition-800-30df79fda.jpg" alt="Adobe Audition"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For recording (and editing) just me and a mic, I generally use &lt;a href="https://www.adobe.com/products/audition.html"&gt;Adobe Audition&lt;/a&gt;. I have been experimenting with demos of Ableton and Logic, but haven’t committed to either yet. For recording a mic plus an application or VoIP call, I pretty much always use &lt;a href="https://rogueamoeba.com/audiohijack/"&gt;Audio Hijack&lt;/a&gt;; it’s a classic macOS app for a reason. In fact, I have something of a chain of Rogue Amoeba software I use for audio and video, including &lt;a href="https://rogueamoeba.com/farrago/"&gt;Farrago&lt;/a&gt; (SFX) and &lt;a href="https://rogueamoeba.com/loopback/"&gt;Loopback&lt;/a&gt; (audio routing). I know there are open source and free equivalents for both, but especially with Loopback, I found its flexibility and ease of use worth the money. I don’t really use any effects on input aside from some volume overrides, but probably should.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wg-UxYT7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/ra-apps-800-b1b4d5266.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wg-UxYT7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/ra-apps-800-b1b4d5266.jpg" alt="Farrago, Audio Hijack, and Loopback from Rogue Amoeba"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Video&lt;/h2&gt;

&lt;p&gt;Ok, unsurprisingly this is where things get a bit more complex. I mix up hardware and software here for an obvious reason: there’s software each camera needs, and then there’s software I use no matter what.&lt;/p&gt;

&lt;h3&gt;A setup&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bqi8U2zD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_105508-800-78686ccfb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bqi8U2zD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_105508-800-78686ccfb.jpeg" alt="Logitech Streamcam"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x0nNA7ug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_105526-800-6bc6ea036.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x0nNA7ug--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210827_105526-800-6bc6ea036.jpeg" alt="UTEBIT magic arm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://www.amazon.com/gp/product/B07TZT4Q89/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B07TZT4Q89&amp;amp;linkId=bf7e3dedf4fcc4bcb18d14a6a8447371"&gt;Logitech Streamcam&lt;/a&gt;. I am still not sure if it’s worth the extra cost over the classic &lt;a href="https://www.amazon.com/gp/product/B01LXCDPPK/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B01LXCDPPK&amp;amp;linkId=89d6665d4a8faa4c16677840178d9a2d"&gt;Logitech C922&lt;/a&gt;, but there you go, I have it now. It’s mounted on a &lt;a href="https://www.amazon.com/gp/product/B07H77KB7R/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B07H77KB7R&amp;amp;linkId=5bf2d279aa6549ba67662550d6502bb7"&gt;UTEBIT magic arm&lt;/a&gt; sort of behind my monitor, but I am not completely happy with the placement, as it makes me look a bit too far away. I have tried it in various positions with the same lack of satisfaction; it’s hard to balance a good camera shot with being able to see your screen, and I am not there yet. I mostly use the camera as a straight input into an application, but I am starting to experiment more with using the supplied &lt;a href="https://www.logitech.com/en-us/product/capture/"&gt;Logi Capture&lt;/a&gt; software for more fine-grained camera control, and then routing that into an application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bvOkR5hc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/lsc-800-3bbb75433.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bvOkR5hc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/lsc-800-3bbb75433.jpeg" alt="Logi Capture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  B setup
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7_DKm844--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/iriun-612-78b8610e9.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7_DKm844--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/iriun-612-78b8610e9.jpeg" alt="Iriun Camera"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Naturally, when I am just having a call, I use the camera built into my laptop, but for recording and streams I actually use my phone camera. Uh huh, my phone camera. The phone itself is the least relevant part, as any modern phone camera is decent, but my current one is a OnePlus Nord. The built-in camera software does a lot of annoying autofocusing that can sometimes give my video an odd, “wishy washy” look, but most people don’t notice it. I actually much preferred using my old Essential phone, as its camera software did next to nothing. Ironically, everyone complained about the camera software on that phone, and that made it a perfect external camera. There is a plethora of applications now that let you use a smartphone as an external camera, but my choices are &lt;a href="https://www.dev47apps.com/obs/"&gt;DroidCam OBS&lt;/a&gt; (when recording only with OBS) and &lt;a href="https://iriun.com"&gt;Iriun&lt;/a&gt; for pretty much everything else. I was impressed enough with both to pay for the full versions. Iriun is one of those “Virtual Camera” applications, which means that on macOS you sometimes need to jump through the unsigning hoop to get it to work in some applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Software
&lt;/h3&gt;

&lt;p&gt;OK, this is where it gets complicated. Let’s add more subheadings…&lt;/p&gt;

&lt;h4&gt;
  
  
  Livestreaming
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JkZtPMO---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/obs-800-15c10e4ae.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JkZtPMO---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/obs-800-15c10e4ae.jpeg" alt="Open Broadcast Studio"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yiUaV4EK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/restream-800-c6f15a6e0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yiUaV4EK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/restream-800-c6f15a6e0.jpeg" alt="Restream Studio"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When it’s just me, I use &lt;a href="https://obsproject.com"&gt;Open Broadcaster Software (OBS)&lt;/a&gt;. I think I could write a separate post on my OBS setup, so that’s all I’ll say for now 😅. If I have guests, I use &lt;a href="https://restream.io/studio"&gt;Restream Studio&lt;/a&gt;, which is web-based. I am not a huge fan of doing things in the browser, but it’s simple for others to join, and it gives me enough flexibility for setup, quality, and graphics. I also use it to deliver my livestreams (via an RTMP feed) when I use OBS, so it’s multipurpose. It’s not cheap, but it’s worth it, and they often have discount codes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Recording
&lt;/h4&gt;

&lt;p&gt;I vary between QuickTime, &lt;a href="https://www.techsmith.com/screen-capture.html"&gt;Snagit&lt;/a&gt;, Logi Capture, and OBS, depending on whether I want to record one camera, multiple cameras, or a camera plus a screen share, and quite often on the quality I want and my mood. Sometimes I am not entirely satisfied with the video quality of any of these options, and I am never entirely sure whether it’s the camera or the software.&lt;/p&gt;

&lt;h4&gt;
  
  
  Editing
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0LLIyhK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/premiere-800-085a4ec6f.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0LLIyhK2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/premiere-800-085a4ec6f.jpeg" alt="Adobe Premiere"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Mostly &lt;a href="https://www.adobe.com/products/premiere.html"&gt;Adobe Premiere&lt;/a&gt;, but sometimes QuickTime or Snagit for something simple. I have tried &lt;a href="https://www.blackmagicdesign.com/products/davinciresolve/"&gt;DaVinci Resolve&lt;/a&gt;, but I couldn’t get on with it; I am too used to Premiere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lighting
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s5wgQdiA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210828_122608-800-8d65c74a1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s5wgQdiA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210828_122608-800-8d65c74a1.jpeg" alt="Neweer LED lights"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I know lighting is important, and I am late to that party, having muddled through for some time. I acquired some cheap office-clearance photography lights last year, which gave reasonable lighting but took up too much space and annoyed my office mates. I switched to a set of the &lt;a href="https://www.amazon.com/gp/product/B07T8FBZC2/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B07T8FBZC2&amp;amp;linkId=66ade655d70c0c97d780c85b79cc3aac"&gt;Neewer LED desk lamps&lt;/a&gt;. I still haven’t got the setup quite right, and I am especially unhappy with the shadows they throw behind me. I know the techniques to fix this, but I am trying to balance space and convenience with quality 😉. I often add a slight bit of warmth to the lighting by also switching on my &lt;a href="https://www.amazon.com/gp/product/B0913K3X5J/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B0913K3X5J&amp;amp;linkId=38db7564e93c969a69ee9e1d7006df5e"&gt;Ikea Symfonisk&lt;/a&gt; desk lamp.&lt;/p&gt;

&lt;p&gt;One of my biggest issues with lighting in the office is that we have a giant window that looks out over Berlin’s Spree river. It’s a wonderful view, but terrible for consistent lighting, especially as the sun can reflect off the water and off trains passing on the nearby railway tracks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Screen annotation and highlighting
&lt;/h3&gt;

&lt;p&gt;To highlight what I am doing on the screen, I use a handful of different apps and setups.&lt;/p&gt;

&lt;p&gt;At its simplest, I increase the zoom level of windows to about 125-150%, depending on how they look. This is generally only possible with browser windows or Electron-based applications (which I typically dislike, but in this case it’s a positive). With other applications, it’s a mix of using the macOS screen zoom or just living with a less readable interface. Fortunately, the main applications I show are browser windows, &lt;a href="https://code.visualstudio.com"&gt;Visual Studio Code&lt;/a&gt; (Electron-based), and &lt;a href="https://iterm2.com/"&gt;iTerm&lt;/a&gt;, which is highly customisable. I have a profile specifically designed for live-streaming (I should release that 🤔) that increases the font size, removes background transparency, and makes things clearer for people to read.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1XGU0SY6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/presentify-632-878bc202f.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1XGU0SY6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/presentify-632-878bc202f.jpeg" alt="Presentify"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I use a small application called “&lt;a href="https://presentify.compzets.com/"&gt;Presentify&lt;/a&gt;” to highlight my cursor, and to annotate the screen (which I do rarely, but it’s nice to have both in the same application).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vD93kR0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/keycastr-493-cf5ad32c8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vD93kR0C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/keycastr-493-cf5ad32c8.jpeg" alt="Keycastr"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I occasionally use another small application called “&lt;a href="https://github.com/keycastr/keycastr"&gt;KeyCastr&lt;/a&gt;” to show the keyboard shortcuts I’m using, but that’s something I use more when recording tutorial videos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Control
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--klHKBcFf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210625_143818-800-cd33f577f.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--klHKBcFf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chrischinchilla.com/generated/images/IMG_20210625_143818-800-cd33f577f.jpeg" alt="Streamdeck Mini"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have a &lt;a href="https://www.amazon.com/gp/product/B07DYRS1WH/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;amp;tag=gregamamma-20&amp;amp;creative=9325&amp;amp;linkCode=as2&amp;amp;creativeASIN=B07DYRS1WH&amp;amp;linkId=290a17131e5a29d4ca5bfc8780cf8f77"&gt;Streamdeck Mini&lt;/a&gt; with a bunch of useful shortcuts for OBS, or whatever other application I’m using, set up so that I can trigger changes without having to use the mouse. If you don’t want to invest in hardware, they also have a pretty good mobile application that does the same.&lt;/p&gt;

&lt;h2&gt;
  
  
  That’s a wrap!
&lt;/h2&gt;

&lt;p&gt;That’s about it for hardware, software, and tools. Making, editing, and distributing videos, podcasts, and livestreams is a whole other discussion that I am not even sure I could put into blog post form, but maybe one day. I’ve mentioned in places throughout this post what I’d like to change and tweak about my setup, so I plan to revisit and keep this post updated as I do so.&lt;/p&gt;

&lt;p&gt;I hope you found this useful and I’d love to hear your comments and setup tips in the comments.&lt;/p&gt;

</description>
      <category>video</category>
      <category>audio</category>
      <category>podcast</category>
      <category>livestreaming</category>
    </item>
    <item>
      <title>What’s new for documentarians in Snagit 2021</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Fri, 18 Dec 2020 10:54:04 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/what-s-new-for-documentarians-in-snagit-2021-1jad</link>
      <guid>https://dev.to/chrischinchilla/what-s-new-for-documentarians-in-snagit-2021-1jad</guid>
      <description>&lt;p&gt;I ran a live stream a while back where I looked at new features in Snagit 2021 and how they can help those writing technical documentation or explanatory content generate great screenshots. And then I had a deadline, another deadline, and all sorts of other chaos, and somehow I only got around to the blog post to accompany that livestream… now. Well, on the positive side, in the meantime I have had much more time to get my hands on Snagit 2021 in a real-world context. So, if you need to create, edit, and manage screenshots, how can Snagit 2021 help you?&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/t6zK1Shn8xc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  When to upgrade your screenshot tool
&lt;/h2&gt;

&lt;p&gt;Wait! I hear you say. I can already take screenshots with macOS/Windows/Linux/the command line/some other random tool! Why do I need Snagit? That's a good question, and there is a possibility you don't need something like Snagit in your screenshot toolchain. That said, you might want to consider upgrading your screenshot tool if you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Need better management of screenshots beyond keeping them in folders in your desktop environment&lt;/li&gt;
&lt;li&gt;  Want to add edits, overlays, and annotations to a screenshot&lt;/li&gt;
&lt;li&gt;  Want to remove window chrome or simplify other UI elements from a screenshot&lt;/li&gt;
&lt;li&gt;  Need to take screenshots larger than a window allows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Themes
&lt;/h2&gt;

&lt;p&gt;Frequently when using other screenshot or image editing tools, you forget which style of annotation you used last time. Was it an arrow or a line? And what color or font did you use?&lt;/p&gt;

&lt;p&gt;Snagit 2021 lets you create multiple themes that define the colors (up to 8) and style of annotation elements that you can toggle between for relevant screenshots. You can export and share these themes around a team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4Es3F6sl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oh7kwrb3v3k5g9mf4m72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4Es3F6sl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/oh7kwrb3v3k5g9mf4m72.png" alt="Creating a theme"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dDBm2PZL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/09w7ho5j0n0oowed13i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dDBm2PZL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/09w7ho5j0n0oowed13i0.png" alt="Creating a theme"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Simplified user interface (SUI)
&lt;/h2&gt;

&lt;p&gt;Coined and created by TechSmith, the SUI concept has grown beyond the company. It refers to the abstracted interface you often see (more) in the documentation for GUI-driven applications to remove distractions from the concept you are currently explaining. A SUI is not relevant in every use case, and sometimes a cluttered user interface looks cluttered no matter how much you abstract it.&lt;/p&gt;

&lt;p&gt;Snagit 2021 brings tools that attempt to generate a SUI automatically from a screenshot and new tools to touch up that automatic generation.&lt;/p&gt;

&lt;p&gt;Below is an auto simplified screenshot from Discord and Spark on the Mac. As you can see, there's a little work needed to make the SUI useful. For Spark and Discord, I removed the simplification from the main interface elements and left it for the emails and messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M5OBmr4d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pq4x3odz5y4v7rm9bl2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M5OBmr4d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pq4x3odz5y4v7rm9bl2f.png" alt="Discord simplified"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_cajRdz4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7vwaalixojhdnjqpy5rk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_cajRdz4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7vwaalixojhdnjqpy5rk.png" alt="Spark simplified"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating animations from images
&lt;/h2&gt;

&lt;p&gt;I have typically used Snagit to create animated GIFs from video recordings of my screen (Snagit is one of the few tools that makes this easier), and Snagit 2021 brings new features to do the opposite, create videos and animated GIFs from static images. You can add a voiceover as you step through the images created, flip back and forth between the images and a webcam, add a background, and trim and cut segments. I will probably still record animated GIFs and video myself as it feels more "natural," but it's a good way to repurpose images you already have.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ghZ8Tfo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/g32d1x5154yz7g8rel96.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ghZ8Tfo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/g32d1x5154yz7g8rel96.gif" alt="Animated gif of images"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Combining images into instructional handouts
&lt;/h2&gt;

&lt;p&gt;I have always worked documenting developer software, so I never had much need to create one-pager-type docs that people can print out or send to customers. But if you do document that kind of product, Snagit 2021 offers a set of templates (and you can create your own) that you drag images into and add any relevant text. The fixed page sizing can be limiting, and as you can see from the rough example I assembled below, you need to ensure that the images fit the spaces predefined for them. I have no real need for this feature, but I can see support and customer success teams finding it useful to hand to customers to handle common questions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X83791gT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3swlzr9e6m3fngigvn40.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X83791gT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/3swlzr9e6m3fngigvn40.png" alt="Template of images"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Management
&lt;/h2&gt;

&lt;p&gt;In addition to your manual organization and tagging of screenshots, Snagit 2021 automatically organizes images for you by application and source (for example, animated GIF, from a template, etc.). I have found this increasingly useful to filter images I forgot I had taken.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vg3LN21f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/npd3rkkor0q6c25cki4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vg3LN21f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/npd3rkkor0q6c25cki4x.png" alt="Management sidebar"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Miscellaneous
&lt;/h2&gt;

&lt;p&gt;Not necessarily related to the 2021 release, but other features I frequently find useful are the following.&lt;/p&gt;

&lt;h3&gt;
  
  
  Presets for screenshots
&lt;/h3&gt;

&lt;p&gt;Taking a screenshot or recording in Snagit allows for a dizzying array of effects, parameters, and post-processing options. If there are combinations you use frequently, you can assign them to a preset, and even better, trigger a preset from a custom keyboard shortcut.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--70k9yooU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o9d0h9qtedjid55zkt48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--70k9yooU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/o9d0h9qtedjid55zkt48.png" alt="Presets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Taking screenshots of menu items
&lt;/h3&gt;

&lt;p&gt;While I found that Snagit couldn't handle some menus (probably due more to how the application was programmed), it's generally straightforward to grab a neatly isolated screenshot of a menu by selecting the appropriate &lt;em&gt;selection&lt;/em&gt; option from the Snagit Capture panel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Share options
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8ac5IKy_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wyohlaedbk8ifgukqs9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8ac5IKy_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wyohlaedbk8ifgukqs9v.png" alt="Share options"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inter-application sharing extensions are one of my favorite macOS features that few developers leverage. Snagit supports them and also adds a plethora of other customizable sharing destinations for the services you use frequently.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>techwriting</category>
      <category>screenshots</category>
    </item>
    <item>
      <title>The Weekly Squeak — KubeCon 2020</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Mon, 21 Sep 2020 10:28:02 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/the-weekly-squeak-kubecon-2020-171l</link>
      <guid>https://dev.to/chrischinchilla/the-weekly-squeak-kubecon-2020-171l</guid>
      <description>&lt;p&gt;KubeCon 2020 special today! So just head over to the podcast or video links to hear more.&lt;/p&gt;

&lt;p&gt;xx Chinch&lt;/p&gt;

&lt;h2&gt;
  
  
  Watch and listen
&lt;/h2&gt;

&lt;p&gt;Watch and listen to this newsletter below including my interviews with Vijoy Pandey (Cisco), Amith Nair (HashiCorp), and Michael Friedrich (GitLab).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://anchor.fm/theweeklysqueak/episodes/KubeCon-EU-2020-einrji/a-a3268rk?utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter"&gt;Listen to the podcast episode&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/qlzOcKn-SAg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>podcast</category>
      <category>devops</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>The Weekly Squeak — Tanmai Gopal of Hasura</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Mon, 14 Sep 2020 19:40:28 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/the-weekly-squeak-tanmai-gopal-of-hasura-5f54</link>
      <guid>https://dev.to/chrischinchilla/the-weekly-squeak-tanmai-gopal-of-hasura-5f54</guid>
      <description>&lt;p&gt;I’m back after a long break!&lt;/p&gt;

&lt;p&gt;In this issue, I speak with Tanmai Gopal about Hasura, an open source and hosted platform that brings instant GraphQL APIs to your data.&lt;/p&gt;

&lt;p&gt;It also features my weekly round-up of geeky news, including the best game consoles ever, GPT-3, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full interview
&lt;/h2&gt;

&lt;p&gt;Video coming next week, &lt;a href="https://anchor.fm/theweeklysqueak/episodes/Tanmai-Gopal-of-Hasura---Instant-GraphQL-APIs-for-your-data-ei1hib"&gt;for now you can hear the full interview with Tanmai Gopal&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weekly squeaks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---Tg457UV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AjW66gqu4HDysroXz" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---Tg457UV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AjW66gqu4HDysroXz" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.webdesignerdepot.com/2020/07/in-memory-of-flash-1996-2020/?utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter"&gt;In Memory of Flash: 1996–2020&lt;/a&gt; — &lt;a href="https://www.webdesignerdepot.com/2020/07/in-memory-of-flash-1996-2020/"&gt;www.webdesignerdepot.com&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;We are gathered here today…. Today I write in memory of Adobe Flash (née Macromedia), something that a bunch of people are actually too young to remember. I write this with love, longing, and a palpable sense of relief that it’s all over.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.theguardian.com/games/2020/jul/16/the-25-greatest-video-game-consoles-ranked?utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter"&gt;The 25 greatest video game consoles — ranked!&lt;/a&gt;
&lt;/h3&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.theguardian.com/games/2020/jul/16/the-25-greatest-video-game-consoles-ranked"&gt;www.theguardian.com&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;On the threshold of a new console generation with the PlayStation 5 and Xbox Series X, here are the industry’s most influential and impactful machines over 50 years of gaming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pnANwp2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AZB5bvycv0ppaSAcp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pnANwp2G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AZB5bvycv0ppaSAcp" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/?utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter"&gt;OpenAI’s new language generator GPT-3 is shockingly good — and completely mindless&lt;/a&gt; — &lt;a href="https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/"&gt;www.technologyreview.com&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;The AI is the largest language model ever created and can generate amazing human-like text on demand but won’t bring us closer to true intelligence. “Playing with GPT-3 feels like seeing the future,” Arram Sabeti, a San Francisco–based developer and artist, tweeted last week.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_rKfV-py--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AgrodkhD_fFgkf7Wj" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_rKfV-py--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AgrodkhD_fFgkf7Wj" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.oreilly.com/radar/automated-coding-and-the-future-of-programming/?utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter"&gt;Automated Coding and the Future of Programming&lt;/a&gt; — &lt;a href="https://www.oreilly.com/radar/automated-coding-and-the-future-of-programming/"&gt;www.oreilly.com&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;At Microsoft’s Build conference, Microsoft CTO Kevin Scott talked about an experimental project in which an AI, trained on code in GitHub, actually creates programs: it generates function bodies based on a descriptive comment and a method signature. (Skip to 29:00 of the video.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5h0UrSwf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2Aoj-rtkZqg2DsP8JB" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5h0UrSwf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2Aoj-rtkZqg2DsP8JB" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://arr.am/2020/07/09/gpt-3-an-ai-thats-eerily-good-at-writing-almost-anything/?utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter"&gt;GPT-3: An AI that’s eerily good at writing almost anything&lt;/a&gt; — &lt;a href="https://arr.am/2020/07/09/gpt-3-an-ai-thats-eerily-good-at-writing-almost-anything/"&gt;arr.am&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;I got access to the OpenAI GPT-3 API and I have to say I’m blown away. It’s far more coherent than any AI language system I’ve ever tried. All you have to do is write a prompt and it’ll add text it thinks would plausibly follow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iPdngpMV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AH0PA7gjESvoDuvjr" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iPdngpMV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AH0PA7gjESvoDuvjr" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.wired.co.uk/article/silicon-roundabout-tech-city-property?utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter"&gt;How London’s Silicon Roundabout dream turned into a nightmare&lt;/a&gt; — &lt;a href="https://www.wired.co.uk/article/silicon-roundabout-tech-city-property"&gt;www.wired.co.uk&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Old Street roundabout resounded with the merry tunes of mariachi songs. It was September 2017 and WeWork was trying to lure clients from rival coworking firm The Office Group with a savvy blend of membership discounts and Mexican music. That was in the before times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CzdsMfuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AGzmKyLmOfdtgckSx" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CzdsMfuh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/0%2AGzmKyLmOfdtgckSx" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://www.mailing.dzone.com/click.html?lc=UMd&amp;amp;mc=b&amp;amp;s=f9os&amp;amp;u=f&amp;amp;utm_campaign=The%20Weekly%20Squeak&amp;amp;utm_medium=email&amp;amp;utm_source=Revue%20newsletter&amp;amp;x=a62e&amp;amp;z=osaJFer"&gt;Top 13 GitHub Alternatives in 2020 [Free and Paid]&lt;/a&gt; — &lt;a href="https://www.mailing.dzone.com/click.html?x=a62e&amp;amp;lc=UMd&amp;amp;mc=b&amp;amp;s=f9os&amp;amp;u=f&amp;amp;z=osaJFer&amp;amp;"&gt;www.mailing.dzone.com&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;If you are looking for a reliable yet powerful GitHub alternative, this article unveils the top GitHub alternatives you can find in today’s market.&lt;/p&gt;

</description>
      <category>api</category>
      <category>graphql</category>
      <category>ai</category>
      <category>news</category>
    </item>
    <item>
      <title>Reducing Support Overload with an Einstein-Powered Chatbot</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Tue, 01 Sep 2020 07:44:26 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/reducing-support-overload-with-an-einstein-powered-chatbot-4ek</link>
      <guid>https://dev.to/chrischinchilla/reducing-support-overload-with-an-einstein-powered-chatbot-4ek</guid>
      <description>&lt;p&gt;Chatbots have a variety of use cases. One of the more common uses is to help reduce repetitive customer service work, enabling human agents to focus on more complex and personal tasks. In this tutorial, I create a basic bot for a small company that assists the customer support team. The bot can answer a selection of common questions about a fictional software application. The bot uses natural language processing (NLP) to recognize certain questions and respond appropriately, directing the user to a human support agent if they ask, or the bot is unable to understand or answer.&lt;/p&gt;

&lt;p&gt;There are a lot of platforms available for creating bots, but I decided to try &lt;a href="https://developer.salesforce.com/einstein"&gt;Einstein from Salesforce&lt;/a&gt;, as it can integrate with Salesforce data and workflows, which are commonly used by customer service teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Einstein Chatbot
&lt;/h2&gt;

&lt;p&gt;Einstein is AI for the Salesforce Platform, providing infrastructure for creating predictive models to interact with Salesforce data. This includes analytics, text, and image analysis, as well as a bot platform that combines text analysis and Salesforce workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Community and Add Chat
&lt;/h2&gt;

&lt;p&gt;I wanted to add a bot to an external site I had set up with Heroku, but following the steps for creating and adding it to a &lt;a href="https://trailhead.salesforce.com/en/content/learn/modules/service-cloud-platform-efficiency/create-self-service-communities-and-portals"&gt;Salesforce Community&lt;/a&gt; seemed to be the easiest and fastest way to see what was possible, as I didn’t need to set up a custom server or whitelisting.&lt;/p&gt;

&lt;p&gt;The first step is to create the community and add the chat capabilities that my bot will use to talk to the customers. I used &lt;a href="https://trailhead.salesforce.com/en/content/learn/projects/build-an-einstein-bot/prep-for-einstein-bots?trail_id=service_einstein"&gt;this Trailhead module&lt;/a&gt; as a guide. For my specific case, I called my community “Customer Support” and chose a suitable domain, &lt;em&gt;&lt;a href="https://acme-users-developer-edition.um6.force.com/support"&gt;https://acme-users-developer-edition.um6.force.com/support&lt;/a&gt;&lt;/em&gt;. I also changed some of the settings to “Acme Support” to suit my use case, and added my domain in the &lt;em&gt;Website URL&lt;/em&gt; step.&lt;/p&gt;

&lt;p&gt;When you add the embedded chat to your community components, make sure you select the correct &lt;em&gt;Chat Deployment&lt;/em&gt; and configure its look to suit your use case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3ivU_O24--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/ae04b0607e0e98e0d1305b8603094ecf435a0e509d1ef581.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3ivU_O24--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/ae04b0607e0e98e0d1305b8603094ecf435a0e509d1ef581.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to add the bot to a web page of your own, instead of creating a community for the bot, create a “web chat” button (following the same steps mentioned &lt;a href="https://trailhead.salesforce.com/en/content/learn/projects/build-an-einstein-bot/prep-for-einstein-bots#Tdxn4tBK-heading5"&gt;in the Trailhead module&lt;/a&gt; above), then follow &lt;a href="https://trailhead.salesforce.com/en/content/learn/modules/web-chat/web_chat_get_started"&gt;this Trailhead module&lt;/a&gt; to get started. &lt;/p&gt;

&lt;p&gt;At the end of the webchat flow in the Trailhead module is a code snippet that you can paste into your web page (including Apex pages) to add your bot. The flow for creating an Einstein-powered bot is the same regardless of whether you are implementing the bot on a Salesforce community or your custom site.&lt;/p&gt;

&lt;p&gt;Wherever you decide to host the bot, update the values in the code snippet, making sure you add your domain to the &lt;em&gt;Website URL&lt;/em&gt; text field, for example, "&lt;a href="https://acme-computers.herokuapp.com/"&gt;https://acme-computers.herokuapp.com/&lt;/a&gt;". You can customize the bot experience by changing the CSS and JavaScript values, or adding custom JavaScript values using the &lt;code&gt;embedded_svc.settings.extraPrechatFormDetails&lt;/code&gt; and &lt;code&gt;embedded_svc.settings.extraPrechatInfo&lt;/code&gt; parameters. Use &lt;code&gt;extraPrechatFormDetails&lt;/code&gt; to send additional information to the chat transcripts, and &lt;code&gt;extraPrechatInfo&lt;/code&gt; to map those values to new or existing records in Salesforce. Find more details in &lt;a href="https://developer.salesforce.com/docs/atlas.en-us.snapins_web_dev.meta/snapins_web_dev/snapins_web_prechat_details.htm"&gt;the documentation&lt;/a&gt;.&lt;/p&gt;
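&lt;p&gt;As a rough sketch of that kind of customization (the real snippet is generated for your org and defines &lt;code&gt;embedded_svc&lt;/code&gt; for you; all field names, labels, and values here are invented examples):&lt;/p&gt;

```javascript
// The generated snippet defines embedded_svc; it is stubbed here so the
// sketch is self-contained. All field names and labels are invented.
const embedded_svc = { settings: {} };

// Send an extra field to the chat transcript...
embedded_svc.settings.extraPrechatFormDetails = [
  { label: 'Email', transcriptFields: ['Email__c'], displayToAgent: true },
];

// ...and map that value to a new or existing Contact record in Salesforce.
embedded_svc.settings.extraPrechatInfo = [
  {
    entityName: 'Contact',
    entityFieldMaps: [
      { fieldName: 'Email', label: 'Email', doCreate: false, doFind: true, isExactMatch: true },
    ],
  },
];
```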

&lt;h2&gt;
  
  
  Create Einstein Bot
&lt;/h2&gt;

&lt;p&gt;Now to the interesting part, adding and configuring a bot with Einstein. To get started I followed &lt;a href="https://trailhead.salesforce.com/content/learn/projects/build-an-einstein-bot/set-up-an-einstein-bot?trail_id=service_einstein"&gt;this Trailhead module.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Creating a bot requires a degree of pre-planning and consideration for how a user might interact with a bot, and the various types of questions and requests they might have for it. It’s worth thinking through how your customers currently interact with human support agents and finding ways to create parallels with a bot. A bot should also have an element of personality, and getting that personality right requires thinking about your current business branding, and when people might interact with your bot. For example, the tone of voice may need to change depending on the user's situation: it may be more appropriate for the bot to use less humor when dealing with a serious problem than when greeting a user for the first time. &lt;a href="https://help.salesforce.com/articleView?id=bots_service_best_practice.htm&amp;amp;type=5"&gt;The Salesforce docs&lt;/a&gt; provide additional resources you can read.&lt;/p&gt;

&lt;p&gt;This example Acme support bot is designed to help people experiencing problems with a simple piece of software that lets people log in to an account, and upload particular files.&lt;/p&gt;

&lt;p&gt;You can see the initial settings I added below. For the menu items, I added two of the common problem areas people have: Login Issues and Upload Issues, plus several other general options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jrzaTr8U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/8f74777535a8cafe664dd138dbc6706b03f01410b514a487.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jrzaTr8U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/8f74777535a8cafe664dd138dbc6706b03f01410b514a487.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
Overall settings for the bot



&lt;h2&gt;
  
  
  Build Einstein Bot
&lt;/h2&gt;

&lt;p&gt;With the bot in place, you can start making it suit the use case. First, here's an overview of the bot builder sections and what you can use them to change.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;em&gt;Overview&lt;/em&gt;: Basic bot information and settings. It’s worth checking that there is a deployment defined in the &lt;em&gt;Channels&lt;/em&gt; section. If you have followed similar steps so far, this is probably already set to the values from the earlier chat setup steps. In this section, you can also set the kind of information you want to store between sessions.&lt;/li&gt;
&lt;li&gt;  &lt;em&gt;Dialogs (see above screenshot)&lt;/em&gt;: Define the potential interaction points a user has with a bot. For example, this can include the different types of questions the bot asks the user to prompt discussion.&lt;/li&gt;
&lt;li&gt;  &lt;em&gt;Entities&lt;/em&gt;: The types of data that you want to collect from a user. For example, you can collect customer details, more information about technical issues, or purchasing preferences.&lt;/li&gt;
&lt;li&gt;  &lt;em&gt;Variables&lt;/em&gt;: Containers that store specific pieces of data collected from the user. These are where you store the values of entities that you define, or that Salesforce defines for you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, I'm going to look in detail at the most interesting part, the dialogs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Dialogs
&lt;/h3&gt;

&lt;p&gt;There are a handful of pre-defined functions for some of the dialogs you can see on the left-hand menu. The house icon shows that this is the default dialog the bot starts from, and the three horizontal lines show this is the main menu a user can always return to. There are also default dialogs for ending a chat, transferring to a human, or when the bot doesn’t understand (defaults to “Confused”).&lt;/p&gt;

&lt;p&gt;In the right-hand panel of each dialog, you can define the initial message and the next steps, such as a request for more details.&lt;/p&gt;

&lt;p&gt;To use Einstein with a bot, you need to click the &lt;em&gt;Enable Dialog Intent&lt;/em&gt; button on the top right of the main panel. Then, click the &lt;em&gt;Dialog Intent&lt;/em&gt; pane where you start adding “Utterances”. Utterances are the ways a user might ask a bot a particular question. Once you have added a minimum of 20, you can enable Einstein from the toggle above. With Einstein disabled, a bot can only handle exact matches to the utterances; with Einstein, it can infer the question. Once a bot has matched an utterance, it switches to the corresponding dialog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zNzqiBMm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/2620a8ca6ea4cf72da53c3dd2a5b764d99280ca2386d00cc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zNzqiBMm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/2620a8ca6ea4cf72da53c3dd2a5b764d99280ca2386d00cc.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bs6azjei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/8cb4b096b2569bc900b685ae88491afaaf33d1b18a8053bb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bs6azjei--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/8cb4b096b2569bc900b685ae88491afaaf33d1b18a8053bb.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this example, I added utterances for the login issues and upload issues dialogs. If Einstein finds a match, then the bot displays a message and presents a new menu item that either takes a customer somewhere for further information or directs them to a service representative. In a production chatbot, you would probably make this more complex.&lt;/p&gt;

&lt;p&gt;After your bot receives input, you can trigger next steps based on that input: asking further questions, redirecting to other dialogs, calling Salesforce Flows or Apex objects, or defining rules that trigger different combinations of the above based on the user's choices.&lt;/p&gt;

&lt;p&gt;For example, once my bot identifies that the user is having login issues, it asks if the user has an account or has forgotten their password, storing their answers and redirecting them to a human agent for help. I could have triggered an Apex action or a Workflow instead. I used static choices, but you can also populate these choices from Salesforce objects, or send a user into a new account or password reset flow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RFq5hUUK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/47302c32d747c5df0db831273021073c04addc3d9f2be33f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RFq5hUUK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/47302c32d747c5df0db831273021073c04addc3d9f2be33f.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6d4iD_4O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/2c984796334853c46cd22635f9266d345ad711bd2cab8841.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6d4iD_4O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/2c984796334853c46cd22635f9266d345ad711bd2cab8841.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can output the data that a user entered to confirm it with them. For example, after understanding that a user is having issues with uploading a file, the bot determines that the user is using macOS and asks what version they are using. If they select a value, the bot repeats it. If not, the bot provides instructions on how to find the version and asks again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tl9ZYjYl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/401ceddc889b79547c684c70198c612b3c4d8141db217c0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tl9ZYjYl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/401ceddc889b79547c684c70198c612b3c4d8141db217c0d.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZbVoiFRW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/5584557c5cf458a97427cbf13c2999bd8738cb5d7fa417e0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZbVoiFRW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/5584557c5cf458a97427cbf13c2999bd8738cb5d7fa417e0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To improve the user's bot experience, consider customizing the “Confused” dialog text. Below I changed the text and presented a menu, showing the general issues users experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---NJCO93V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/889a31257aa24f14c1de89529ef9e6ec4b75d3837f4c97ea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---NJCO93V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/889a31257aa24f14c1de89529ef9e6ec4b75d3837f4c97ea.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Preview Bot
&lt;/h2&gt;

&lt;p&gt;You can test the Einstein bot by first clicking the &lt;em&gt;Activate&lt;/em&gt; and &lt;em&gt;Preview&lt;/em&gt; buttons. Note that you can’t make changes to the bot while it is active. Select the appropriate &lt;em&gt;Embedded Service Channel&lt;/em&gt;, fill in contact details, and test your utterances, intents, and dialogs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EM8pPNQ0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/animations/7a7ef3d789d71d6ed214d3bbbd731abd41107e5249a90d70.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EM8pPNQ0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/animations/7a7ef3d789d71d6ed214d3bbbd731abd41107e5249a90d70.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0K43gd_M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/animations/0a1d8cd58639bef693965d932ad54a08a0f6e18a736cde39.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0K43gd_M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/animations/0a1d8cd58639bef693965d932ad54a08a0f6e18a736cde39.gif" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Activate Bot
&lt;/h2&gt;

&lt;p&gt;When the bot is ready, click the &lt;em&gt;Activate&lt;/em&gt; button, switch back to the community you created earlier, and switch the community into &lt;em&gt;Preview&lt;/em&gt; mode. Now you can chat with the bot as a typical user would.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Z78DKkn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/a9f7da9d4ee7a9c6645c17d1d02c463cade629daf3b129d0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Z78DKkn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://39296.cdn.cke-cs.com/xnX5w1TWo7hQQOywFbkx/images/a9f7da9d4ee7a9c6645c17d1d02c463cade629daf3b129d0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;
Chatting with a live bot



&lt;h2&gt;
  
  
  Training a Bot Over Time
&lt;/h2&gt;

&lt;p&gt;No matter how much you train and tweak it beforehand, once your bot is deployed live, people will use it in ways you didn’t anticipate. There are two ways you can debug problems and adapt your bot.&lt;/p&gt;

&lt;p&gt;The first is the &lt;em&gt;Performance&lt;/em&gt; page. Here you can see transcripts of past sessions and the events that took place.&lt;/p&gt;

&lt;p&gt;The second is the &lt;em&gt;Model Management&lt;/em&gt; page, which shows you how well your utterances are performing with users, and which utterances they are using instead of, or in addition to, the ones you defined. From this page, you can add more utterances and retrain the language model based on common usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, I showed how to add Salesforce's Einstein bot platform to your customer service toolset, and some of the configuration and features you can use to create a bot experience for your customers. I covered how to add an Einstein chatbot to a Salesforce or external webpage, and how to customize values passed to the bot from those webpages. &lt;/p&gt;

&lt;p&gt;We have all used chatbots that leave us frustrated, confused, and wanting to reach a human. Creating the user experience behind a chatbot to make it usable, valuable, and personal is the hardest part of the process. I hope using a platform that integrates directly with customer data helps you to create the experience your customers are looking for.&lt;/p&gt;

</description>
      <category>chatbot</category>
      <category>ai</category>
      <category>einstein</category>
      <category>nlp</category>
    </item>
    <item>
      <title>Create a Random Board Game Generator Using Microservices on Heroku</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Wed, 17 Jun 2020 15:06:55 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/create-a-random-board-game-generator-using-microservices-on-heroku-270j</link>
      <guid>https://dev.to/chrischinchilla/create-a-random-board-game-generator-using-microservices-on-heroku-270j</guid>
      <description>&lt;h2&gt;
  
  
  Why Microservices
&lt;/h2&gt;

&lt;p&gt;Traditionally, development teams built applications in one large codebase. This technique suited the way teams worked and what their users needed, but the modern user demands reliable, fast responses and near-constant updates to applications. The developers behind these applications want to try new techniques, tools, and languages to see if they improve a user's experience. Meeting these needs is difficult when an entire application is inside one large, tightly coupled codebase, often referred to as a "monolithic" application. As a result, microservices — breaking individual application components into smaller, self-contained "micro" services — have emerged as an alternative to this monolithic architecture. These microservices generally communicate with each other via standard APIs and run in containers that package applications and their dependencies into recreatable and scalable self-contained units.&lt;/p&gt;

&lt;p&gt;One major benefit of microservices is that if your service experiences increased demand, you can add more instances to cope. You can then reduce them again when no longer needed, keeping your infrastructure footprint and costs as lean as possible. This ability to add instances of a service with ease also means that you can upgrade or update services in a microservices architecture without any downtime for your entire application. Of course, microservices bring their own challenges as well. For a more in-depth exploration of the advantages and challenges of microservices, &lt;a href="https://blog.heroku.com/why_microservices_matter"&gt;read this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this article, I'm going to walk you through how to use Heroku as a way to deploy and become comfortable with microservices and then close with best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Services
&lt;/h2&gt;

&lt;p&gt;To show how to set up a microservices-based application on Heroku, I have a small novelty bot that generates random board game ideas. The application consists of the following components and services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  an NPM package of the lexicon of the bot&lt;/li&gt;
&lt;li&gt;  a series of bots that use the package&lt;/li&gt;
&lt;li&gt;  a Twitter bot that tweets on a schedule&lt;/li&gt;
&lt;li&gt;  a Telegram bot that responds when asked&lt;/li&gt;
&lt;li&gt;  a website that displays a random phrase provided by the bot on every reload&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I won't go too much into the code of each service, since each one is small and, beyond communicating with the others over HTTPS requests, doesn't add to the focus of this post (you can find all the code on GitHub). Instead, I focus on showing how to deploy each of these components as a microservice to Heroku and how communication between these services works.&lt;/p&gt;

&lt;h3&gt;
  
  
  NPM Module
&lt;/h3&gt;

&lt;p&gt;At the core of the bot is an npm module that uses the &lt;a href="https://github.com/v21/tracery"&gt;tracery library&lt;/a&gt; to generate a series of random strings based on dictionaries of words. It returns text to the bots in the form of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A {mechanic} game, where you are {you_are} {doing} {with_what} in {in}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;which results in something like:&lt;/p&gt;

&lt;p&gt;"A Role playing game, where you are Researchers Growing Great Old Ones in A Post apocalyptic world"&lt;/p&gt;

&lt;p&gt;The module is not a microservice or deployed to Heroku, but a utility library hosted on npm. If you are interested in looking more into the code, &lt;a href="https://github.com/ChrisChinchilla/Boardgame-jerk-generator"&gt;have a look at the GitHub repository&lt;/a&gt;.&lt;/p&gt;
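&lt;p&gt;As a simplified, self-contained stand-in for what the tracery-based module does (the word lists below are invented examples, not the module's real lexicon), the generation boils down to picking a random entry from each dictionary and substituting it into the template:&lt;/p&gt;

```javascript
// Pick a random entry from each dictionary and substitute it into the
// template. The word lists are invented examples, not the real lexicon.
const dictionaries = {
  mechanic: ['Role playing', 'Deck building', 'Worker placement'],
  you_are: ['Researchers', 'Pirates'],
  doing: ['Growing', 'Trading'],
  with_what: ['Great Old Ones', 'Spices'],
  in: ['A Post apocalyptic world', 'Deep space'],
};

const template = 'A {mechanic} game, where you are {you_are} {doing} {with_what} in {in}';

function generateIdea() {
  // Replace each {key} placeholder with a random entry from that dictionary
  return template.replace(/\{(\w+)\}/g, (match, key) => {
    const options = dictionaries[key];
    return options[Math.floor(Math.random() * options.length)];
  });
}

console.log(generateIdea());
```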

&lt;h3&gt;
  
  
  API Service
&lt;/h3&gt;

&lt;p&gt;In theory, I could use the npm module in all JavaScript-based services, but that's not microservice-friendly, and it restricts the programming languages I could use in the long run. To solve this, the first service is an HTTP API that wraps the module and returns a string. It's a small Node.js application that uses Express.js. &lt;a href="https://github.com/ChrisChinchilla/Boardgame-Jerk/tree/feature/api"&gt;You can find the full code on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To deploy the service to Heroku, you first need to create a new Heroku project and git remote using the &lt;a href="https://devcenter.heroku.com/articles/heroku-cli"&gt;Heroku CLI&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;heroku create &lt;span class="o"&gt;{&lt;/span&gt;NAME&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And then commit and push to the Heroku remote:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;git push heroku master
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you use &lt;a href="https://devcenter.heroku.com/articles/buildpacks#officially-supported-buildpacks"&gt;a language that Heroku buildpacks support&lt;/a&gt;, pushing to the remote triggers a detection and build process, which, for this application, is more than enough.&lt;/p&gt;

&lt;p&gt;Heroku uses "&lt;a href="https://www.heroku.com/dynos"&gt;dynos&lt;/a&gt;" to run instances of your applications. Dynos have different process types based on whether the task is long-running, one-off, or open to external traffic. The Heroku build process can create these for you depending on your buildpack, but you can also create a &lt;em&gt;Procfile&lt;/em&gt; that explicitly defines what process and command to use to run your app. For this service, create a &lt;em&gt;Procfile&lt;/em&gt; file and add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;web: node ./index.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Since the service accepts external HTTP traffic, you need to use the &lt;code&gt;web&lt;/code&gt; process type, as it is the only type that can receive HTTP traffic. &lt;a href="https://devcenter.heroku.com/articles/procfile#the-web-process-type"&gt;Read the Procfile documentation&lt;/a&gt; for more details on the other types.&lt;/p&gt;

&lt;p&gt;If you now visit your &lt;a href="https://dashboard.heroku.com/"&gt;dashboard&lt;/a&gt;, you can see the new application. Click the &lt;em&gt;Open app&lt;/em&gt; button to find the URL, or use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;heroku open
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Telegram bot
&lt;/h3&gt;

&lt;p&gt;The next service powers the Telegram bot that generates a random game idea when someone types a certain keyword.&lt;/p&gt;

&lt;p&gt;You can see &lt;a href="https://github.com/ChrisChinchilla/boardgamejerk-telegram"&gt;the full code for the service on GitHub&lt;/a&gt;. Again, it's a small service and uses the &lt;a href="https://github.com/edisonchee/slimbot"&gt;slimbot&lt;/a&gt; dependency to make calls to the Telegram API.&lt;/p&gt;

&lt;p&gt;The Telegram bot API has a couple of odd quirks. Instead of your application exposing an endpoint that the Telegram API sends requests to, your application authenticates with an access token and listens for updates from the Telegram API. You use the "&lt;a href="https://core.telegram.org/bots#6-botfather"&gt;botfather&lt;/a&gt;" to create a bot, and it gives you the access token.&lt;/p&gt;

&lt;p&gt;As you shouldn't expose that token to the public, use the &lt;a href="https://devcenter.heroku.com/articles/config-vars"&gt;Config Vars&lt;/a&gt; section of an application's preferences to define environment variables passed to the application. You can couple this with a local &lt;em&gt;.env&lt;/em&gt; file for when you are running and testing the application locally. To make it easier to move the service around, you should also create a variable for the host of the API service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5l_dqUbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bibw896wx6dqzgaly17u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5l_dqUbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bibw896wx6dqzgaly17u.png" alt="Config vars section of Heroku dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Config vars section of Heroku dashboard&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Add a &lt;em&gt;Procfile,&lt;/em&gt; which is similar to the API service, but runs a different JavaScript file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;web: node ./telegram.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create a Heroku project and commit and push the code. After the build finishes, your application should be ready and listening for requests from anyone who installed the Telegram bot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pSNLAFJS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6xqt6y1osbfa537vi86m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pSNLAFJS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6xqt6y1osbfa537vi86m.png" alt="Telegram bot example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Telegram bot example&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Twitter bot
&lt;/h3&gt;

&lt;p&gt;The next service powers the Twitter bot that tweets a random game idea on a weekly schedule.&lt;/p&gt;

&lt;p&gt;You can see &lt;a href="https://github.com/ChrisChinchilla/boardgamejerk-twitter"&gt;the full code for the service on GitHub&lt;/a&gt;. Again, it's a small service and uses the &lt;a href="https://www.npmjs.com/package/twit"&gt;twit&lt;/a&gt; dependency to make calls to the Twitter API.&lt;/p&gt;

&lt;p&gt;Most of the steps for setting it up on Heroku are the same as for the Telegram bot, but in addition to the host variable, you need a different set of Config Vars to authenticate against the Twitter API (&lt;a href="https://developer.twitter.com/en/docs/basics/authentication/overview"&gt;read more in the Twitter documentation&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;To run the bot on a schedule, you need to use the &lt;a href="https://devcenter.heroku.com/articles/scheduler"&gt;Heroku scheduler&lt;/a&gt;. I want the bot to run every Wednesday, but the scheduler's longest interval is daily, so I define the scheduler job to run the following command every day; it only runs the bot when the day of the week is 3 (Wednesday):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%u&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; 3 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then &lt;/span&gt;bin/run_bot_heroku.sh&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
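&lt;p&gt;The guard works because &lt;code&gt;date +%u&lt;/code&gt; prints the ISO-8601 weekday number, where Monday is 1 and Sunday is 7, so the script only runs when the value is 3, a Wednesday. With GNU date you can check the number for any given day:&lt;/p&gt;

```shell
# %u is the ISO-8601 weekday: 1 = Monday ... 7 = Sunday.
# 24 Nov 2021 was a Wednesday, so this prints 3 (GNU date only).
date -d "2021-11-24" +%u
```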



&lt;p&gt;And the &lt;em&gt;bin/run_bot_heroku.sh&lt;/em&gt; script takes the place of the &lt;em&gt;Procfile&lt;/em&gt;, with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
node twitter.js
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DasLLBTa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0tmzfcc7oz9bupf4sx93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DasLLBTa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0tmzfcc7oz9bupf4sx93.png" alt="Twitter bot example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Twitter bot example&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Website Service
&lt;/h3&gt;

&lt;p&gt;The final service is the website. This is static HTML, plus a small JavaScript snippet that calls the API service on page load to display an example of the bot output. &lt;a href="https://github.com/ChrisChinchilla/Boardgamejerk-Site/blob/master/index.html"&gt;You can see the full code on GitHub&lt;/a&gt;. Heroku doesn't officially support hosting vanilla HTML (as opposed to HTML generated by Rails etc.), but adding an entire web framework is overkill for this simple site. The workaround is to create an &lt;em&gt;index.php&lt;/em&gt; file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt; &lt;span class="k"&gt;include_once&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"index.html"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="cp"&gt;?&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When you create a Heroku project and commit and push the code, the build then detects the PHP file and uses the PHP/Apache buildpack, automatically generating a dyno with a web process type.&lt;/p&gt;

&lt;p&gt;The one negative to this approach is that the static page has no access to environment variables at runtime, so I hardcoded the API URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Znl5C-lq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lf6ofc1k0jp713gkbjyy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Znl5C-lq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lf6ofc1k0jp713gkbjyy.png" alt="Website example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Website example&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's it. We now have a series of services deployed to Heroku that interact with external APIs and platforms, but fetch their data from one independent service. If you update one, it doesn't affect the others, and we can add further services for different platforms, or ship new features and bug fixes to individual services, without touching the rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Applying Microservices Best Practices with Heroku
&lt;/h2&gt;

&lt;p&gt;Switching to microservices involves technical and organizational changes, and Heroku offers tools to assist with some of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automation
&lt;/h3&gt;

&lt;p&gt;In this tutorial, I had two different git remotes: one for GitHub, and one for Heroku. &lt;a href="https://devcenter.heroku.com/articles/github-integration"&gt;Instead, you can connect Heroku to a GitHub repository&lt;/a&gt; and automatically build and deploy code straight from a branch. The automatic deployment includes creating preview versions of your apps and &lt;a href="https://devcenter.heroku.com/articles/pipelines"&gt;pipelines&lt;/a&gt;, which are great for testing builds as part of a continuous delivery workflow before deploying to production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring
&lt;/h3&gt;

&lt;p&gt;Managing and analyzing metrics and logs is more complicated with a microservices architecture than with a monolith, as with microservices you have to gather information from multiple sources and aggregate it.&lt;/p&gt;

&lt;p&gt;Heroku helps with this task &lt;a href="https://devcenter.heroku.com/articles/metrics"&gt;by providing aggregated metrics&lt;/a&gt; for applications at a team level. You can use &lt;a href="https://devcenter.heroku.com/articles/logplex"&gt;Logplex&lt;/a&gt; to collate log entries into one stream and consume them with another tool, such as Grafana.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maintaining State
&lt;/h3&gt;

&lt;p&gt;Another common complexity with microservices is maintaining state for services as they scale or as they come and go. Heroku offers database and data management services (Postgres, Redis, and Kafka) that &lt;a href="https://devcenter.heroku.com/articles/heroku-postgresql#sharing-heroku-postgres-between-applications"&gt;you can share between services&lt;/a&gt;. A shared data store isn't strictly part of the microservices pattern, but it can help provide the data infrastructure for your services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Team Collaboration
&lt;/h3&gt;

&lt;p&gt;As different teams and team members can work on different application services independently, the use of microservices encourages and requires better communication. In addition to collaborating on code, &lt;a href="https://devcenter.heroku.com/articles/collaborating"&gt;you can add collaborators&lt;/a&gt; to your Heroku applications who can manage and maintain deployments, addons, and admin tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, I created a handful of small services for a hobby project and got them running with minimal time and effort using Heroku. I can now continue to add new services for other bot platforms as I work on support for them, knowing the existing services continue running with minimal action on my part.&lt;/p&gt;

</description>
      <category>heroku</category>
      <category>microservices</category>
      <category>chatbot</category>
      <category>api</category>
    </item>
    <item>
      <title>Language and understandable writing</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Mon, 10 Feb 2020 13:32:00 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/language-and-understandable-writing-55p</link>
      <guid>https://dev.to/chrischinchilla/language-and-understandable-writing-55p</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;a href="https://kauri.io/language-and-understandable-writing/01db62a1bdf54c4b99a852fe9700e930/a"&gt;Originally published on Kauri.io&lt;/a&gt;, where developers write, share &amp;amp; learn&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I appreciate that not everyone who writes documentation is a native English speaker, and even those who are may not understand the best way to write clearly and concisely. Many of us native English speakers had our last grammar lesson more than 20 years ago, and only recently learnt the tips and tricks we now use as professional writers. There are three important reasons that justify the time and effort of making your writing more understandable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  English is often the default language for technical documentation, so many of your readers are not native speakers either&lt;/li&gt;
&lt;li&gt;  Readers want to trust what you say, and certain language choices can help&lt;/li&gt;
&lt;li&gt;  Readers are trying to understand complex subjects, so every little
thing you do to make that easier is worth it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While you can spend a lot of time crafting the perfect sentence to explain something, here are a handful of tips to add that dash of clarity you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  K.I.S.S.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Or Keep it simple (stupid).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Don't use words that add no value. Words added for humour or character are fine (though they bring complexity of their own), but words that muddy the explanation (and thus the reader's understanding) are a distraction. In English, there are words called &lt;a href="https://en.wikipedia.org/wiki/Weasel_word"&gt;weasel words&lt;/a&gt; that add nothing but syntactical noise. They may be useful in fiction or creative writing, but for explanatory text, they are not.&lt;/p&gt;

&lt;p&gt;There are common weasel words that you probably use all the time without thinking, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  very&lt;/li&gt;
&lt;li&gt;  easy&lt;/li&gt;
&lt;li&gt;  just&lt;/li&gt;
&lt;li&gt;  only&lt;/li&gt;
&lt;li&gt;simply&lt;/li&gt;
&lt;li&gt;trivial&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Certain weasel words such as "easy" and "just" are worse than noise: they can come across as patronising. People use these words all the time without thinking about their implications. If you say something is "easy" and someone then spends 3 hours battling with dependency hell, it wasn't "easy", and your documentation probably annoyed them. If you are lucky enough that your project, developer experience, and documentation are good quality, then you don't need to tell people that it's "easy"; they will find that out for themselves.&lt;/p&gt;

&lt;p&gt;When you edit your first draft (which of course you should!), consider how many words you need to explain a concept. One of the advantages of English is that you can do quite a lot with few words, but many choose to use far more than they need. Continuing the theme of this section, every extraneous word adds syntactical noise that people have to sift through to find the information they need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Show off
&lt;/h2&gt;

&lt;p&gt;There are words that people use to show off how clever they are, which is comparable to writing overly complicated code for no reason other than to impress. Remember that when we write technical copy in English (often the lowest common denominator language in our sector), we are writing for international audiences, who may be reading in their second or third language. Adding complexity for your own indulgence is selfish and doesn't help anyone. If a simpler word does the job, use it. Let your code and ideas speak for how smart you are, not your knowledge of obscure English words.&lt;/p&gt;

&lt;h2&gt;
  
  
  Write confidently
&lt;/h2&gt;

&lt;p&gt;My final piece of writing advice is a grammatical one: write confidently. It's something subtle that I spend a lot of my time changing in the text I edit, but I feel strongly about it. Confident writing helps people believe that what you say is true and accurate, and doesn't leave them unsure whether they should believe you. A couple of small grammatical tricks help with this; they are sometimes hard to apply and may seem unnecessary, but they pay off in the end.&lt;/p&gt;

&lt;p&gt;Strangely, the best explanation is from &lt;a href="https://en.wikipedia.org/wiki/On_Writing_(Stephen_King)"&gt;Stephen King's &lt;em&gt;On Writing&lt;/em&gt;&lt;/a&gt;, even though it's not flattering about technical writing.&lt;/p&gt;

&lt;p&gt;The book is over ten years old, and times have changed (I hope). In this extract, he starts by discussing adverbs, words that modify other words, then moves on to the passive voice, which I cover next.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I think timid writers like them for the same reason timid lovers like passive partners. The passive voice is safe. There is no troublesome action to contend with; the subject just has to close its eyes and think of England, to paraphrase Queen Victoria. I think unsure writers also feel the passive voice somehow lends their work authority, perhaps even a quality of majesty. If you find instruction manuals and lawyers' torts majestic, I guess it does.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My first piece of advice is to use the active voice as much as possible. This means making it clear who the actor and the subject are in each sentence. There are situations where this isn't possible or relevant, but use it wherever you can. It may not be clear what I mean, so here's an example.&lt;/p&gt;

&lt;h3&gt;
  
  
  Passive voice
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Functions can be used to return a value&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Are you sure about that? Can they or not?&lt;/p&gt;

&lt;h3&gt;
  
  
  Active voice
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;You can use functions to return a value&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ahh, so I can use them!&lt;/p&gt;

&lt;p&gt;It's a small change, and might not even be that noticeable in isolation, but throughout an entire article, it does make a difference. It clarifies details, and the active voice gives readers more confidence, as the lack of clarity in passive voice can make it seem like you're not sure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Confident words
&lt;/h3&gt;

&lt;p&gt;Another trick alongside this (and an example of English vagueness) is to use more confident words, or remove less confident ones: use "can" instead of "may" or "should", and tell readers directly what to do instead of making it sound optional. For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You should add your key&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;OK, I feel like I probably have to add my key, but it kind of sounds like it's optional.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Add your key&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Right, nice and clear: I need to add my key.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>writing</category>
      <category>language</category>
      <category>learning</category>
    </item>
    <item>
      <title>Documentation structure</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Mon, 10 Feb 2020 13:31:47 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/documentation-structure-mp8</link>
      <guid>https://dev.to/chrischinchilla/documentation-structure-mp8</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;a href="https://kauri.io/documentation-structure/cb1cad8db083475389718cbea3217db2/a"&gt;Originally published on Kauri.io&lt;/a&gt;, a new site where developers write, share &amp;amp; learn&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Documentation structure applies to your documentation as a whole, and to each page. Let's start at the top and work down.&lt;/p&gt;

&lt;p&gt;There are different types of documentation your project might need. The terms I use to describe them below are just my terms; others use different ones. The explanation of what each type is matters more than its name, and what you decide to call them is up to you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation types
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Getting started
&lt;/h3&gt;

&lt;p&gt;A Getting started guide is often the starting point with your project. It should take people from knowing next to nothing about your project to installing and configuring it, and performing their first interactions with it. The extent of what "first steps" means depends somewhat on your project, but the guide should be simple enough for anyone to complete, yet complicated enough to show a semi-realistic use case that highlights the potential of your project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Guides
&lt;/h3&gt;

&lt;p&gt;Guides are a collection of documentation pages that take a user from getting started to the next steps. These are typically more in-depth around a particular topic or common use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reference
&lt;/h3&gt;

&lt;p&gt;If your project has an API, error codes, or other particular components that need a reference, this is the place. If the rest of your documentation tells users how to use your tools to build something, this is the place where you explain what individual tools do. Often you can autogenerate these docs from code or other sources, and that's fine. Anyone digging into this section knows what they are looking for and is looking for specifics on how to use it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Explanation
&lt;/h3&gt;

&lt;p&gt;Perhaps most relevant to the Web3 world is a section for the theoretical underpinnings of the project. This is where you explain your consensus algorithm and encryption methods in depth. Again, not everyone wants or needs to know this information, but certain people will.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation structure
&lt;/h2&gt;

&lt;p&gt;Creating good structure (or information architecture) for documentation can be a complex process, depending on how much documentation you have, the most important information people need to know, and the common pathways and questions they typically have.&lt;/p&gt;

&lt;p&gt;A good starting point is to divide your documentation along the lines of the categories outlined above, and then use feedback and analytics to tweak the structure over time. A typical alternative structure is to group documentation around use cases, and what a user might be trying to do, rather than arbitrary divisions. This doesn't suit all documentation projects, especially tools that a developer can use for nearly limitless applications, but can work well for focussed SaaS products.&lt;/p&gt;

&lt;p&gt;Another aspect to bear in mind is that no matter how much time you spend creating the perfect organisation and navigation, a majority of readers arrive at your documentation from search engines. Once they arrive, they hopefully continue through the pathways you create, but there is no guarantee of that. This means you should generally assume that someone arrives at a page with no knowledge of anything else in your documentation, and tell them what they should know before reading that page. You can do this with an explicit pre-requisites section, inline links to concepts and steps, or an expanding menu that won't always show a reader everything they need to know but does show them where the document sits in the wider structure.&lt;/p&gt;

&lt;p&gt;Finally, if possible, add multiple ways for people to find their way around your documentation, for example, a search box, related content, next steps etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Page structure
&lt;/h2&gt;

&lt;p&gt;Good page structure helps readers read. If a page is a wall of text, it's hard to process, and hard for people to find the details they are looking for. Good structure breaks up the reading experience, and draws attention to different topic sections and important pieces of information.&lt;/p&gt;

&lt;p&gt;There's an unexpected bonus to good page structure: it doesn't just improve readability for humans, but also for machines. Crawlers from search engines, digital assistants, semantic aggregators, and more are all assisted by good, predictable page structure that follows best practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Correct and helpful headings
&lt;/h3&gt;

&lt;p&gt;Headings help readers identify what a particular section covers. Use a correct heading hierarchy to indicate topics and sub-topics, and also to improve how machines read and understand the content, for example, for SEO.&lt;/p&gt;

&lt;p&gt;This means that a document should only ever have one top-level heading, typically a level one heading unless you are using a generator tool that adds top-level headings from tags or other sources of information.&lt;/p&gt;

&lt;p&gt;Subtitles should be level 2 headings, and any subtopics of those subtitles level 3, level 4, and so on. You can use as many of these as you need in a document, but be as consistent as possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Images and code examples
&lt;/h3&gt;

&lt;p&gt;As people scroll through a web page, their eyes are drawn to page elements that break up the wall of text. We are especially drawn to images, and developers are drawn to code examples, as they are often what they are looking for most.&lt;/p&gt;

&lt;p&gt;The trick is ensuring that important explanatory text is around these elements, so after someone's eyes are drawn to it, they see the surrounding text and (hopefully) read it.&lt;/p&gt;

&lt;p&gt;We cover what makes good images and code examples in other sections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Paragraphs
&lt;/h3&gt;

&lt;p&gt;White space is your friend in breaking up a wall of text; don't fear it. Start a new paragraph for every major concept, or every half dozen lines or so. Even better, if appropriate, add a sub-heading before it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Highlighting and Formatting
&lt;/h3&gt;

&lt;p&gt;Use formatting to highlight important pieces of information. My personal preferences are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;Code formatting&lt;/code&gt; for anything that is code.&lt;/li&gt;
&lt;li&gt;  &lt;em&gt;Italics&lt;/em&gt; for paths and actions. Many use code formatting for paths,
but that doesn't make sense to me, as it's not code.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Bold&lt;/strong&gt; for important information.&lt;/li&gt;
&lt;li&gt;  Any form of "double" or 'single' quote marks to highlight values to add somewhere, or the traditional usage of quote marks in the English language.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But which formatting you use for what isn't the important part; consistency is. If someone expects to see italics for file paths, then stick to it.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>writing</category>
      <category>structure</category>
      <category>dx</category>
    </item>
    <item>
      <title>Why write documentation?</title>
      <dc:creator>Chris Chinchilla</dc:creator>
      <pubDate>Mon, 10 Feb 2020 13:30:58 +0000</pubDate>
      <link>https://dev.to/chrischinchilla/why-write-documentation-56po</link>
      <guid>https://dev.to/chrischinchilla/why-write-documentation-56po</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;a href="https://kauri.io/why-write-documentation/203c87d1ee4b4444b0139fe054f28607/a"&gt;Originally published on Kauri.io&lt;/a&gt;, where developers write, share &amp;amp; learn&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's one of the first things you look at when you look at using a new project?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's likely some form of documentation. Whether it is official documentation, or external blog posts, videos, books, or even code comments.&lt;/p&gt;

&lt;p&gt;Ideal documentation should contain everything someone needs to get started with a project without having to read the code.&lt;/p&gt;

&lt;p&gt;In the words of Perl developer Ken Williams:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Documentation is complete when someone can use your project without having to look at its code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But when was the last time you read documentation that was complete, where everything worked, it was clear, and it addressed your exact questions and needs?&lt;/p&gt;

&lt;p&gt;I don't mean to criticise those who write documentation. It's a hard task. It's hard to keep up to date, it's hard to address every use case and combination of tools that a reader may have, and typically, small projects do not have a dedicated team member handling documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation has many readers
&lt;/h2&gt;

&lt;p&gt;It's not just developers who read documentation. Yes, it's mostly developers, but developers' colleagues and bosses also read it when making decisions about paying for or using software. More crucially, machines read it. Well-written documentation means that systems that parse content can make sense of what you've written, which is especially useful for searching your documentation, and for people finding it via search engines.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assume nothing
&lt;/h2&gt;

&lt;p&gt;Assumptions are unhelpful, often inaccurate, and annoy readers. Documentation should remove technical assumptions, and assumptions made about the reader.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical assumptions
&lt;/h3&gt;

&lt;p&gt;Developers often assume that every other developer has the same setup as them, with the same dependencies and dependency versions. We all know this is not true, and we have all encountered dependency hell, with tangles of (often surprising) dependencies blocking us from installing something until we figure out the problem and which specific packages fix the nightmare.&lt;/p&gt;

&lt;p&gt;When writing documentation, test all your assumptions. This takes longer but, like many things, saves you and your users time in the long run. Use tools such as virtual environments (if the language supports them), Docker, or virtual machines to test fresh setups, and follow the same process for any operating systems you intend to release for. You can automate much of this work and reuse it for testing your code; there's no reason not to tie code testing and documentation testing together.&lt;/p&gt;

&lt;h3&gt;
  
  
  About your reader
&lt;/h3&gt;

&lt;p&gt;The next assumption is around who your reader is and what they may know. We'll cover writing inclusive language later, but in summary, not every reader is like you. Developers don't all learn their craft in the same ways. Not all spent 3-4 years studying computer science. Many (possibly like you reading this) learnt through short, intense coding courses or boot camps. These shorter courses often teach students how to code practically, but not so much theory on topics such as design patterns, assembly language, or underlying principles.&lt;/p&gt;

&lt;p&gt;Documentation is not the place to show off how smart you are and how much you know. It is the place to explain to users how to use your project. If you need to explain complex theory because it is essential to understanding your project, then include it, and supply explanations and background to these concepts. You don't have to write these yourself (unless they don't exist anywhere else); links to quality external resources are fine.&lt;/p&gt;

&lt;p&gt;This is especially an issue in the Web3 space. It's an ecosystem full of new terminology, and (some) new technology, or new interpretations of old technology. There are concepts fundamental to Web3 such as consensus algorithms that are hard to explain, but are not as new or unique as we think, and we can learn from previous efforts to explain them in other ecosystems.&lt;/p&gt;

&lt;p&gt;There are ways to explain complex topics without blinding people with detail. Abstracting some of it away is fine. We'll look at the best places to explain each level of technical detail in the next post.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>writing</category>
      <category>learning</category>
      <category>dx</category>
    </item>
  </channel>
</rss>
