<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Doug Sillars</title>
    <description>The latest articles on DEV Community by Doug Sillars (@dougsillars).</description>
    <link>https://dev.to/dougsillars</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F100376%2F5694c39e-09c7-4eb4-8fd2-4924967b2144.png</url>
      <title>DEV Community: Doug Sillars</title>
      <link>https://dev.to/dougsillars</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dougsillars"/>
    <language>en</language>
    <item>
      <title>Database branching in Django apps using GitHub actions</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Mon, 02 Dec 2024 18:36:27 +0000</pubDate>
      <link>https://dev.to/hackmamba/database-branching-in-django-apps-using-github-actions-3lgh</link>
      <guid>https://dev.to/hackmamba/database-branching-in-django-apps-using-github-actions-3lgh</guid>
      <description>&lt;p&gt;Creating online previews of your applications is a great way to test that all the required functionality is present. When building and testing a dev build of your application from a pull request (PR), the last thing you want is for your tests to affect your production database.  Using a test branch of the production database ensures that the production database remains untouched, ensuring no accidental deletion of data or adding test data into the production database.&lt;/p&gt;

&lt;p&gt;In this post, we’ll create a set of GitHub Actions to automate the testing process of a pull request. Our GitHub Action will run when the pull request is created, generate a test branch of the production database, and deploy the code to Digital Ocean.  Once the PR is merged, a second GitHub Action will rebuild the Digital Ocean app with production code (and database), and the test database branch will be deleted.&lt;/p&gt;

&lt;p&gt;By finishing this article, you’ll be able to automate the creation of app previews using NeonDB branches, Digital Ocean, and Django. Let’s jump right in!&lt;/p&gt;

&lt;h1&gt;
  
  
  Setup
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Code repo
&lt;/h2&gt;

&lt;p&gt;To begin, we’ll use the &lt;a href="https://github.com/evanshortiss/django-neon-quickstart" rel="noopener noreferrer"&gt;Django Neon quickstart&lt;/a&gt; repo on GitHub. Create a fork and clone your fork locally. Follow the setup instructions in the README, and run the application locally to ensure it is up and running. (You’ll need &lt;a href="https://console.neon.tech/" rel="noopener noreferrer"&gt;a NeonDB account&lt;/a&gt; to add the environment variables that connect to NeonDB and power the app.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Digital Ocean
&lt;/h2&gt;

&lt;p&gt;Digital Ocean has a simplified process for launching applications on its platform. You can be up and running in minutes with just a few configuration steps.&lt;/p&gt;

&lt;p&gt;You’ll need an account at &lt;a href="https://cloud.digitalocean.com/" rel="noopener noreferrer"&gt;Digital Ocean&lt;/a&gt; to deploy your application. From the Digital Ocean dashboard, select “Create” → App Platform. Connect to GitHub, choose the django-neon-quickstart repo with branch &lt;code&gt;main&lt;/code&gt;, and click Next. I ran this on the $5/month instance. In the “build phase,” click Edit and add the three commands below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt
python manage.py makemigrations
python manage.py migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These are the same steps you ran to get the repo running locally; we’re just repeating them on Digital Ocean.&lt;/p&gt;

&lt;p&gt;Update the “Run” command to use gunicorn:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gunicorn django_neon.wsgi:application --bind 0.0.0.0:8000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, update the HTTP Port to 8000 to match the port in the repository.  &lt;/p&gt;

&lt;p&gt;In step two of the setup process, update the environment variables. You can do a bulk upload: copy and paste your &lt;code&gt;.env&lt;/code&gt; from the local repo as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjeaks4y1j7kag461eot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbjeaks4y1j7kag461eot.png" width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click through the rest of the commands, and on completion, the app will deploy and be available for use on the internet.&lt;/p&gt;

&lt;p&gt;Note that in this repository the deployed code is the &lt;code&gt;main&lt;/code&gt; branch. We would like to display the PR preview from the &lt;em&gt;dev&lt;/em&gt; branch. (Change the names &lt;em&gt;main&lt;/em&gt; and &lt;em&gt;dev&lt;/em&gt; to whatever branches you wish to preview.)  Let’s continue our setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub Secrets
&lt;/h2&gt;

&lt;p&gt;We need to add three GitHub secrets to the repository. Add secrets by clicking “Settings” in the top menu, then “Secrets and variables” → Actions, and add repository secrets.&lt;/p&gt;

&lt;p&gt;Here are the three secrets to be added:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;DO_KEY&lt;/em&gt;&lt;/strong&gt;&lt;strong&gt;:&lt;/strong&gt; A key from Digital Ocean with scopes to create, read, update, and delete apps. You can create this from the Digital Ocean dashboard under API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;NEON_API_KEY&lt;/em&gt;&lt;/strong&gt;&lt;strong&gt;:&lt;/strong&gt; Create your Neon API key in the Neon dashboard: click your avatar in the upper right corner, select Account Settings, then choose API keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;NEON_PW&lt;/em&gt;&lt;/strong&gt;&lt;strong&gt;:&lt;/strong&gt; This is the &lt;code&gt;PGPASSWORD&lt;/code&gt; from the &lt;code&gt;.env&lt;/code&gt; file.&lt;/li&gt;
&lt;/ul&gt;
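&lt;p&gt;If you prefer the command line, the GitHub CLI can add these secrets. A sketch, assuming &lt;code&gt;gh&lt;/code&gt; is authenticated and run from your fork’s clone (the shell variables holding the values are placeholders):&lt;/p&gt;

```shell
# Add the three repository secrets with the GitHub CLI.
# $DO_TOKEN, $NEON_KEY, and $NEON_PASSWORD are placeholders for your real values.
gh secret set DO_KEY --body "$DO_TOKEN"            # Digital Ocean API token
gh secret set NEON_API_KEY --body "$NEON_KEY"      # Neon API key
gh secret set NEON_PW --body "$NEON_PASSWORD"      # PGPASSWORD from .env
```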

&lt;h2&gt;
  
  
  Digital Ocean app spec files
&lt;/h2&gt;

&lt;p&gt;The App Spec file defines how the build process will be run at Digital Ocean. You can find your App Spec file in your App’s dashboard under “Settings.” (It will be autogenerated when you create your project.)&lt;/p&gt;

&lt;p&gt;In your GitHub Repository, create a &lt;code&gt;.do&lt;/code&gt; directory, and make two copies of the App Spec file: &lt;em&gt;app.yaml&lt;/em&gt; and &lt;em&gt;default.yaml&lt;/em&gt;. These will be used by our GitHub Actions to edit the Digital Ocean Application.&lt;/p&gt;
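&lt;p&gt;Rather than copying the spec out of the dashboard by hand, you can also pull it down with &lt;code&gt;doctl&lt;/code&gt;. A sketch, assuming &lt;code&gt;doctl&lt;/code&gt; is authenticated and &lt;code&gt;$APP_ID&lt;/code&gt; is a placeholder for your app’s UUID:&lt;/p&gt;

```shell
# Create the .do directory and seed both spec files from the live app.
mkdir -p .do
doctl apps spec get "$APP_ID" > .do/app.yaml   # this copy gets the PR-preview edits
cp .do/app.yaml .do/default.yaml               # this copy reverts the app to production
```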

&lt;p&gt;&lt;strong&gt;app.yaml is what will be provisioned when a pull request is opened:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;alerts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;rule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DEPLOYMENT_FAILED&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;rule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DOMAIN_FAILED&lt;/span&gt;
    &lt;span class="na"&gt;features&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;buildpack-stack=ubuntu-22&lt;/span&gt;
    &lt;span class="na"&gt;ingress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;django-neon-quickstart&lt;/span&gt;
        &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="nv"&gt;*seal-app-dev&lt;/span&gt;&lt;span class="err"&gt;**&lt;/span&gt;
    &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nyc&lt;/span&gt;
    &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;build_command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
        &lt;span class="s"&gt;pip install -r requirements.txt&lt;/span&gt;
        &lt;span class="s"&gt;python manage.py makemigrations&lt;/span&gt;
        &lt;span class="s"&gt;python manage.py migrate&lt;/span&gt;
      &lt;span class="na"&gt;environment_slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python&lt;/span&gt;
      &lt;span class="na"&gt;envs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PGHOST&lt;/span&gt;
        &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RUN_AND_BUILD_TIME&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="nv"&gt;*new_host&lt;/span&gt;&lt;span class="err"&gt;**&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PGDATABASE&lt;/span&gt;
        &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RUN_AND_BUILD_TIME&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;neondb&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PGUSER&lt;/span&gt;
        &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RUN_AND_BUILD_TIME&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;neondb_owner&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PGPASSWORD&lt;/span&gt;
        &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RUN_AND_BUILD_TIME&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="nv"&gt;*new_password&lt;/span&gt;&lt;span class="err"&gt;**&lt;/span&gt;
      &lt;span class="na"&gt;github&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="err"&gt;*&lt;/span&gt;&lt;span class="nv"&gt;*dev&lt;/span&gt;&lt;span class="err"&gt;**&lt;/span&gt;
        &lt;span class="na"&gt;deploy_on_push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dougsillars/django-neon-quickstart&lt;/span&gt;
      &lt;span class="na"&gt;http_port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8000&lt;/span&gt;
      &lt;span class="na"&gt;instance_count&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;instance_size_slug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps-s-1vcpu-1gb-fixed&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;django-neon-quickstart&lt;/span&gt;
      &lt;span class="na"&gt;run_command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gunicorn django_neon.wsgi:application --bind 0.0.0.0:8000&lt;/span&gt;
      &lt;span class="na"&gt;source_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are four changes made in this file from the original at Digital Ocean:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;name: add “-dev” to the end of the app name.&lt;/li&gt;
&lt;li&gt;PGHOST: value should be &lt;code&gt;new_host&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;PGPASSWORD: value should be &lt;code&gt;new_password&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;github: branch &lt;code&gt;dev&lt;/code&gt; replaces &lt;code&gt;main&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;When our GitHub Action runs:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The name of the application will change in the Digital Ocean dashboard, making it clear that the app is currently running a dev build.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;new_host&lt;/em&gt; and &lt;em&gt;new_password&lt;/em&gt; will be programmatically updated with the values from our newly created NeonDB branch.
&lt;/li&gt;
&lt;li&gt;The dev branch of our code will be deployed to Digital Ocean so we can see the changes.
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;default.yaml is used to revert the App Spec to production.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;default.yaml&lt;/em&gt; has two changes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;name: Add “-prod” to the end of the name.&lt;/li&gt;
&lt;li&gt;PGPASSWORD: the &lt;code&gt;new_password&lt;/code&gt; placeholder replaces the password value (the delete Action swaps in the production password from the &lt;code&gt;NEON_PW&lt;/code&gt; secret).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;(We’re not showing the &lt;a href="https://github.com/dougsillars/django-neon-quickstart/blob/main/.do/default.yaml" rel="noopener noreferrer"&gt;entire file&lt;/a&gt; here for space reasons.)&lt;/p&gt;

&lt;p&gt;This will change the name in the Digital Ocean dashboard to show that prod is now deployed. The password reverts to the original password from the primary branch of the database, and since the branch is main, the build uses the main branch of the code.&lt;/p&gt;
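&lt;p&gt;The relevant portion of &lt;em&gt;default.yaml&lt;/em&gt; might look like this (a sketch showing only the fields that differ from &lt;em&gt;app.yaml&lt;/em&gt;; the rest matches the autogenerated spec):&lt;/p&gt;

```yaml
name: seal-app-prod            # "-prod" suffix signals production in the dashboard
services:
- envs:
  - key: PGPASSWORD
    scope: RUN_AND_BUILD_TIME
    value: new_password        # placeholder; the delete Action seds in the NEON_PW secret
  github:
    branch: main               # deploy the production code branch
```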

&lt;p&gt;With these changes, we are now ready to begin implementing our two GitHub Actions: “Create NeonDB Branch” and “Destroy NeonDB Branch.”&lt;/p&gt;

&lt;h1&gt;
  
  
  Create NeonDB branch
&lt;/h1&gt;

&lt;p&gt;When a pull request is made to the main branch, this GitHub Action will fire and perform a number of steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a development branch of the NeonDB database.

&lt;ul&gt;
&lt;li&gt;Grab the host and password of this new DB.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Check out the GitHub Code.&lt;/li&gt;
&lt;li&gt;Use a &lt;code&gt;sed&lt;/code&gt; command to replace the &lt;code&gt;new_host&lt;/code&gt; and &lt;code&gt;new_password&lt;/code&gt; placeholders in the &lt;code&gt;.do/app.yaml&lt;/code&gt; file with the values extracted in step 1a.&lt;/li&gt;
&lt;li&gt;Install the Digital Ocean CLI.&lt;/li&gt;
&lt;li&gt;Update the App Spec with the new yaml file.&lt;/li&gt;
&lt;li&gt;Initiate a Digital Ocean deployment.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create Neon Branch and deploy dev to DO&lt;/span&gt;
    &lt;span class="na"&gt;run-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create a Neon Branch 🚀&lt;/span&gt;
    &lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;opened&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Create-Neon-Branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Verify NEON API Key presence&lt;/span&gt;
            &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;if [ -z "${{ secrets.NEON_API_KEY }}" ]; then&lt;/span&gt;
                &lt;span class="s"&gt;echo "NEON_API_KEY is empty"&lt;/span&gt;
              &lt;span class="s"&gt;else&lt;/span&gt;
                &lt;span class="s"&gt;echo "NEON_API_KEY is set"&lt;/span&gt;
              &lt;span class="s"&gt;fi&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create Neon Branch&lt;/span&gt;
            &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;create-branch&lt;/span&gt;
            &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;neondatabase/create-branch-action@v5&lt;/span&gt;
            &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;project_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orange-violet-68318343"&lt;/span&gt;
              &lt;span class="c1"&gt;# optional (defaults to your primary  branch)&lt;/span&gt;
              &lt;span class="na"&gt;parent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt; 
              &lt;span class="c1"&gt;# optional (defaults to neondb)&lt;/span&gt;
              &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;neondb"&lt;/span&gt;
              &lt;span class="na"&gt;branch_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;development"&lt;/span&gt;
              &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;neondb_owner"&lt;/span&gt;
              &lt;span class="na"&gt;api_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;secrets.NEON_API_KEY&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo db_url ${{ steps.create-branch.outputs.db_url }}&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo host ${{ steps.create-branch.outputs.host }}&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo branch_id ${{ steps.create-branch.outputs.branch_id }}&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
            &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Replace variables in YAML&lt;/span&gt;
            &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;sed -i 's|new_host|'"${{ steps.create-branch.outputs.host }}"'|g' .do/app.yaml&lt;/span&gt;
              &lt;span class="s"&gt;sed -i 's|new_password|'"${{ steps.create-branch.outputs.password }}"'|g' .do/app.yaml&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install doctl&lt;/span&gt;
            &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;digitalocean/action-doctl@v2&lt;/span&gt;
            &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DO_KEY }}&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set environment variables&lt;/span&gt;
            &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;doctl auth init -t ${{ secrets.DO_KEY }}&lt;/span&gt;
              &lt;span class="s"&gt;# Update the app with the new specifications from neon&lt;/span&gt;
              &lt;span class="s"&gt;#  use active project id from DO url&lt;/span&gt;
              &lt;span class="s"&gt;doctl apps update 3aec3cab-fca5-4829-b5f4-1fd9d41b16a9  --spec .do/app.yaml&lt;/span&gt;
              &lt;span class="s"&gt;doctl apps create-deployment 3aec3cab-fca5-4829-b5f4-1fd9d41b16a9&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Hints on creating your action:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The NeonDB &lt;code&gt;project_id&lt;/code&gt; is in the URL when you load the project in the NeonDB dashboard.&lt;/li&gt;
&lt;li&gt;The UUID for your Digital Ocean application is also found in its dashboard URL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Save this workflow in &lt;code&gt;.github/workflows&lt;/code&gt;.&lt;/p&gt;
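&lt;p&gt;The &lt;code&gt;sed&lt;/code&gt; substitution the workflow performs can be sanity-checked locally before you push. A minimal sketch using a throwaway file and made-up host and password values:&lt;/p&gt;

```shell
# Simulate the placeholder substitution the Action runs against .do/app.yaml.
printf 'value: new_host\nvalue: new_password\n' > /tmp/app-spec-test.yaml
sed -i 's|new_host|ep-test-123.neon.tech|g' /tmp/app-spec-test.yaml
sed -i 's|new_password|s3cret|g' /tmp/app-spec-test.yaml
cat /tmp/app-spec-test.yaml   # placeholders are now replaced
```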

&lt;h1&gt;
  
  
  Delete NeonDB branch
&lt;/h1&gt;

&lt;p&gt;Once the PR has been tested and approved, we want to destroy the NeonDB branch and revert the Digital Ocean deployment back to production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The steps in this Action are:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Delete the NeonDB development branch.&lt;/li&gt;
&lt;li&gt;Check out the code.&lt;/li&gt;
&lt;li&gt;Update the &lt;em&gt;default.yaml&lt;/em&gt; with our DB password from the GitHub Secrets.&lt;/li&gt;
&lt;li&gt;Update the App Spec and deploy the application at Digital Ocean.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete Neon Branch with GitHub Actions Demo&lt;/span&gt;
    &lt;span class="na"&gt;run-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete a Neon Branch 🚀&lt;/span&gt;
    &lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;closed&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;delete-neon-branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
        &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete Neon branch&lt;/span&gt;
            &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;neondatabase/delete-branch-action@v3&lt;/span&gt;
            &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;project_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orange-violet-68318343"&lt;/span&gt;
              &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;development&lt;/span&gt;
              &lt;span class="na"&gt;api_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.NEON_API_KEY }}&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
            &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Replace variables in YAML&lt;/span&gt;
            &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;sed -i 's|new_password|'"${{ secrets.NEON_PW }}"'|g' .do/default.yaml&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install doctl&lt;/span&gt;
            &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;digitalocean/action-doctl@v2&lt;/span&gt;
            &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DO_KEY }}&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set environment variables&lt;/span&gt;
            &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;doctl auth init -t ${{ secrets.DO_KEY }}&lt;/span&gt;
              &lt;span class="s"&gt;# Update the app with the new specifications from neon&lt;/span&gt;
              &lt;span class="s"&gt;#  use active project id from DO url&lt;/span&gt;
              &lt;span class="s"&gt;doctl apps update 3aec3cab-fca5-4829-b5f4-1fd9d41b16a9  --spec .do/default.yaml&lt;/span&gt;
              &lt;span class="s"&gt;doctl apps create-deployment 3aec3cab-fca5-4829-b5f4-1fd9d41b16a9&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, that is a &lt;em&gt;lot of code.&lt;/em&gt; Don’t forget to update the DO UUIDs to match your deployment. Push this all to your repo so that we can see our automation in action.&lt;/p&gt;

&lt;p&gt;Here is the production version of the application running on Digital Ocean. I added a few extra elements for fun. The screenshot shows the mouse-hover color on the &lt;strong&gt;Ne (Neon)&lt;/strong&gt; element.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzr4i1xd9sc6k9yapjto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzr4i1xd9sc6k9yapjto.png" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s make some changes to the code and start a pull request.&lt;/p&gt;

&lt;p&gt;Create a dev branch for your code. Let’s change the colors in &lt;strong&gt;line 15&lt;/strong&gt; of &lt;code&gt;/elements/templates/elements_list.html&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;li hx-delete="element/{{ element.id }}" hx-target="body" class="relative flex flex-col text-center p-5 rounded-md bg-[#7846a8] transition-colors hover:bg-orange-500 text-[white]"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This should make the boxes purple, with orange hover and white text.&lt;/p&gt;

&lt;p&gt;Push the dev branch and open a pull request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b792balxxqvrhttt3gh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3b792balxxqvrhttt3gh.png" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the pull request is created, the “Create a Neon Branch” GitHub Action is called. A branch of the NeonDB is created, and the dev code is deployed to Digital Ocean.&lt;/p&gt;

&lt;p&gt;Refreshing the application, we see the colors have been updated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqnqicsefwa0mpt6y4vw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqnqicsefwa0mpt6y4vw.png" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More importantly, we can test all we want without worrying about the production database. Any changes made on the dev branch are in the development branch of NeonDB. As a part of our testing, we deleted a number of entries—only Neon is left:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauzscgwg47mem9sgj29p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauzscgwg47mem9sgj29p.png" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we’re happy with the PR, we can approve and merge the changes. This fires up the second GitHub Action: deleting the NeonDB development branch and pushing the production build to Digital Ocean:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb5iugxkpq6vwege1t9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjb5iugxkpq6vwege1t9m.png" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The new colors are in prod, and the prod database was untouched by our testing on the PR!&lt;/p&gt;

&lt;h1&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;In this post, we used GitHub Actions to automate creating and deleting a NeonDB database for testing pull request builds on Digital Ocean.  If you would like to look at the code, it is available on &lt;a href="https://github.com/dougsillars/django-neon-quickstart" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. You’ll just need to wire in your NeonDB and Digital Ocean credentials to get up and running.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating Developer Relations Metrics with Low Code RunBooks</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Mon, 24 Apr 2023 22:05:02 +0000</pubDate>
      <link>https://dev.to/unskript/automating-developer-relations-metrics-with-low-code-runbooks-5ch2</link>
      <guid>https://dev.to/unskript/automating-developer-relations-metrics-with-low-code-runbooks-5ch2</guid>
      <description>&lt;p&gt;My job at unSkript is to spread awareness and excitement around the DevOps tooling we have built.  But, I am also expected to provide reporting on various metrics around developer awareness and usage of our product. In this post, I walk through how I have automated the data collection process, so that I can spend more time creating content and building awareness.&lt;/p&gt;

&lt;p&gt;&lt;span&gt;A bit of background: at unSkript, we are building automation tools to reduce toil. In the DevOps/SRE space, toil is defined as the manual and repetitive work that needs to be done to keep everything shipshape.  If you ask me – collecting metrics from a bunch of different services (Github, Google Analytics, internal databases, Docker,….), and aggregating them in one place – that sounds like toil.  So let’s automate that away, and then I no longer have to think about it (until I decide to write a blog post about it, of course.)&lt;/span&gt;&lt;/p&gt;



&lt;h2&gt;&lt;span&gt;Collecting the Data&lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;unSkript is a tool to help you build RunBooks. A RunBook is a collection of steps (we call them Actions) that complete a task.  For DevOps teams, that could be &lt;a href="https://unskript.com/security-checkup-force-aws-load-balancers-to-redirect-to-https/"&gt;auto-remediation of your load balancers&lt;/a&gt;, &lt;a href="https://unskript.com/runbook-analysis-of-k8s-logs/"&gt;running health checks on a K8s cluster&lt;/a&gt;, or even monitoring your &lt;a href="https://unskript.com/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/"&gt;daily Cloud costs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I want to use these actions to collect a bunch of different data points, and store them all in one place. There are a few different ways that I use unSkript to collect the information:&lt;/p&gt;

&lt;h2&gt;&lt;span&gt;Built-in Actions&lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;&lt;span&gt;unSkript comes with hundreds of built-in Actions – simply drag &amp;amp; drop one into your RunBook, configure your credentials, and you are ready to go!  With built-in Actions, unSkript is essentially “no-code” to set up. Several built-in Actions are well suited to the data I want to collect: daily unique users from Google Analytics, and the GitHub star count.&lt;/span&gt;&lt;/p&gt;



&lt;p&gt;&lt;span&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zlNs234e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.31.00.jpg%3Fresize%3D300%252C85%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zlNs234e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.31.00.jpg%3Fresize%3D300%252C85%26ssl%3D1" alt="GA Action" width="300" height="85"&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sCgbVWJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-14.44.41.jpg%3Fresize%3D300%252C80%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sCgbVWJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-14.44.41.jpg%3Fresize%3D300%252C80%26ssl%3D1" alt="GitHub Star Action" width="300" height="80"&gt;&lt;/a&gt;NOTE: GitHub stars as a DevRel metric can be controversial (IMO, they are useful as an indicator metric), but feel free to leave a comment below with your thoughts.&lt;/span&gt;&lt;/p&gt;

&lt;h2&gt;&lt;span&gt;Database Queries &lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;&lt;span&gt;Many of our stats are collected from Segment and stored in a database (and that database is great for in-depth analysis).  But I want to keep all of my high-level statistics in one table, so I’ll use the PostgreSQL connector to extract the data points I’d like into my dataset:&lt;/span&gt;&lt;/p&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6CEHmwkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-11.53.29.jpg%3Fresize%3D300%252C232%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6CEHmwkA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-11.53.29.jpg%3Fresize%3D300%252C232%26ssl%3D1" alt="3 SQL Actions to add data" width="300" height="232"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These Actions are “low-code” in that once you drag &amp;amp; drop the action and make the connections, you still need to create a SQL query to grab the results.&lt;/p&gt;
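&lt;p&gt;For instance, the query pasted into a Postgres Action might count yesterday’s signups. Here is a runnable sketch using an in-memory SQLite database as a stand-in for Postgres; the events table and its schema are hypothetical, not our actual Segment schema:&lt;/p&gt;

```python
import sqlite3

# Stand-in for the Postgres table that Segment writes to (schema is hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event TEXT, event_day TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("signup", "2023-04-24"), ("signup", "2023-04-24"), ("login", "2023-04-24")],
)

# The kind of one-line aggregate you paste into the SQL query Action:
daily_signups = conn.execute(
    "SELECT COUNT(*) FROM events WHERE event = 'signup' AND event_day = '2023-04-24'"
).fetchone()[0]
print(daily_signups)
```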

&lt;h2&gt;&lt;span&gt;REST API&lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;&lt;span&gt;There are still a few more data points that I’d like to pull out of other tools.  We have a REST API connector that makes this easy: set up your credentials and the headers you need, and you can create a new Action that extracts your data via API.  These are also “low-code,” but do require some understanding of how to make API calls in order to set up the credentials properly.&lt;/span&gt;&lt;/p&gt;



&lt;p&gt;&lt;span&gt;For example: Docker Hub publishes the number of times our Docker image has been downloaded. We can collect this number each day using the REST API Action, adding the endpoint and headers to the Action:&lt;/span&gt;&lt;/p&gt;
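&lt;p&gt;Docker Hub’s public v2 repositories endpoint reports this count in a “pull_count” field. A sketch of building the URL and parsing a trimmed, sample response; the repository name here is illustrative:&lt;/p&gt;

```python
import json

def pulls_url(namespace: str, repo: str) -> str:
    # Docker Hub's public v2 endpoint for repository metadata.
    return f"https://hub.docker.com/v2/repositories/{namespace}/{repo}/"

def parse_pull_count(body: str) -> int:
    """Extract pull_count from the JSON body the endpoint returns."""
    return int(json.loads(body)["pull_count"])

# A trimmed sample of the response (real responses carry many more fields):
sample = '{"name": "awesome-runbooks", "pull_count": 12345}'
print(pulls_url("unskript", "awesome-runbooks"))
print(parse_pull_count(sample))
```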

&lt;p&gt;&lt;span&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GI0NbCRr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-11.23.55.jpg%3Fresize%3D300%252C297%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GI0NbCRr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-11.23.55.jpg%3Fresize%3D300%252C297%26ssl%3D1" alt="" width="300" height="297"&gt;&lt;/a&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J18tsC54--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-11.25.27.jpg%3Fresize%3D300%252C99%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J18tsC54--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-11.25.27.jpg%3Fresize%3D300%252C99%26ssl%3D1" alt="" width="300" height="99"&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

&lt;h2&gt;&lt;span&gt;Storing &amp;amp; reporting our data&lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;&lt;span&gt;Once we have collected all of the data, we can create a message and post it on Slack for the team to see:&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s7yO2Xyb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.35.34.jpg%3Fresize%3D300%252C267%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s7yO2Xyb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.35.34.jpg%3Fresize%3D300%252C267%26ssl%3D1" alt="Slack Action" width="300" height="267"&gt;&lt;/a&gt;The message that is sent to the channel is a &lt;a href="https://realpython.com/python-f-strings/"&gt;Python f string&lt;/a&gt; with variables added.&lt;/p&gt;
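&lt;p&gt;A sketch of what that f-string might look like; the variable names and values here are made up, not the ones in my RunBook:&lt;/p&gt;

```python
# Values gathered by the earlier Actions (illustrative numbers):
ga_users = 412        # from the Google Analytics Action
github_stars = 1210   # from the GitHub star count Action
docker_pulls = 12345  # from the Docker Hub REST call

# The f-string that becomes the Slack message body.
message = (
    f"Daily DevRel metrics:\n"
    f"- Unique site visitors: {ga_users}\n"
    f"- GitHub stars: {github_stars}\n"
    f"- Docker pulls: {docker_pulls}"
)
print(message)
```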

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ou-lwH8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot_2023-04-24_at_12_05_04.jpg%3Fresize%3D945%252C315%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ou-lwH8K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot_2023-04-24_at_12_05_04.jpg%3Fresize%3D945%252C315%26ssl%3D1" alt="" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;This is a fun way to update the team on a daily basis…but we also want to chart this data over time.  To accomplish this, we have a table in PostgreSQL for our stats, and we just make an INSERT using the prebuilt Postgres Action (again, this is low-code, as you must write the SQL INSERT command):&lt;/span&gt;&lt;/p&gt;
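&lt;p&gt;The INSERT itself is a one-liner. A runnable sketch, again using SQLite as a stand-in for Postgres, with an illustrative table name and columns:&lt;/p&gt;

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE devrel_stats (stat_date TEXT, ga_users INTEGER, "
    "github_stars INTEGER, docker_pulls INTEGER)"
)

# Parameterized insert -- the same shape you would write in the Postgres Action.
conn.execute(
    "INSERT INTO devrel_stats VALUES (?, ?, ?, ?)",
    ("2023-04-24", 412, 1210, 12345),
)
row = conn.execute("SELECT * FROM devrel_stats").fetchone()
print(row)
```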

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iaS5m6fA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.37.49.jpg%3Fresize%3D300%252C64%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iaS5m6fA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.37.49.jpg%3Fresize%3D300%252C64%26ssl%3D1" alt="" width="300" height="64"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;The data in Postgres feeds a Grafana dashboard, giving the team access to the latest metrics – and the best part is that no daily toil is required.  Many folks on the team just go to the dashboard to get the data – the DevRel team is no longer a bottleneck!&lt;/span&gt;&lt;/p&gt;

&lt;h2&gt;&lt;span&gt;Progressive enhancement&lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;&lt;span&gt;As time goes on, more questions about data will arise. &lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;As an example, since the number of Actions and RunBooks in GitHub keeps increasing, I was recently asked “how many Actions do we have in GitHub today?”&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;The first few times you are asked a question like this, you can probably get away with hand-waving and a ballpark figure… but after being asked a few times, I knew I needed a “real” answer. Reusing an existing GitHub Action, I was able to create a file in GitHub with the counts that I needed.  &lt;/span&gt;&lt;span&gt;By dragging a new Action into my RunBook, writing a few lines of Python (and making a small change to the Postgres insert), I was able to easily extend the current data collection to include more data.&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x1LDTNH9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.43.42.jpg%3Fresize%3D300%252C134%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x1LDTNH9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.43.42.jpg%3Fresize%3D300%252C134%26ssl%3D1" alt="" width="300" height="134"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;span&gt;Aside: We can also leverage these values to create custom badges for the Github readme, and on the website – so creating the data has been a double win!&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YL2HuT7c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.44.33.jpg%3Fresize%3D300%252C148%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YL2HuT7c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-17.44.33.jpg%3Fresize%3D300%252C148%26ssl%3D1" alt="" width="300" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;&lt;span&gt;Scheduling&lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;&lt;span&gt;Now that I have built a RunBook that collects all of the data we need (so far…), I want to automate its execution.  Using unSkript’s Scheduler, I have set my RunBook to run at midnight GMT every day.&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mXByz9xz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-15.32.28.jpg%3Fresize%3D945%252C549%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mXByz9xz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-24-at-15.32.28.jpg%3Fresize%3D945%252C549%26ssl%3D1" alt="" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I have daily reports created for the team that require ZERO work on my part!&lt;/p&gt;

&lt;h2&gt;&lt;span&gt;Summary&lt;/span&gt;&lt;/h2&gt;

&lt;p&gt;&lt;span&gt;Collecting and aggregating statistics via automation frees the team to focus on our “real” work – creating more tools and applications – instead of spending significant time on metric collection.  At the same time, everyone has visibility into the project, showing the value of the DevRel team without impacting their workload.&lt;/span&gt;&lt;/p&gt;


&lt;p&gt;&lt;span&gt;How does your DevRel team collect usage data?  If you’d like to give unSkript a try, check out our &lt;a href="https://us.app.unskript.io/"&gt;free trial&lt;/a&gt;.  Join our &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation"&gt;Slack Channel&lt;/a&gt;, and I would be happy to chat with you about strategies for building your RunBook to collect your analytics data.  I’ll also be happy to share the skeleton of my RunBook to get you started!&lt;/span&gt;&lt;/p&gt;

</description>
      <category>blog</category>
      <category>intelligentautomatio</category>
      <category>leadership</category>
      <category>otherposts</category>
    </item>
    <item>
      <title>Automating the GitHub *Nudge*</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 04 Apr 2023 16:46:28 +0000</pubDate>
      <link>https://dev.to/unskript/automating-the-github-nudge-3i3k</link>
      <guid>https://dev.to/unskript/automating-the-github-nudge-3i3k</guid>
      <description>&lt;p&gt;Git is a tool of actions.  Millions of times a day, users check out, push, pull, and merge submissions to their repositories.  The scale is staggering: over 3.5 billion &lt;a href="https://octoverse.github.com/2022/developer-community"&gt;contributions were made on GitHub in 2022&lt;/a&gt;.  That’s 227 million pull requests merged, and over 31 million issues closed.  But what about the PRs and issues that fall through the cracks and are forgotten?  Should they just be left behind (like the Island of Misfit Toys in Rudolph the Red-Nosed Reindeer)?&lt;/p&gt;

&lt;p&gt;In this post, we introduce an automated RunBook that introduces the “Github nudge.” Merriam-Webster’s &lt;a href="https://www.merriam-webster.com/dictionary/nudge"&gt;definition&lt;/a&gt; for a nudge is “to prod lightly &lt;strong&gt;: &lt;/strong&gt;urge into action.”  The GitHub nudge identifies issues and PRs that have been sitting a while, and “nudges” the assignee to take a look.  By not letting the team forget that the issues exist – they are more likely to be acted upon!&lt;/p&gt;

&lt;p&gt;We’ve defined a few Actions in unSkript’s RunBook architecture to help us along this path:&lt;/p&gt;

&lt;h2&gt;Stale Issues&lt;/h2&gt;

&lt;p&gt;When issues have been assigned to a team member, but no work is being done on them, the issue has probably “gotten lost.”  Everyone has a lot to work on, and sometimes these issues just lose priority – or get superseded by other tasks.  That does not mean that they should just be ignored – resolving the issue will improve the project.&lt;/p&gt;

&lt;p&gt;In unSkript, there is an Action to find “stale” issues: that is, issues that are over a certain age.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LmS7w-fH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-02-at-21.27.45.jpg%3Fresize%3D840%252C166%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LmS7w-fH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-02-at-21.27.45.jpg%3Fresize%3D840%252C166%26ssl%3D1" alt="GitHub Stale issues" width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This Action takes 3 input parameters: the GitHub owner, the repository, and the threshold (in days) beyond which you define an issue as stale.  In the case of our repository &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation"&gt;https://github.com/unskript/Awesome-CloudOps-Automation&lt;/a&gt;, the owner is unskript, the repo is Awesome-CloudOps-Automation, and we set the stale threshold at 14 days.  There’s no real science behind that number – it just felt right for our team.&lt;/p&gt;
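&lt;p&gt;Under the hood, “stale” is just a date comparison. A sketch of the filtering logic, with synthetic issue data (the real Action pulls these records from the GitHub API):&lt;/p&gt;

```python
from datetime import datetime, timedelta

def stale_issues(issues, threshold_days, now):
    """Return issues whose last update is older than the threshold."""
    cutoff = now - timedelta(days=threshold_days)
    return [i for i in issues if i["updated_at"] < cutoff]

# Synthetic data standing in for the GitHub API response:
now = datetime(2023, 4, 2)
issues = [
    {"issue_number": 346, "updated_at": datetime(2023, 3, 1)},   # 32 days old
    {"issue_number": 350, "updated_at": datetime(2023, 3, 28)},  # 5 days old
]
print([i["issue_number"] for i in stale_issues(issues, 14, now)])
```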

&lt;p&gt;The response is an array of issues that surpass that threshold. With this data, we can do some quick examinations:&lt;/p&gt;

&lt;h3&gt;Issues without an assignee&lt;/h3&gt;

&lt;p&gt;Issues that have not been assigned to a resource are not going to be worked on – we all have a lot on our plate already!  In the case of Awesome-CloudOps-Automation – these are all “good first issues” for those interested in contributing.  I like to scan this list once a week for changes – and to ensure that issues that &lt;strong&gt;do&lt;/strong&gt; need work are properly assigned.&lt;/p&gt;

&lt;h3&gt;Issues with an assignee&lt;/h3&gt;

&lt;p&gt;If an issue has been around for 2 weeks and is not yet resolved, it’s good to check on its status and confirm that action is being taken:&lt;/p&gt;

&lt;p&gt;For example – it appears that issue 346 is assigned to me, and I am overdue with an update (oops!):&lt;/p&gt;

&lt;pre&gt;{'assignee': NamedUser(login="dougsillars"),
   'issue_number': 346,
   'title': '[Action]: GitHub Comment on an issue'}

&lt;/pre&gt;

&lt;p&gt;In this case, the PR was already merged, but not connected to the issue, so it could be closed (whew!).&lt;/p&gt;



&lt;h2&gt;Stale Pull Requests&lt;/h2&gt;

&lt;p&gt;We can do the same thing for Pull Requests:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GYuGXQ1n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.14.30.jpg%3Fresize%3D578%252C168%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GYuGXQ1n--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.14.30.jpg%3Fresize%3D578%252C168%26ssl%3D1" alt="Stale PR Action in unSkript" width="578" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A stale PR almost feels &lt;em&gt;worse&lt;/em&gt; than a stale issue.  When the work is completed and ready (or nearly ready) to be integrated into the codebase, there is an immediate improvement to the software.  When a PR just languishes, the code does not improve, and sometimes further improvements are blocked.  The Action above gives a list of all PRs that are over a threshold (again, we use 14 days) for a given GitHub repository. For example:&lt;/p&gt;

&lt;pre&gt;{331: '3 cost optimization runbooks'}&lt;/pre&gt;



&lt;p&gt;This output does not provide anyone that I can ‘nudge’, but we have another Action that we can use:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ibM2r1pg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.21.47.jpg%3Fresize%3D638%252C156%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ibM2r1pg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.21.47.jpg%3Fresize%3D638%252C156%26ssl%3D1" alt="Github PR reviewer action" width="638" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To generate all of the reviewers to nudge, I first create a list of the PR numbers that I obtained from the first Action (the output was named “stalePull”):&lt;/p&gt;

&lt;pre&gt;oldPRs = []
for pr in stalePull[1]:
    oldPRs.append(list(pr.keys())[0])
print(oldPRs)
&lt;/pre&gt;

&lt;p&gt;The Get Pull Request Reviewer Action takes 3 inputs – owner, repository and PR number.&lt;/p&gt;

&lt;p&gt;Using the iteration command in unSkript, we can apply the list of pull requests (oldPRs) to the &lt;em&gt;pull_request_number&lt;/em&gt; variable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m2cSY24c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.24.50.jpg%3Fresize%3D300%252C236%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m2cSY24c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.24.50.jpg%3Fresize%3D300%252C236%26ssl%3D1" alt="iterating through all the pull requests" width="300" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will call the Action once per pull request – giving a full output of each PR, and who is left to review.&lt;/p&gt;

&lt;pre&gt;331: ['jayasimha-raghavan-unskript', 'shloka-bhalgat-unskript']&lt;/pre&gt;

&lt;p&gt;With a little Python, we can turn this around to list every user, and the PRs they should look at.&lt;/p&gt;
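&lt;p&gt;That inversion is a small dictionary flip – given each PR’s outstanding reviewers, build each reviewer’s outstanding PRs. A sketch (the PR numbers and usernames mirror the examples above):&lt;/p&gt;

```python
def prs_by_reviewer(reviewers_by_pr):
    """Invert {pr_number: [reviewers]} into {reviewer: [pr_numbers]}."""
    result = {}
    for pr, reviewers in reviewers_by_pr.items():
        for reviewer in reviewers:
            result.setdefault(reviewer, []).append(pr)
    return result

data = {331: ["jayasimha-raghavan-unskript", "shloka-bhalgat-unskript"],
        267: ["shloka-bhalgat-unskript"]}
print(prs_by_reviewer(data))
```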

&lt;pre&gt;'U03QU1K184X': [267, 153]&lt;/pre&gt;

&lt;p&gt;You may notice that the username is different.  This is in anticipation of the next step:&lt;/p&gt;

&lt;h2&gt;The actual “nudge”&lt;/h2&gt;

&lt;p&gt;Ok, so we know whose issues and PR reviews are overdue.  How do we alert them?  At unSkript, the internal team uses Slack.  I have created a table that maps each GitHub username to their Slack ID (that’s the odd-looking variable above).&lt;/p&gt;

&lt;p&gt;With the slack ID, I can now send a message to the team channel:&lt;/p&gt;

&lt;pre&gt;If you are listed below, can you please review the Pull requests next to your name? They have been open for 14 days.

&amp;lt;@U01UG9DRR7D&amp;gt;, please review pull requests [267, 153].
&amp;lt;@U03QU1K184X&amp;gt;, please review pull requests [267, 153].

&lt;/pre&gt;

&lt;p&gt;Prefixing the ID with @ creates a mention in Slack, so these two users have just been nudged to go into GitHub and take a look at the work that is being forgotten.&lt;/p&gt;
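&lt;p&gt;Composing that message is a loop over the reviewer-to-PRs mapping. A sketch – the Slack IDs mirror the example above:&lt;/p&gt;

```python
def nudge_message(prs_by_slack_id, threshold_days=14):
    """Compose a Slack message that @-mentions each overdue reviewer."""
    lines = [
        "If you are listed below, can you please review the Pull requests "
        f"next to your name? They have been open for {threshold_days} days.",
        "",
    ]
    for slack_id, prs in prs_by_slack_id.items():
        lines.append(f"<@{slack_id}>, please review pull requests {prs}.")
    return "\n".join(lines)

print(nudge_message({"U01UG9DRR7D": [267, 153], "U03QU1K184X": [267, 153]}))
```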

&lt;h2&gt;Automating the Nudge!&lt;/h2&gt;

&lt;p&gt;Using the Enterprise version (or the &lt;a href="https://us.app.unskript.io/"&gt;free trial&lt;/a&gt;) of unSkript, you can schedule each RunBook.  The RunBook I created for Awesome-CloudOps-Automation runs every Wednesday morning, alerting the team that there are issues and pull requests that have been left behind and should be resolved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v_gUwbP_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.40.41.jpg%3Fresize%3D945%252C142%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v_gUwbP_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/04/Screenshot-2023-04-04-at-12.40.41.jpg%3Fresize%3D945%252C142%26ssl%3D1" alt="Slack Message" width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How do you keep your GitHub issues and pull requests chugging along?  Do you have suggestions on how you might improve this RunBook?  We’d love to hear about it in the &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation"&gt;unSkript Slack channel&lt;/a&gt;. Interested in trying out GitHub nudges with your team?  All of the Actions described above are in our &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation"&gt;open source repository&lt;/a&gt; (Docker instructions are in the readme), and in our &lt;a href="https://us.app.unskript.io/"&gt;free trial&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>github</category>
      <category>intelligentautomation</category>
      <category>runbook</category>
    </item>
    <item>
      <title>Cloud Costs: Charting Daily EC2 Usage and Cost</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Wed, 22 Mar 2023 09:25:44 +0000</pubDate>
      <link>https://dev.to/unskript/cloud-costs-charting-daily-ec2-usage-and-cost-5ahp</link>
      <guid>https://dev.to/unskript/cloud-costs-charting-daily-ec2-usage-and-cost-5ahp</guid>
      <description>&lt;p&gt;When you buy a t-shirt at the store, you (generally) pay the same price whether it is an XS or a 2XL.  In the cloud, the size of the virtual machine you select has very large implications on the cost.  At AWS, launching a t2.nano instance costs $0.0058 an hour – or 14 cents a day – while a t3.2xlarge instance is $0.4628 an hour – or $11.11 a day.&lt;/p&gt;

&lt;p&gt;When sizing a new system, it is common to go a “bit larger” in size to ensure that the service performs well.  This has worked well for AWS, as &lt;a href="https://www.cnbc.com/2021/09/05/how-amazon-web-services-makes-money-estimated-margins-by-service.html"&gt;over 50% of their revenue&lt;/a&gt; comes from EC2 instances. In a &lt;a href="https://unskript.com/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/"&gt;recent post&lt;/a&gt;, we built an automated RunBook to examine our daily spend for each AWS product, and EC2 (the green line) is far and away our biggest cost center:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7EmHDpUV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.03.59.jpg%3Fresize%3D300%252C153%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7EmHDpUV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.03.59.jpg%3Fresize%3D300%252C153%26ssl%3D1" alt="AWS costs over the last 7 days" width="300" height="153"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;When looking at your highest cost center – you may wish to drill deeper into what these expenses are.  In this post, we’ll break down our daily EC2 spend by the types of instances that are running.&lt;/p&gt;

&lt;h2&gt;Creating the RunBook&lt;/h2&gt;

&lt;p&gt;In our last post, we built a RunBook that used the AWS Cost and Usage report to break down unSkript’s costs by product. By running the report daily, we could chart our daily spend per product, and if the day-over-day change exceeded a threshold, we could send an alert.&lt;/p&gt;

&lt;p&gt;To study our daily EC2 usage and spend, we will use the same Cost and Usage report, and the RunBook is essentially the same – just with a different SQL query into the table.  Rather than recreate the RunBook, we will simply Duplicate the RunBook in the unSkript UI, and save it with a new name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gkCE3_F6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-14.21.11.jpg%3Fresize%3D300%252C218%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gkCE3_F6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-14.21.11.jpg%3Fresize%3D300%252C218%26ssl%3D1" alt="Duplicating a RunBook" width="300" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run an AWS Redshift query, we need a SecretArn, the SQL query, the AWS Region, and the Redshift cluster and database details.&lt;/p&gt;

&lt;p&gt;The RunBook generates these all for us, but we need to update the SqlQuery variable to query EC2 data:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sqlQuery = f"SELECT date_part(day, cast(lineitem_usagestartdate as date)) as day, product_instancetype,SUM(lineitem_usageamount)::numeric(37, 4) AS usage_hours, SUM((lineitem_unblendedcost)::numeric(37,4)) AS usage_cost FROM {tableName} WHERE length(lineitem_usagestartdate)&amp;amp;gt;8 AND product_productfamily = 'Compute Instance' AND pricing_unit IN ('Hours', 'Hrs') GROUP BY day, product_instancetype ORDER BY 1 DESC, 3 DESC, 2 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next few steps of this RunBook are unchanged from the Product Cost RunBook:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Vom7xEJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot_2023-03-20_at_14_27_50.jpg%3Fresize%3D945%252C546%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Vom7xEJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot_2023-03-20_at_14_27_50.jpg%3Fresize%3D945%252C546%26ssl%3D1" alt="" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We take the ARN and the query, run the query, and pull the results into the RunBook.&lt;/p&gt;

&lt;p&gt;We must now change the chart, since the inputs are different: our columns are now “product_instancetype” and the y values are “usage_cost.”&lt;/p&gt;

&lt;p&gt;Charting this data helps us see which instance types are costing the most money:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7IvEYslu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-14.52.44.jpg%3Fresize%3D945%252C460%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7IvEYslu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-14.52.44.jpg%3Fresize%3D945%252C460%26ssl%3D1" alt="Daily EC2 costs by size" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our daily costs are very much proportional to size, with the 2xLarge and Large instances accounting for ~$20/day.  It is interesting to note that earlier this month, our t2.large usage went down at the same time our t2.micro usage grew – this could be interpreted as the resizing of an EC2 instance that was too large.&lt;/p&gt;



&lt;p&gt;Finally, we can build alerts to tell us if any of our costs jump by over $1 a day, or 10%.  If the costs do jump, we can send the EC2 usage chart to Slack using the &lt;em&gt;Send Image To Slack&lt;/em&gt; Action (coming this week to our Open Source).  We also send this image every Monday for a historical record:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9odRHyh5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-15.16.00.jpg%3Fresize%3D874%252C546%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9odRHyh5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-15.16.00.jpg%3Fresize%3D874%252C546%26ssl%3D1" alt="Slack message with the chart" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;When it comes to FinOps – the team in charge of understanding and accounting for your cloud bill – one of the most critical tools to have is observability into your daily spend. A large change might be fine, but it is good to know about it right away and double-check that the increase is valid – thereby potentially avoiding a large bill at the end of the month.&lt;/p&gt;



&lt;p&gt;In our &lt;a href="https://unskript.com/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting/"&gt;previous post&lt;/a&gt;, we looked at all AWS products for daily cost spend, and in this post, we dug deeper to better understand our EC2 spend – the largest percentage of our AWS bill.  Are you interested in trying out our alerting in unSkript?  Try out our Open Source Docker build (and give us a star!), or sign up for a &lt;a href="https://us.app.unskript.io/"&gt;free cloud trial&lt;/a&gt;.  Is there a segment of YOUR AWS bill that you’d like to investigate with us?  Reach out in our &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation"&gt;Slack channel&lt;/a&gt;, and we’d be happy to help you create a RunBook for your use case!&lt;/p&gt;

</description>
      <category>blog</category>
      <category>cloudcosts</category>
    </item>
    <item>
      <title>Keeping your Cloud Costs in Check: Automated AWS Cost Charts and Alerting</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Mon, 20 Mar 2023 16:03:51 +0000</pubDate>
      <link>https://dev.to/unskript/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting-12p5</link>
      <guid>https://dev.to/unskript/keeping-your-cloud-costs-in-check-automated-aws-cost-charts-and-alerting-12p5</guid>
      <description>&lt;p&gt;Building and deploying infrastructure in the cloud is (by design) a very simple process. If a team is not being careful in their deployments, it can also become an &lt;strong&gt;expensive&lt;/strong&gt; process. The interplay between finance teams and cloud teams has led to a new job function – FinOps. What is FinOps? This team (or team member) works with developer teams to understand cloud needs, negotiates better prices with cloud operators, and can help translate cloud expenses and needs to the finance team.&lt;/p&gt;

&lt;p&gt;However, many companies working in the cloud don’t have the luxury of allocating a team member to understanding cloud costs, and it is left to the dev teams to do their best to mitigate surprise bills.  Without a full-time FinOps professional, teams need tooling to help them better understand and control their cloud bills.&lt;/p&gt;

&lt;p&gt;The FinOps Institute has defined six domains for a FinOps team:&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Understanding Cloud Usage and Cost&lt;/li&gt;
&lt;li&gt;Performance Tracking and Benchmarking&lt;/li&gt;
&lt;li&gt;Real Time Decision Making&lt;/li&gt;
&lt;li&gt;Cloud Rate Optimization&lt;/li&gt;
&lt;li&gt;Cloud Usage Optimization&lt;/li&gt;
&lt;li&gt;Organizational Alignment&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;In this post, we’ll describe a series of automated RunBooks that check the first three boxes (and help inform several others).  What we’ll do is build automated reporting around the AWS Cost and Usage Report (CUR).&lt;/p&gt;



&lt;h2&gt;&lt;strong&gt;The AWS CUR&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;AWS’ Cost and Usage Report is a SQL report that breaks down your AWS spend in many different ways, at a selected interval (hourly, daily, or monthly).  When generating a CUR report, a SQL file is placed into an S3 bucket at regular intervals.  To set up a CUR report for your AWS account, the AWS Documentation has a very &lt;a href="https://docs.aws.amazon.com/cur/latest/userguide/cur-create.html"&gt;nice tutorial&lt;/a&gt;.  In this post, our CUR is updated daily, but for larger projects, you may want hourly granularity.&lt;/p&gt;

&lt;p&gt;We set our CUR report to be sent into AWS Redshift.  However, the daily updates are only added to the file in S3.  To keep Redshift current, we must regularly load that file into the Redshift database.&lt;/p&gt;

&lt;p&gt;We accomplish this by building a RunBook using a few new Actions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2gxuQtvE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-15-at-23.23.20.jpg%3Fresize%3D694%252C654%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2gxuQtvE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-15-at-23.23.20.jpg%3Fresize%3D694%252C654%26ssl%3D1" alt="RunBook to update table in Redshift" width="694" height="654"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;When making a Redshift query, we need to know the Secret Manager ARN, the AWS Region, and the Redshift cluster/database.&lt;/p&gt;

&lt;p&gt;To get the Secret ARN, we use the &lt;em&gt;AWS Get Secrets Manager ARN&lt;/em&gt; Action.  This takes a secret name, and provides the ARN.  (This does require Secrets Manager permission in your IAM credential.)&lt;/p&gt;

&lt;p&gt;We then create two SQL queries programmatically.  The table in Redshift is named &lt;em&gt;awsbilling202303&lt;/em&gt; (since it is currently March 2023). To ensure that the table name is always correct, we generate the query programmatically, so that the table name always has the format &lt;em&gt;awsbilling&amp;lt;year&amp;gt;&amp;lt;month&amp;gt;&lt;/em&gt;.&lt;/p&gt;
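&lt;p&gt;As one hedged sketch (the helper name is ours, not unSkript’s), the month-stamped table name can be generated like this:&lt;/p&gt;

```python
from datetime import date

# Sketch of generating the awsbilling<year><month> table name described above;
# billing_table_name is an illustrative helper, not the RunBook's actual code.
def billing_table_name(d: date) -> str:
    # zero-pad the month so March 2023 becomes awsbilling202303
    return f"awsbilling{d.year}{d.month:02d}"

table = billing_table_name(date(2023, 3, 15))
print(f"truncate table {table}")  # -> truncate table awsbilling202303
```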

&lt;p&gt;Next, we perform two SQL commands:&lt;/p&gt;

&lt;p&gt;First, we TRUNCATE the table. This removes all of the rows, but keeps the columns:&lt;/p&gt;

&lt;pre&gt;truncate table awsbilling202303&lt;/pre&gt;

&lt;p&gt;Next, we COPY the rows from the SQL table in AWS:&lt;/p&gt;

&lt;pre&gt;copy awsbilling202303 from 's3://unskript-billing-doug/all/unskript-billing-doug/20230301-20230401/unskript-billing-doug-RedshiftManifest.json' credentials 'aws_iam_role=arn:aws:iam::&amp;lt;arn name&amp;gt;' region 'us-west-2' GZIP CSV IGNOREHEADER 1 TIMEFORMAT 'auto' manifest;&lt;/pre&gt;

&lt;p&gt;This query is provided in your S3 bucket.&lt;/p&gt;

&lt;p&gt;With the RunBook created, we can schedule this RunBook to run daily, ensuring that the AWS table is always up to date.&lt;/p&gt;

&lt;p&gt;NOTE:  I am not a database expert.  This is the “I have a hammer, so everything must be a nail” approach to updating the database.  There is probably a more nuanced query that could be run.&lt;/p&gt;

&lt;h2&gt;Building Charts and Alerts&lt;/h2&gt;

&lt;p&gt;Now that the data is being populated into Redshift daily, we can begin exploring it.  In this second RunBook, we are going to extract the daily spend for each AWS service, plot the data, and create an alert for large changes in cost.&lt;/p&gt;

&lt;p&gt;Our new RunBook begins the same way as our first RunBook – generating a SQL query and executing it at RedShift.  This time we are querying Redshift for usage costs for every AWS product:&lt;/p&gt;

&lt;pre&gt;select lineitem_productcode, 
        date_part(day, cast(lineitem_usagestartdate as date)) as day, 
        SUM((lineitem_unblendedcost)::numeric(37,4)) as cost 
from awsbilling202303 
group by lineitem_productcode, day 
order by cost desc;

&lt;/pre&gt;

&lt;p&gt;This query is then placed into a dataframe. Using this data, we can create a chart of our daily spend by AWS product:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Orj5jUH8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.03.39.jpg%3Fresize%3D945%252C496%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Orj5jUH8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.03.39.jpg%3Fresize%3D945%252C496%26ssl%3D1" alt="AWS Product Cost by day" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the same chart – just looking at the last seven days:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BWCrLf1L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.03.59.jpg%3Fresize%3D945%252C482%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BWCrLf1L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.03.59.jpg%3Fresize%3D945%252C482%26ssl%3D1" alt="AWS costs over the last 7 days" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we look back at the FinOps Institute bullet list, we are addressing the first two domains: &lt;em&gt;Understanding Cloud Usage and Cost&lt;/em&gt;, as well as &lt;em&gt;Performance Tracking and Benchmarking&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If we automate this RunBook to run daily (after the table is updated), we can generate this plot of our AWS cloud spend on a regular schedule.  In general, a daily chart with no change is not of great interest.  But if we can build an alert around our cloud usage that finds significant increases in day-to-day cost, we can attach this chart to make the cost jump easy to identify.&lt;/p&gt;

&lt;h2&gt;Building an Alert&lt;/h2&gt;

&lt;p&gt;Every organization will have different thresholds for alerting. In the following code, we examine the two previous days (March 18 and 19) and look for increases of over 5%.  Since many of the services in this chart have very low spend rates, we add the additional filter that the change must be over $1. We loop over each service with the following:&lt;/p&gt;

&lt;pre&gt;# delta is the fractional day-over-day change computed earlier in the RunBook
if abs(todayCost-yesterdayCost) &amp;gt; 1:
   if delta &amp;gt; .05:
       #print(instance, delta, dfpivot.at[today, instance], dfpivot.at[yesterday, instance])
       bigchange[instance] = {"delta":delta, "todayCost":todayCost, "yesterdayCost":yesterdayCost}
       alertText = '@here There has been a large change in AWS Costs'
       alert = True
if date.today().weekday() == 0:
   alertText = 'Today is Monday, Here is the last week of AWS Costs'
   alert = True&lt;/pre&gt;

&lt;p&gt;If any of the changes in cost trigger this alert, we send an image on Slack with an “@here, there’s been a large change in AWS Spending” message.  We also send the chart every Monday, so that there is a visual history of AWS spending.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DtPe_Aon--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.32.19.jpg%3Fresize%3D945%252C510%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DtPe_Aon--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/03/Screenshot-2023-03-20-at-11.32.19.jpg%3Fresize%3D945%252C510%26ssl%3D1" alt="Slack message with chart of AWS spending" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When beginning to study and alert on your cloud spending, it is important to start with simple reports and alerting. This use case of daily spend by product is a great start on our journey into the FinOps Institute’s next bullet points: &lt;em&gt;Real Time Decision Making&lt;/em&gt; and &lt;em&gt;Cloud Usage Optimization&lt;/em&gt;.&lt;/p&gt;



&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Many companies feel the monthly dread of “how big will my Cloud bill be &lt;strong&gt;this&lt;/strong&gt; month?” By charting and alerting on your daily spend across all Cloud products, your team is less likely to be surprised by the bill at the end of the month. This data can also be used to mitigate large changes that often result in bill “surprises.”&lt;/p&gt;

&lt;p&gt;Using unSkript and the AWS Cost and Usage report, you can begin (or continue) your FinOps journey by better understanding where your costs are coming from and how they are changing day to day.  Watch our blog for more posts on Cloud CostOps and how you can monitor your AWS bill.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>blog</category>
      <category>cloudcosts</category>
      <category>cloudops</category>
    </item>
    <item>
      <title>AWS Service Quotas: Discovering Where you Stand</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Fri, 17 Feb 2023 19:31:01 +0000</pubDate>
      <link>https://dev.to/unskript/aws-service-quotas-discovering-where-you-stand-4kc2</link>
      <guid>https://dev.to/unskript/aws-service-quotas-discovering-where-you-stand-4kc2</guid>
      <description>&lt;p&gt;We’ve written a few posts in the last week about AWS Service Quotas.  These are restrictions on services that are set by AWS (but can often be increased).&lt;/p&gt;

&lt;p&gt;In our &lt;a href="https://unskript.com/aws-service-quotas-what-are-they-and-how-can-i-increase-them/" rel="noopener noreferrer"&gt;first post&lt;/a&gt;, we looked at new Actions in unSkript that can be used to determine quota values and request a quota increase.  In this post, we’ll take the Actions a step further, and build an Action that compares the AWS quota to actual usage – generating an alert when a threshold is met.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;To begin, we will need to consider how the Action will work.  For any given service, we’ll need to query AWS at least twice:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the Quota Limit.&lt;/li&gt;
&lt;li&gt;Determine the usage of a service.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 1 always requires a single call.  However, Step 2 can require many queries to complete the usage count.  In the simplest case, we can do just one query:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Example: &lt;em&gt;Client VPN Endpoints per Region&lt;/em&gt;. If we query AWS for the list of endpoints in a region, we can simply get the length of the response to know how many endpoints exist.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, there are times where there will be multiple queries:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Example: &lt;em&gt;Routes per Client VPN Endpoint&lt;/em&gt;. In the first query, we get the list of VPN endpoints.  In step 2, we must query every VPN endpoint to get the count of Routes.  If there are 4 VPN endpoints, a total of 5 calls will be made (one call to get the list of 4 VPN endpoints, and then one call to each of the four endpoints).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To account for these two options, we create an input Dictionary.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Simple, one pass Dictionary
&lt;/h2&gt;

&lt;p&gt;For the &lt;em&gt;Describe AMIs&lt;/em&gt; call (only one Usage query is required), the Dict looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{'QuotaName':'AMIs','ServiceCode':'ec2','QuotaCode': 'L-B665C33B',
'ApiName': 'describe_images', 'ApiFilter' : '[]','ApiParam': 'Images',
'initialQuery': ''},

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To get the Quota, we need the ServiceCode and the QuotaCode (if you need to obtain these values, you can use the unSkript Action, or you can refer to the table in the &lt;a href="https://docs.unskript.com/unskript-product-documentation/lists/test" rel="noopener noreferrer"&gt;unSkript Docs&lt;/a&gt;).  The one usage API call will be made to the &lt;em&gt;describe_images&lt;/em&gt; endpoint, and will retrieve a list of &lt;em&gt;Images&lt;/em&gt;.  Counting the length of this list gives us our usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Two Pass Dictionary
&lt;/h2&gt;

&lt;p&gt;To determine the &lt;em&gt;Attachments per transit gateway&lt;/em&gt;, we must again get the quota from the Service Code and Quota Code.  To get the count of attachments per transit gateway, we use the &lt;em&gt;initialQuery&lt;/em&gt; array to make a first query.&lt;/p&gt;

&lt;p&gt;The first query probes the &lt;em&gt;describe_transit_gateways&lt;/em&gt; endpoint to get a list of &lt;em&gt;TransitGateways&lt;/em&gt;. In the second set of calls, we call the &lt;em&gt;describe_transit_gateway_attachments&lt;/em&gt; endpoint for each transit gateway. The filter has a string VARIABLE that is replaced with the &lt;em&gt;TransitGatewayId&lt;/em&gt; for each gateway – ensuring that each call is made to a different transit gateway.  We can then count the length of the response to find out how many attachments are in each transit gateway.  If we have 12 transit gateways, we will have 12 usage reports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{'QuotaName':'Attachments per transit gateway','ServiceCode':'ec2','QuotaCode': 'L-E0233F82',
'ApiName': 'describe_transit_gateway_attachments', 'ApiFilter' : '[{"Name": "transit-gateway-id","Values": ["VARIABLE"]}]',
'ApiParam': 'TransitGatewayAttachments',
'initialQuery': '["describe_transit_gateways","TransitGateways", "TransitGatewayId"]'},

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
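&lt;p&gt;The VARIABLE substitution works as a simple string replacement before each call. A hedged sketch (the gateway IDs are made up):&lt;/p&gt;

```python
import json

# Sketch of the VARIABLE substitution described above: the ApiFilter template
# from the dictionary yields one concrete filter per TransitGatewayId.
# The gateway IDs are illustrative.
api_filter = '[{"Name": "transit-gateway-id","Values": ["VARIABLE"]}]'
gateway_ids = ["tgw-0aaa", "tgw-0bbb"]

filters = [json.loads(api_filter.replace("VARIABLE", gw)) for gw in gateway_ids]
print(filters[0])  # -> [{'Name': 'transit-gateway-id', 'Values': ['tgw-0aaa']}]
```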



&lt;h2&gt;
  
  
  Outliers
&lt;/h2&gt;

&lt;p&gt;For most of our quota measurements, these two approaches work well.  However, with over 2,600 different quotas inside AWS, not all of them fit neatly into these two buckets. For example, &lt;em&gt;Multicast Network Interfaces per transit gateway&lt;/em&gt; requires 3 calls: Transit gateways -&amp;gt; Multicast Domains -&amp;gt; Domain attachments.&lt;/p&gt;

&lt;p&gt;Others require custom code to iterate over the results – an extra if statement in the code to properly account for their usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Action Format
&lt;/h2&gt;

&lt;p&gt;We can differentiate between the two types of query by looking at the ‘initialQuery’ parameter. If it is empty, we do the simple query; otherwise, we do the double query (with a for loop that queries each initial result).  For outliers, we can add specific code inside the if/else:&lt;/p&gt;

&lt;p&gt;(this is simplified a bit from what actually runs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in table:
    # get the quota limit
    sq = sqClient.get_service_quota(ServiceCode=i.get('ServiceCode'), QuotaCode=i.get('QuotaCode'))
    quotaValue = sq['Quota']['Value']

    # get the usage
    if i.get('initialQuery') == '':
        # simple, one pass query
        res = aws_get_paginator(ec2Client, i.get('ApiName'), i.get('ApiParam'), Filters=filterList)
        count = len(res)
        percentage = count/quotaValue
        combinedData = {'Quota Name': i.get('QuotaName'), 'Limit': quotaValue, 'used': count, 'percentage': percentage}
        result.append(combinedData)
        print(combinedData)
    else:
        # two pass: run the initial query, then one usage query per result
        res = aws_get_paginator(ec2Client, i.get('ApiName'), i.get('ApiParam'), Filters=filterList)
        for j in res:
            # build the filter query with some simple substitutions
            res2 = aws_get_paginator(ec2Client, i.get('ApiName'), i.get('ApiParam'), Filters=filterList)
            count = len(res2)
            percentage = count/quotaValue
            objectResult = {j[initialQueryFilter]: count}
            quotaName = f"{i.get('QuotaName')} for {j[initialQueryFilter]}"
            combinedData = {'Quota Name': quotaName, 'Limit': quotaValue, 'used': count, 'percentage': percentage}
            result.append(combinedData)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Action Output
&lt;/h2&gt;

&lt;p&gt;Once all of the values have been collected, the percentage utilized is compared to the warning percentage input. If the utilization is over the requested percentage, the Service data will be added to the output of the Action. With this information, the SRE responsible can decide the correct Action to take – either prune away some usage, or request an increase from AWS.&lt;/p&gt;
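&lt;p&gt;The comparison itself reduces to a small filter. A sketch under our own naming (not the Action’s real code):&lt;/p&gt;

```python
# Sketch of the warning check described above: report a quota only when
# used/Limit exceeds the warning percentage. The sample entries are illustrative.
def over_warning(quotas, warning=0.5):
    return [q for q in quotas if q["used"] / q["Limit"] > warning]

sample = [
    {"Quota Name": "VPCs Per Region", "Limit": 20.0, "used": 13},
    {"Quota Name": "NAT gateways per Availability Zone", "Limit": 5.0, "used": 2},
]
print(over_warning(sample))  # only VPCs Per Region clears the 50% warning
```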

&lt;p&gt;For example, testing all VPC Service quotas with a warning of 50% utilization gives the following data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{'Instances': [{'Limit': 20.0,
                'Quota Name': 'VPCs Per Region',
                'percentage': 0.65,
                'used': 13},
               {'Limit': 20.0,
                'Quota Name': 'Internet gateways per Region',
                'percentage': 0.6,
                'used': 12},
               {'Limit': 5.0,
                'Quota Name': 'NAT gateways per Availability Zone',
                'percentage': 0.8,
                'used': 4},
               {'Limit': 50.0,
                'Quota Name': 'Routes per route table',
                'percentage': 0.5,
                'used': 25},
               {'Limit': 20.0,
                'Quota Name': 'Rules per network ACL',
                'percentage': 0.65,
                'used': 13}]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Availability Today
&lt;/h2&gt;

&lt;p&gt;As we publish this article, we have 2 Actions heading into unSkript:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A general &lt;em&gt;AWS_ServiceQuota Compare&lt;/em&gt; Action that has the basic framework described above. This will likely require customization for each Quota you wish to test against.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;AWS VPC Service Quota Warning&lt;/em&gt;. This Action takes all of the VPC service quotas (as of February 2023) and tests them against your infrastructure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Coming Soon:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;AWS EC2 Service Quota Warning&lt;/em&gt;. This Action will test your infrastructure against all EC2 Service Quotas, and warn you if you are approaching the quota threshold.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’re really excited to see how people use these Service Quota alerts in their infrastructure.  If you have questions – feel free to reach out in our &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation" rel="noopener noreferrer"&gt;Slack Community&lt;/a&gt;.  If you haven’t tried unSkript – try our &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="noopener noreferrer"&gt;OSS Docker Container&lt;/a&gt;, or use our &lt;a href="https://us.app.unskript.io/" rel="noopener noreferrer"&gt;free trial online!&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>intelligentautomatio</category>
      <category>otherposts</category>
      <category>servicehealth</category>
    </item>
    <item>
      <title>AWS Service Quotas, or AWS has a LOT of Services!</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Wed, 15 Feb 2023 18:03:21 +0000</pubDate>
      <link>https://dev.to/unskript/aws-service-quotas-or-aws-has-a-lot-of-services-cp4</link>
      <guid>https://dev.to/unskript/aws-service-quotas-or-aws-has-a-lot-of-services-cp4</guid>
      <description>&lt;p&gt;In our &lt;a href="https://unskript.com/aws-service-quotas-what-are-they-and-how-can-i-increase-them/" rel="noopener noreferrer"&gt;recent post&lt;/a&gt;, we unveiled unSkript Actions that can query AWS Service Quotas.  Service quotas are limits imposed by AWS on how many times a certain AWS feature can be used.  Most of them are adjustable with a simple request, and our post showed how to determine your Service Quota values, AND request an increase using unSkript.&lt;/p&gt;

&lt;p&gt;In this post, I thought it might be fun to dig into AWS Service Quotas a bit deeper, and get a general idea of how Service Quotas fit into the AWS landscape.&lt;/p&gt;

&lt;p&gt;To get the Service Quota value, you need to know the Service Name, and the Quota Code.  But how do you get these values?&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Service Names
&lt;/h2&gt;

&lt;p&gt;We can get all of the AWS Service Names using the Service Node endpoint. Running this call, we find that there are 221 named services in AWS (as of Feb 15, 2023).  AWS gets a &lt;a href="https://ben11kehoe.medium.com/dear-aws-we-need-to-talk-about-service-naming-d33ea68027d8" rel="noopener noreferrer"&gt;lot&lt;/a&gt; &lt;a href="https://expeditedsecurity.com/aws-in-plain-english/" rel="noopener noreferrer"&gt;of&lt;/a&gt; &lt;a href="https://twitter.com/QuinnyPig/status/1070451608050315264" rel="noopener noreferrer"&gt;flak&lt;/a&gt; for their naming conventions, but with so many services, of course some are going to have sub-optimal names. Lucky for us, we’ll be using the ServiceCode, and not the Service Name, so “&lt;strong&gt;AWS Systems Manager Incident Manager Contacts&lt;/strong&gt;” is simply “&lt;em&gt;ssm-contacts&lt;/em&gt;” and “&lt;strong&gt;AWS IAM Identity Center (successor to AWS Single Sign-On)&lt;/strong&gt;” is just “&lt;strong&gt;sso&lt;/strong&gt;.”&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Service Quotas
&lt;/h2&gt;

&lt;p&gt;Next, we can run these 221 named services against the “List Service Quotas” endpoint to get all of the Service Quotas for all of the Services.  Only 113 AWS Services (51%) have features with a service quota.  Even with just half of the services having quotas, there are a &lt;strong&gt;LOT&lt;/strong&gt; of preset quotas in AWS: 2,629 of them, in fact! (Feb 15, 2023)&lt;/p&gt;
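&lt;p&gt;Tallies like the table below can be produced by counting quota entries per ServiceCode. A sketch over a tiny illustrative sample:&lt;/p&gt;

```python
from collections import Counter

# Count quotas per ServiceCode, as done for the per-service table; the quota
# rows here are a tiny illustrative sample, not the full set of 2,629.
quotas = [
    {"ServiceCode": "ec2", "QuotaName": "AMIs"},
    {"ServiceCode": "ec2", "QuotaName": "Attachments per transit gateway"},
    {"ServiceCode": "vpc", "QuotaName": "VPCs Per Region"},
]

counts = Counter(q["ServiceCode"] for q in quotas)
for code, n in counts.most_common():
    print(code, n)
```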

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service Code&lt;/th&gt;
&lt;th&gt;Count of  Quota Name&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;sagemaker&lt;/td&gt;
&lt;td&gt;702&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ec2&lt;/td&gt;
&lt;td&gt;131&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iotwireless&lt;/td&gt;
&lt;td&gt;102&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kinesisvideo&lt;/td&gt;
&lt;td&gt;82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;rekognition&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;personalize&lt;/td&gt;
&lt;td&gt;66&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cases&lt;/td&gt;
&lt;td&gt;65&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;braket&lt;/td&gt;
&lt;td&gt;62&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;elasticmapreduce&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;geo&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;comprehend&lt;/td&gt;
&lt;td&gt;56&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kms&lt;/td&gt;
&lt;td&gt;53&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;lookoutmetrics&lt;/td&gt;
&lt;td&gt;44&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sns&lt;/td&gt;
&lt;td&gt;44&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;logs&lt;/td&gt;
&lt;td&gt;41&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;apigateway&lt;/td&gt;
&lt;td&gt;38&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iotcore&lt;/td&gt;
&lt;td&gt;37&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;chime&lt;/td&gt;
&lt;td&gt;36&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;forecast&lt;/td&gt;
&lt;td&gt;33&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ebs&lt;/td&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;glue&lt;/td&gt;
&lt;td&gt;28&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;acm-pca&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fsx&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;robomaker&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;rds&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;elasticloadbalancing&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dataexchange&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mediapackage&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;monitoring&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cognito-idp&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;elasticfilesystem&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;vpc&lt;/td&gt;
&lt;td&gt;21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fis&lt;/td&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;servicecatalog&lt;/td&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;omics&lt;/td&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ssm-contacts&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;application-autoscaling&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ssm&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cassandra&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;frauddetector&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;imagebuilder&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;servicequotas&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;workspaces&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;textract&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;lex&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;events&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;amplify&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;appmesh&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;route53resolver&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iottwinmaker&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;databrew&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;medialive&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;rolesanywhere&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mgn&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;access-analyzer&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;eks&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;groundstation&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;nimble&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dms&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dynamodb&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;workspaces-web&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ecr&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ivschat&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;profile&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;batch&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;proton&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mediastore&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;drs&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;appconfig&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;pinpoint&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;schemas&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cloudformation&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;athena&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;m2&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ivs&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ec2-ipam&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;es&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;refactor-spaces&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kafka&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;resiliencehub&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;app-integrations&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;network-firewall&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;resource-explorer-2&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWSCloudMap&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;qldb&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ssm-sap&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;mediaconnect&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;auditmanager&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sms&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;lambda&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;guardduty&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ram&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ses&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;autoscaling&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;fargate&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;cloudhsm&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;outposts&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;airflow&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;dlm&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;license-manager&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ssm-guiconnect&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;connect-campaigns&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;connect&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;macie2&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;iotanalytics&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;emr-serverless&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;codeguru-profiler&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;autoscaling-plans&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;firehose&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;codeguru-reviewer&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kinesis&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;grafana&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ec2fastlaunch&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As Machine Learning can be a very expensive process, it is no surprise that Amazon SageMaker leads the pack with over 700 service quotas.  Second in line is an oldie but a goodie: Amazon EC2 (debuted in 2006!) with 131 quotas.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do we know about quotas?
&lt;/h2&gt;

&lt;p&gt;The longest quota name belongs to Rekognition, and it is quite a mouthful:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transactions per second per account for the Amazon Rekognition Image personal protective equipment operation DetectProtectiveEquipment&lt;/strong&gt;, with a quota of 5.  That’s a lot of words to say that the service can scan 5 frames per second to identify a helmet, face mask, or gloves on anyone in the picture.  This quota can be adjusted, if desired.&lt;/p&gt;

&lt;p&gt;The largest Quota is &lt;strong&gt;ElasticFileSystem’s file size&lt;/strong&gt; , weighing in at 52673613135872 bytes.  (Which, if I did my math correctly, is 47.9 TB).  This is a hard limit and cannot be adjusted.&lt;/p&gt;
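A quick back-of-the-envelope check of that conversion (binary units, dividing by 1024 four times):

```python
# Convert the ElasticFileSystem file-size quota from bytes to terabytes
# (binary TiB, i.e. bytes divided by 1024 four times).
quota_bytes = 52673613135872

tib = quota_bytes / 1024 ** 4
print(round(tib, 1))  # 47.9
```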

&lt;p&gt;The second largest quota is the &lt;strong&gt;Maximum number of rows in a dataset&lt;/strong&gt; for &lt;strong&gt;Amazon Forecast,&lt;/strong&gt; with a soft limit of 3 billion rows.  You can request that this number be increased.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Funskript.com%2Fwp-content%2Fuploads%2F2023%2F02%2F3billion.jpeg%3Fresize%3D500%252C500%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi0.wp.com%2Funskript.com%2Fwp-content%2Fuploads%2F2023%2F02%2F3billion.jpeg%3Fresize%3D500%252C500%26ssl%3D1" alt="3 billion rows meme" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Quota adjustment
&lt;/h2&gt;

&lt;p&gt;Of the 2,629 quotas, 2,003 can be adjusted (76%), and 626 (24%) cannot be changed.&lt;/p&gt;
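A quick sanity check of those figures:

```python
# Verify the adjustability split quoted above.
total = 2629
adjustable = 2003
fixed = total - adjustable

print(fixed)                            # 626
print(round(adjustable / total * 100))  # 76
print(round(fixed / total * 100))       # 24
```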

&lt;h2&gt;
  
  
  Quota Units
&lt;/h2&gt;

&lt;p&gt;Only 87 of our quotas have units (3.3%).  20 are time based, and the remaining 67 are data sizes (of varying magnitude):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Unit&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Millisecond&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Second&lt;/td&gt;
&lt;td&gt;18&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These vary from 200 ms (&lt;strong&gt;API Gateway Maximum Integration Timeout&lt;/strong&gt;) to 30 days: &lt;strong&gt;SageMaker’s Longest run time for an AutoML job from creation to termination&lt;/strong&gt;. (In case you were wondering, 30 days is also 2,592,000 seconds.)&lt;/p&gt;
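For the curious, the seconds figure checks out:

```python
# 30 days expressed in seconds.
seconds = 30 * 24 * 60 * 60
print(f"{seconds:,}")  # 2,592,000
```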

&lt;p&gt;When it comes to units, there’s nothing like arbitrarily multiplying by 1024 to change the units (and I see you, Megabits, with your extra x8… but these are all throughput, so I’ll give that a pass).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Unit&lt;/th&gt;
&lt;th&gt;Count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Bytes&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kilobytes&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Megabits&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Megabytes&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gigabytes&lt;/td&gt;
&lt;td&gt;22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Terabytes&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The smallest value is &lt;strong&gt;Lookout Metrics Value Length&lt;/strong&gt; at 40 Bytes, and the largest is &lt;strong&gt;RDS Total storage for all DB instances&lt;/strong&gt; at 100,000 GB (or 97 TB).&lt;/p&gt;

&lt;p&gt;The winner for the oddest size measurement goes to &lt;strong&gt;&lt;em&gt;Elasticfilesystem’s Throughput per NFS client&lt;/em&gt;&lt;/strong&gt; at 524.288 MegaBytes.&lt;/p&gt;
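The odd number turns out to be a clean unit mismatch – 524.288 decimal megabytes is exactly 500 binary mebibytes:

```python
# 524.288 MB (decimal megabytes) expressed in MiB (binary mebibytes).
quota_bytes = 524_288_000      # 524.288 * 1000^2 bytes
mib = quota_bytes / 1024 ** 2  # bytes to mebibytes
print(mib)  # 500.0
```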

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;While looking at the giant list of AWS Service Quotas, I thought it might be fun to look at the data more closely. It remains to be seen whether the unSkript team will continue to let me use PivotTables to look at data.&lt;/p&gt;

&lt;p&gt;More importantly, the list of Service quotas – with the Service Code and Quota Code are all in one table, and we have published the Feb 15, 2023 list in the &lt;a href="https://docs.unskript.com/unskript-product-documentation/lists/test" rel="noopener noreferrer"&gt;unSkript Docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re interested in learning more about unSkript, join our &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation" rel="noopener noreferrer"&gt;Slack Community&lt;/a&gt;, or check out our &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="noopener noreferrer"&gt;Open Source&lt;/a&gt; repo, you can run unSkript Open Source locally with Docker!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>leadership</category>
      <category>otherposts</category>
    </item>
    <item>
      <title>AWS Service Quotas: What are they, and how can I increase them?</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 14 Feb 2023 00:35:30 +0000</pubDate>
      <link>https://dev.to/unskript/aws-service-quotas-what-are-they-and-how-can-i-increase-them-hjd</link>
      <guid>https://dev.to/unskript/aws-service-quotas-what-are-they-and-how-can-i-increase-them-hjd</guid>
      <description>&lt;p&gt;Everywhere we go, there are limits – speed limits, weight limits, capacity limits.  AWS is no different.  For many of the services offered by AWS, there is a service quota limit – the maximum number of that feature each account holder is allowed to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Can we determine the limits imposed by AWS?&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Yes, the AWS &lt;em&gt;service-quotas&lt;/em&gt; API endpoint can tell us what quotas exist and their values.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Can we ask for a limit increase?&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;For most of the quotas, you can ask for your limit to be increased.&lt;/p&gt;

&lt;p&gt;In this post, we’ll use Actions in unSkript to automate work around Service Quotas – determining limits, and asking for an increase to a quota.  All of these Actions are soon to be a part of the unSkript &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="noopener noreferrer"&gt;Awesome Runbooks GitHub repository.&lt;/a&gt; They’ll also be included in all instances of unSkript.&lt;/p&gt;

&lt;h2&gt;
  
  
  Probing the AWS Service Quota Surface
&lt;/h2&gt;

&lt;p&gt;The first step in understanding AWS Service Quotas is to look at the API.  There are a few interesting endpoints, but in order to query a specific service or request a quota increase, we will need three items:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Region&lt;/li&gt;
&lt;li&gt;Service Code&lt;/li&gt;
&lt;li&gt;Quota Code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We probably have a good idea what AWS Region our stack is deployed to – that’s the easy part. We still need to understand and figure out the Service Code and the Quota Code for our service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Service Codes
&lt;/h2&gt;

&lt;p&gt;Service codes describe the top level services that AWS offers (think S3, EC2, etc.) To obtain the AWS Service Codes, we can utilize the &lt;em&gt;list-services&lt;/em&gt; endpoint in our Action named &lt;strong&gt;AWS Get All Service Names v1&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def aws_get_all_service_names(handle, region: str) -&amp;gt; List:
    # Page through the full list of AWS services known to Service Quotas.
    sqClient = handle.client('service-quotas', region_name=region)
    resPaginate = aws_get_paginator(sqClient, 'list_services', 'Services',
        PaginationConfig={
            'MaxItems': 1000,
            'PageSize': 100
        })
    return resPaginate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lists all of the AWS Services by name and their service code. At the time of this writing, there are 220 ( &lt;strong&gt;edit&lt;/strong&gt; – it is now 221!) services in the output.&lt;/p&gt;
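The aws_get_paginator call used above is an unSkript helper whose implementation isn’t shown in this post. A minimal stand-in (an assumption for illustration, not the actual library code) would simply flatten every page of a paginated boto3-style call:

```python
def aws_get_paginator(client, method, key, **kwargs):
    """Collect the `key` field from every page of a paginated API call.

    `client` is assumed to expose boto3's get_paginator() interface.
    """
    paginator = client.get_paginator(method)
    results = []
    for page in paginator.paginate(**kwargs):
        results.extend(page.get(key, []))
    return results


# A tiny in-memory stand-in for a boto3 client, for illustration only.
class _FakePaginator:
    def paginate(self, **kwargs):
        yield {"Services": [{"ServiceCode": "ec2"}]}
        yield {"Services": [{"ServiceCode": "s3"}]}

class _FakeClient:
    def get_paginator(self, method):
        return _FakePaginator()

services = aws_get_paginator(_FakeClient(), "list_services", "Services")
print(len(services))  # 2
```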

&lt;h2&gt;
  
  
  Quota Codes
&lt;/h2&gt;

&lt;p&gt;Quota codes are available in the AWS Console interface, but it requires a lot of digging to get them (and sometimes it is easiest to find them in the url string). An easier way is via the API:&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;list_service_quotas&lt;/em&gt; endpoint takes a Service Code (from the first list), and outputs all of the service quotas for that service. The &lt;strong&gt;AWS Get Service Quotas for a Service v1&lt;/strong&gt; Action obtains the codes, given a Service Code.&lt;/p&gt;

&lt;p&gt;Setting the Service Code to “ec2”, and the Region to “us-west-2”, we get 129 different Quotas (and the quota code for each one of them).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def aws_get_service_quotas_for_service(handle, service_code: str, region: str) -&amp;gt; List:
    # (Signature reconstructed from the Action description.)
    sqClient = handle.client('service-quotas', region_name=region)
    resPaginate = aws_get_paginator(sqClient, 'list_service_quotas', 'Quotas',
        ServiceCode=service_code,
        PaginationConfig={
            'MaxItems': 1000,
            'PageSize': 100
        })
    return resPaginate

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  A sample output
&lt;/h2&gt;

&lt;p&gt;Here is a sample of the EC2 output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{'ServiceCode': 'ec2', 'ServiceName': 'Amazon Elastic Compute Cloud (Amazon EC2)', 'QuotaArn': 'arn:aws:servicequotas:us-west-2:100498623390:ec2/L-70015FFA', 'QuotaCode': 'L-70015FFA', 'QuotaName': 'AMI sharing', 'Value': 1000.0, 'Unit': 'None', 'Adjustable': True, 'GlobalQuota': False}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This says that the limit for the number of Amazon Machine Images (AMIs) that you can share is 1000.&lt;/p&gt;
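The fields worth pulling out of a record like this are the quota code, the current value, and whether it is adjustable. A short snippet, using the sample record above:

```python
# The sample quota record returned for EC2 "AMI sharing".
quota = {
    "ServiceCode": "ec2",
    "QuotaCode": "L-70015FFA",
    "QuotaName": "AMI sharing",
    "Value": 1000.0,
    "Unit": "None",
    "Adjustable": True,
    "GlobalQuota": False,
}

# Build a one-line summary; :g drops the trailing ".0" from the float value.
summary = f"{quota['QuotaName']} ({quota['QuotaCode']}): {quota['Value']:g}"
print(summary)              # AMI sharing (L-70015FFA): 1000
print(quota["Adjustable"])  # True
```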

&lt;h2&gt;
  
  
  Requesting a Quota Increase
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What if we wanted to share 1001 AMIs?&lt;/strong&gt; We can request a quota increase via the API. In the several attempts I have made, they were all granted automatically – but not immediately. Using the &lt;em&gt;request service quota increase&lt;/em&gt; endpoint in the &lt;strong&gt;AWS Request Service Increase&lt;/strong&gt; Action adds your request to the AWS queue for processing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def aws_request_service_quota_increase(handle, service_code: str, quota_code: str, new_quota: float, region: str) -&amp;gt; Dict:
    # Submit a quota-increase request for the given service and quota code.
    sqClient = handle.client('service-quotas', region_name=region)
    res = sqClient.request_service_quota_increase(
        ServiceCode=service_code,
        QuotaCode=quota_code,
        DesiredValue=new_quota)
    return res

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Action has four inputs – the service code, the quota code, the numeric value that you would like the quota changed to, and the region.&lt;/p&gt;
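As a sketch of how those inputs map onto the API call (the helper name here is hypothetical, for illustration only – the real Action passes these keyword arguments straight to the boto3 service-quotas client):

```python
def build_increase_request(service_code, quota_code, desired_value):
    # Hypothetical helper: assemble the keyword arguments expected by
    # request_service_quota_increase. DesiredValue must be a float.
    return {
        "ServiceCode": service_code,
        "QuotaCode": quota_code,
        "DesiredValue": float(desired_value),
    }

params = build_increase_request("ec2", "L-70015FFA", 1001)
print(params["DesiredValue"])  # 1001.0
```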

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we built Actions using unSkript to learn what service quotas exist in your AWS Account, and how to update them. The Actions described in this post are built into unSkript’s automation engine, allowing you to build custom RunBooks around service quotas in your AWS environment.&lt;/p&gt;

&lt;p&gt;Are you interested in learning more? Try out our Open Source Docker build. &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="noopener noreferrer"&gt;Instructions can be found in the GitHub Readme file.&lt;/a&gt; If you have questions, join our &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation" rel="noopener noreferrer"&gt;Slack channel&lt;/a&gt;, where the community will be happy to help you!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>servicequota</category>
      <category>automations</category>
    </item>
    <item>
      <title>Copying AWS Amazon Machine Images Across Regions with unSkript</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 24 Jan 2023 20:29:29 +0000</pubDate>
      <link>https://dev.to/unskript/copying-aws-amazon-machine-images-across-regions-with-unskript-2ffc</link>
      <guid>https://dev.to/unskript/copying-aws-amazon-machine-images-across-regions-with-unskript-2ffc</guid>
      <description>&lt;p&gt;When you are building a distributed platform, You’ll need to regularly update the machines that you have deployed around the world.  With Amazon Web Services (AWS), one way to do this is to deploy the updated machine in one region, and create an Amazon Machine Image (AMI) of that server.  Then, by copying that AMI to different AWS regions, you can easily deploy an identical server anywhere around the world.&lt;/p&gt;

&lt;p&gt;Copying AMIs across regions is possible to do via the AWS console UI, or using the AWS command line.  At unSkript, we are working to remove the manual toil of running scripts against the command line, or performing multi-step manual processes in the UI.  With hundreds of pre-built connectors and Actions to perform common tasks with the most popular cloud services – it is easy to get started with unSkript in just minutes.  Try it today – either with our &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation"&gt;Open Source Docker image&lt;/a&gt; that you can run on-premises, or with a &lt;a href="https://us.app.unskript.io/"&gt;free trial of our Cloud&lt;/a&gt; offering.&lt;/p&gt;

&lt;h1&gt;
  
  
  Copying AMIs across regions
&lt;/h1&gt;

&lt;p&gt;In your install of unSkript, there will be a RunBook called “&lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation/blob/master/AWS/Copy_ami_to_all_given_AWS_regions.ipynb"&gt;&lt;em&gt;Copy AMI to All Given AWS Regions.&lt;/em&gt;&lt;/a&gt;”  This RunBook is pre-configured to perform the AMI copy with just the click of a button.  When you open this RunBook (or import your copy into your work area), you’ll only have a few steps before you can begin copying your AMIs.&lt;/p&gt;

&lt;p&gt;At the top of the page, there is a Parameters drop down, where the input parameters for the RunBook are entered:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pFlAQB1W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/01/parameters_moveami.jpg%3Fresize%3D945%252C464%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pFlAQB1W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/01/parameters_moveami.jpg%3Fresize%3D945%252C464%26ssl%3D1" alt="A Screenshot of RunBook input parameters" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the screenshot above, we have an &lt;em&gt;ami_id&lt;/em&gt; (with pixelated name) with a &lt;em&gt;source_region&lt;/em&gt; of us-west-2.  We are looking to copy this AMI to the &lt;em&gt;destination_regions&lt;/em&gt; us-east-1 and us-east-2.  These can be edited with the Edit Value button. When you run this RunBook via the UI, you will be given the opportunity to insert different values.&lt;/p&gt;

&lt;h1&gt;
  
  
  Configuring the Actions
&lt;/h1&gt;

&lt;p&gt;Each step of the RunBook is completed by an Action.  We will need to quickly configure each Action with credentials that allow unSkript to connect to AWS.  If you do not yet have &lt;a href="https://docs.unskript.com/unskript-product-documentation/guides/connectors/aws#authentication"&gt;AWS Credentials&lt;/a&gt; in unSkript, follow the link to set up your credentials.  To Configure your action, click the Configurations button.  On the right side, select the AWS credential you’d like the RunBook to connect with:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I_b77tuc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/01/Screenshot-2023-01-24-at-14.59.07.jpg%3Fresize%3D902%252C414%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I_b77tuc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/01/Screenshot-2023-01-24-at-14.59.07.jpg%3Fresize%3D902%252C414%26ssl%3D1" alt="AWS credential" width="800" height="367"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Connecting with the DevRole credential&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Repeat this for all AWS Actions in the RunBook:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get all AWS Regions&lt;/li&gt;
&lt;li&gt;Get AMI Name&lt;/li&gt;
&lt;li&gt;Copy AMI to Other Regions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the credentials are set up, save your RunBook.  It can now be run from this page in two ways: with the Run xRunBook button, or interactively – running one Action at a time. It can also be run via the RunBook listing page, or using our API.&lt;/p&gt;

&lt;p&gt;Here is the interactive output of the final Action, where the AMI image was copied into us-east-1 and us-east-2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QC4PFHhY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/01/copycompleted.jpg%3Fresize%3D945%252C230%26ssl%3D1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QC4PFHhY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://i0.wp.com/unskript.com/wp-content/uploads/2023/01/copycompleted.jpg%3Fresize%3D945%252C230%26ssl%3D1" alt="screenshot of the completion of the runbook - 2 AMIs are copied into new regions" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;
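Conceptually, the final Action issues one copy_image call per destination region. A rough parameter-building sketch (the function and AMI names here are illustrative, not taken from the RunBook):

```python
def build_copy_requests(ami_id, ami_name, source_region, destination_regions):
    # Illustrative only: the real Action sends each request through a boto3
    # EC2 client created in the corresponding destination region.
    requests = []
    for region in destination_regions:
        requests.append({
            "Region": region,  # where the EC2 client is created
            "Name": ami_name,
            "SourceImageId": ami_id,
            "SourceRegion": source_region,
        })
    return requests

reqs = build_copy_requests("ami-0123456789abcdef0", "web-server-v2",
                           "us-west-2", ["us-east-1", "us-east-2"])
print(len(reqs))  # 2
```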

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;In this post, we utilized unSkript to automate the copying of AMIs across AWS regions.  Rather than build a RunBook ourselves, we were able to utilize a pre-built RunBook, and simply configure the RunBook for our use case.  With just a few steps, we were able to authenticate this RunBook to run in our AWS instance, and copy AMI images from one region to many other regions.  By automating such repetitive tasks, unSkript alleviates the DevOps toil of repetitive work, allowing the team to focus on more important projects.&lt;/p&gt;

&lt;p&gt;Give us a &lt;a href="https://us.app.unskript.io/xRunBooks"&gt;try today&lt;/a&gt;, and if you like what you see, give us a &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation"&gt;star on GitHub&lt;/a&gt;! Questions?  Join our &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation"&gt;Slack Community&lt;/a&gt; – we’d be happy to help.&lt;/p&gt;

</description>
      <category>otherposts</category>
    </item>
    <item>
      <title>Managing your Cloud Costs with CloudOps Automation Part 1: Identifying Your Resources with Tags</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Thu, 15 Dec 2022 19:01:48 +0000</pubDate>
      <link>https://dev.to/unskript/managing-your-cloud-costs-with-cloudops-automation-part-1-identifying-your-resources-with-tags-25il</link>
      <guid>https://dev.to/unskript/managing-your-cloud-costs-with-cloudops-automation-part-1-identifying-your-resources-with-tags-25il</guid>
      <description>&lt;p&gt;Moving systems to the cloud makes a lot of sense operationally – letting the experts take care of the infrastructure, and let us build what we need to make our company successful.&lt;/p&gt;

&lt;p&gt;But this comes at a substantial downside – your monthly cloud bill.  Cloud providers have made it insanely easy to spin up new servers and features, but without careful auditing – it is &lt;em&gt;very&lt;/em&gt; easy to leave money on the table due to unused or improperly sized resources remaining active on our cloud provider. In this series of posts, we’ll use unSkript to uncover unused assets in your cloud account, and then either alert the team of their existence, or remove them automatically.  &lt;/p&gt;

&lt;p&gt;This of course leads to a catch-22 – how do you know what is safe to remove, and what will bring down production?  In order to better identify our cloud resources, we need a good tagging strategy.&lt;/p&gt;

&lt;p&gt;So, to kick off our cost savings series of posts, we’ll begin  with a discussion on tagging your cloud resources.  &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Tag your resources?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Tags are key:value pairs that describe your cloud resources.  With AWS, you can use any value for your key – giving the ultimate in customization.  AWS has a number of &lt;a href="https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html#tag-best-practices"&gt;best practices&lt;/a&gt; for tagging your resources. &lt;/p&gt;

&lt;p&gt;Tagging allows us to easily identify our cloud components and quickly determine what the components do. AWS recommends “overtagging” vs “undertagging.”  In many ways, this is the CloudOps analogy to commenting code.  With a “well-tagged” set of resources, auditing each instance becomes easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perhaps most importantly – it becomes easy to find resources that are no longer needed, allowing for them to be turned off keeping your cloud bill in check.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tag Strategy&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;There is no definitive set of tags that should be used, but here are some that are often discussed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Environments:&lt;/strong&gt; If your different environments are all in the same Cloud, labeling each object with an Environment key and values like development, staging, or production helps you understand where in the deployment process your instances lie.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Department:&lt;/strong&gt; Tagging the department that owns/controls the resource has a number of important benefits:

&lt;ol&gt;
&lt;li&gt;The team can track their cloud installs.&lt;/li&gt;
&lt;li&gt;If there is a problem with the instance, the correct team can be easily identified and notified that there is an issue.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost center:&lt;/strong&gt; Identify teams with higher spends.  More easily break down budgeting for cloud billing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Expiration:&lt;/strong&gt; When building out a new system, you may deploy a number of instances for testing.  By setting a sunset date, you can remove any worry of accidentally leaving a cloud instance live – they will all shut down within a few days or weeks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  &lt;strong&gt;Building your tagging strategy.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you have not yet built out your strategy, you probably have dozens (hundreds) of untagged cloud objects.  Since there is no way for one person or team to know what each instance is, or even if it is still in use, we need to add tags.&lt;/p&gt;

&lt;p&gt;So the first step is to identify each object, and contact the owners to add tags to the instances.  &lt;/p&gt;

&lt;p&gt;Of course, it would be best to give your team an automated way to add tags to their instances, so that they do not lose a lot of bandwidth complying with the new requirements (with the added benefit that an easy onboarding will make your new tagging policy roll out faster, with less friction).  Here’s how we have done this at unSkript (using unSkript xRunBooks and Actions, of course).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1 Find resources with zero tags&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;unSkript has a built-in Action called “&lt;strong&gt;AWS Get Untagged Resources&lt;/strong&gt;.”  This Action finds all EC2 instances that have no tags attached to them.  We can search for this Action and drag it into our workflow.  (In our &lt;a href="https://us.app.unskript.io/"&gt;Free Sandbox&lt;/a&gt;, create a new xRunBook, and then search for the Action.) &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V130EeJl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://unskript.com/wp-content/uploads/2022/12/adding_action-300x195.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V130EeJl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://unskript.com/wp-content/uploads/2022/12/adding_action-300x195.gif" alt="adding an action (GIF)" width="300" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect your AWS Credentials (learn how to create a &lt;a href="https://docs.unskript.com/unskript-product-documentation/guides/connectors/aws#authentication"&gt;connection to AWS&lt;/a&gt;), and add your Region (you can either change the value in the configurations to the right, OR change the parameters in the top menu – this refers to the variable &lt;em&gt;Region&lt;/em&gt;).  When run, this Action gives a list of instanceIds that have no tags.  We’d like a bit more information, so we’ll edit the xRunBook to look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def aws_get_untagged_resources(handle, region: str) -&amp;gt; List:
    ec2Client = handle.client('ec2', region_name=region)
    res = aws_get_paginator(ec2Client, "describe_instances", "Reservations")
    result = []
    for reservation in res:
        for instance in reservation['Instances']:
            try:
                # the instance has tags
                tagged_instance = instance['Tags']
            except Exception as e:
                # no tags - record identifying details for follow-up
                result.append({"instance": instance['InstanceId'],
                               "type": instance['InstanceType'],
                               "imageId": instance['ImageId'],
                               "launched": instance['LaunchTime']})
    return result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We make these changes to give a bit more information about each instance that is untagged.  For example:  &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[{'imageId': 'ami-094125af156557ca2',
  'instance': 'i-049b54f373769f51b',
  'launched': datetime.datetime(2022, 12, 14, 17, 48, 49, tzinfo=tzlocal()),
  'type': 'm1.small'},
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Now we can reach out to the rest of the team to see if anyone knows about this m1.small instance launched on 12/14/22 from a specific AMI.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: Add tags to found instances&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We now have a list of all the instanceIds that have no tags.  Next, we can use a new Action that attaches tags to an EC2 instance to begin bringing each one into tagging compliance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This action has 3 inputs: _instanceId_, _Tag\_Key_, and _Tag\_Value_.def aws\_tag\_resources(handle, instanceId: str, tag\_key: str, tag\_value: str, region: str) -&amp;gt; Dict:

    ec2Client = handle.client('ec2', region\_name=region)
    result = {}
    try:
        response = ec2Client.create\_tags(
            Resources=[
                instanceId
            ],
            Tags=[
                {
                    'Key': tag\_key,
                    'Value': tag\_value
                },
            ]
        )
        result = response
    except Exception as error:
        result["error"] = error
    return result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running this Action adds the key:value tag to the EC2 instance.&lt;/p&gt;
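&lt;p&gt;As a sketch of how Steps 1 and 2 fit together (this glue code is not part of the xRunBook; the &lt;code&gt;owner&lt;/code&gt;/&lt;code&gt;needs-triage&lt;/code&gt; tag is a placeholder, and &lt;code&gt;handle&lt;/code&gt;, &lt;code&gt;aws_tag_resources&lt;/code&gt;, and &lt;code&gt;region&lt;/code&gt; are the objects defined above), the Step 1 output can be fed straight into the tagging Action:&lt;/p&gt;

```python
# Hypothetical glue code: pair each untagged instance from Step 1
# with a placeholder tag, then apply it with the Step 2 Action.
def build_default_tags(untagged, key="owner", value="needs-triage"):
    """Pair each untagged instanceId with a placeholder key/value tag."""
    return [(item["instance"], key, value) for item in untagged]

# Sample Step 1 output (shape taken from the example earlier in the post)
untagged = [{"instance": "i-049b54f373769f51b", "type": "m1.small"}]

for instance_id, tag_key, tag_value in build_default_tags(untagged):
    print(instance_id, tag_key, tag_value)
    # In the xRunBook, this is where you would call:
    # aws_tag_resources(handle, instance_id, tag_key, tag_value, region)
```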

&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: Compliance check&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Finally, we’ll build one last Action that checks each instance's tag keys against the required list of keys, and returns those instances that are missing a required tag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def aws\_get\_resources\_out\_of\_compliance(handle, region: str, requiredTags: list) -&amp;gt; List:

    ec2Client = handle.client('ec2', region\_name=region)
    #res = aws\_get\_paginator(ec2Client, "describe\_instances", "Reservations")
    res = aws\_get\_paginator(ec2Client, "describe\_instances", "Reservations")
    result = []
    for reservation in res:
        for instance in reservation['Instances']:       
            try:
                #has tags
                allTags = True
                keyList = []
                tagged\_instance = instance['Tags']
                #print(tagged\_instance)
                #get all the keys for the instance
                for kv in tagged\_instance:
                    key = kv["Key"]
                    keyList.append(key)
                #see if the required tags are represented in the keylist
                #if they are not - the instance is not in compliance
                for required in requiredTags:
                        if required not in keyList:
                            allTags = False
                if not allTags:
                    # instance is not in compliance
                    result.append({"instance":instance['InstanceId'],"type":instance['InstanceType'],"imageId":instance['ImageId'], "launched":instance['LaunchTime'], "tags": tagged\_instance})

            except Exception as e:
                #no tags               result.append({"instance":instance['InstanceId'],"type":instance['InstanceType'],"imageId":instance['ImageId'], "launched":instance['LaunchTime'], "tags": []})
    return result

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Action reads in a list of required keys; if an instance does not have all of them, it is returned in an out-of-compliance list.&lt;/p&gt;
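&lt;p&gt;The core of that check can be sketched as a pure-Python set difference (sample data stands in for live EC2 responses, and the function name is illustrative):&lt;/p&gt;

```python
def missing_required_tags(instance_tags, required):
    """Return the required tag keys absent from an instance's Tags list."""
    keys = {kv["Key"] for kv in instance_tags}
    return sorted(set(required) - keys)

# An instance tagged with Name and env, but missing the required owner tag
sample = [{"Key": "Name", "Value": "web-1"}, {"Key": "env", "Value": "prod"}]
print(missing_required_tags(sample, ["Name", "env", "owner"]))  # ['owner']
```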

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Tagging cloud instances has been shown to make troubleshooting faster.  It also helps you identify cloud objects that are no longer in use, helping you reduce your cloud bill.  For these reasons, it makes sense to require tags on all instances.&lt;/p&gt;

&lt;p&gt;In this post, we have created a series of Actions that will help you simplify the transition process of bringing all of your existing cloud objects into tagging compliance.&lt;/p&gt;

&lt;p&gt;Feel free to try these Actions in our &lt;a href="https://us.app.unskript.io/"&gt;Free Sandbox&lt;/a&gt;, or with our &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation"&gt;Docker install&lt;/a&gt;.  The Actions used in this post will soon be available on GitHub in the xRunBook &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation/AWS/AWS_Add_Mandatory_tags_to%20EC2.ipynb"&gt;Add Mandatory Tags to EC2&lt;/a&gt;.  Please reach out if you’d like a copy earlier!&lt;/p&gt;

</description>
      <category>cloudcosts</category>
      <category>intelligentautomatio</category>
      <category>tags</category>
    </item>
    <item>
      <title>Cloud Ops Auto Remediation: A Holiday Allegory</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Wed, 07 Dec 2022 16:04:45 +0000</pubDate>
      <link>https://dev.to/unskript/cloud-ops-auto-remediation-a-holiday-allegory-1lk6</link>
      <guid>https://dev.to/unskript/cloud-ops-auto-remediation-a-holiday-allegory-1lk6</guid>
<description>&lt;p&gt;Auto remediations &lt;strong&gt;are tools that respond to events with automations able to fix, or remediate, the underlying condition.&lt;/strong&gt; Building a demo that features an auto remediation fix is hard: modern infrastructure is generally resilient, so keeping it in an error state is difficult. To showcase an auto remediation example, we’re going to get a little creative.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0v-snTO5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AFJ7IGm5Skfq0dPn3TrWlDg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0v-snTO5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2AFJ7IGm5Skfq0dPn3TrWlDg.jpeg" alt="" width="880" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1 id="0e46"&gt;A Holiday Auto Remediation&lt;/h1&gt;

&lt;p id="2b96"&gt;I am writing this post in the first week of December, and let’s face it — holiday music is pretty inescapable this time of year. To describe the auto remediation that we will fix, let’s turn to one of the greatest fiction writers of our time:&lt;/p&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u4zJsd0m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2A3_lBdiVbEMhhGkSq2YD1aA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u4zJsd0m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2A3_lBdiVbEMhhGkSq2YD1aA.jpeg" alt="Stephen King tweets that he dislikes “Holly Jolly Christmas.”" width="880" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p id="391c"&gt;As Mr. King has clearly presented — everyone has a least favorite holiday song, and the playing of this song can . During the holiday season, it is pretty common to use playlists generated by other users on Spotify. Wouldn’t it be great if we could auto remediate the playing of our disliked holiday songs automatically — without any intervention?&lt;/p&gt;

&lt;p id="95c4"&gt;This is a great example of building auto remediation — an xRunBook in unSkript that identifies the song playing, and if there is an “error state” (a song on the blocklist), the RunBook will correct this issue without human intervention.&lt;/p&gt;

&lt;h1 id="4cfb"&gt;Auto Remediation in Spotify&lt;/h1&gt;

&lt;p id="61f1"&gt;In this post, we will build an auto remediation RunBook for your Spotify account.&lt;/p&gt;

&lt;ol&gt;
&lt;li id="b46a"&gt;We will create a Spotify app that connects with unSkript.&lt;/li&gt;
&lt;li id="5417"&gt;Curate a block list of songs.&lt;/li&gt;
&lt;li id="dd15"&gt;We will build a xRunBook that checks to see if Spotify is playing.&lt;/li&gt;
&lt;li id="bb57"&gt;If Spotify is playing a song on the block list, unSkript will automatically remediate the issue by skipping to the next track.&lt;/li&gt;
&lt;li id="be91"&gt;We can then place this xRunBook on a schedule to run every minute- ensuring that we’ll only hear a few seconds of our least favourite songs (at least when listening on *our* Spotify account). Note: This last step is only possible in the &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="noopener ugc nofollow"&gt;Open Source Docker build&lt;/a&gt;, or in a SAAS install of unSkript. This won’t work in the sandbox &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SJWlOO03--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://s.w.org/images/core/emoji/14.0.0/72x72/1f641.png" alt="🙁" width="72" height="72"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p id="7158"&gt;Let’s get started:&lt;/p&gt;

&lt;h1 id="7def"&gt;Build a Spotify app&lt;/h1&gt;

&lt;p id="f843"&gt;In order to interface with Spotify, we &lt;a href="https://developer.spotify.com/dashboard/applications" rel="noopener ugc nofollow"&gt;create an app&lt;/a&gt; at Spotify.&lt;/p&gt;

&lt;p id="5e3b"&gt;When you create an app, you’ll need to submit a redirect url. I used &lt;a href="https://unskript.com/" rel="noopener ugc nofollow"&gt;unskript.com&lt;/a&gt; in my example. Once you generate the app, you’ll get a clientId and clientSecret that you will need later to interface with your xRunBook. You can also add your account email as an authorized user of the application:&lt;/p&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WHOVMFrZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2A5AIO8ckjaDV3amJB0v9_vA.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WHOVMFrZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1400/1%2A5AIO8ckjaDV3amJB0v9_vA.jpeg" alt="Screenshot of the Spotify app" width="880" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p id="0375"&gt;The name and description of my Spotify application gives away *my* reasoning for this app.&lt;/p&gt;

&lt;h1 id="6a7d"&gt;Building the xRunBook&lt;/h1&gt;

&lt;p id="7e2c"&gt;NOTE: For full automation, install &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="noopener ugc nofollow"&gt;unSkript locally using Docker&lt;/a&gt;. If you just want to follow along the tutorial, you can use our &lt;a href="https://us.app.unskript.io/" rel="noopener ugc nofollow"&gt;free Sandbox&lt;/a&gt; to run the xRunBook manually. (If you are using the Sandbox, complete the tutorial.)&lt;/p&gt;

&lt;p id="7f0c"&gt;Now we are ready to create our xRunBook.&lt;/p&gt;

&lt;ul&gt;
&lt;li id="28c0"&gt;In Sandbox, Click xRunBook, and then “+Create”, and connect your RunBook to the proxy you created in the tutorial.&lt;/li&gt;
&lt;li id="49f4"&gt;In Docker, there are instructions to create a new xRunBook in the README file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p id="44bc"&gt;xRunBooks are based on Jupyter Notebooks. Each action is an independent Python application. When the RunBook is executed, each Action is run in order. I have pasted the code for each action below. You’ll need to create a new Action for each code snippet. Do this by Clicking the + Add button at the top, and choosing “Action.”&lt;/p&gt;

&lt;p id="a218"&gt;&lt;strong&gt;Action 1: Install Spotipy&lt;/strong&gt;&lt;/p&gt;

&lt;p id="b47a"&gt;We’re going to use a Python library to interact with Spotify. In your first action:&lt;/p&gt;

&lt;pre&gt;&lt;span id="0ea3"&gt;!pip install spotipy - quiet&lt;/span&gt;&lt;/pre&gt;

&lt;p id="a59d"&gt;&lt;strong&gt;Action 2: Add our ClientIds.&lt;/strong&gt;&lt;/p&gt;

&lt;p id="fee2"&gt;In the OSS, and Sandbox, there is no built in Secret vault. so we’ll put them here. Make sure the redirect url matches what you placed in your Spotify application/&lt;/p&gt;

&lt;pre&gt;&lt;span id="3a77"&gt;client_id = &lt;span&gt;"&amp;lt;client ID from Spotify&amp;gt;"&lt;/span&gt;
client_secret=&lt;span&gt;"&amp;lt;secret from Spotify&amp;gt;"&lt;/span&gt;
client_redirect=&lt;span&gt;"https://unskript.com"&lt;/span&gt;&lt;/span&gt;&lt;/pre&gt;

&lt;p id="fea9"&gt;&lt;strong&gt;Action 3: Connect to Redis&lt;/strong&gt;&lt;/p&gt;

&lt;p id="f195"&gt;Note: not needed for Docker, and will not work for Sandbox, but this will work in unSkript SAAS:&lt;/p&gt;

&lt;pre&gt;&lt;span id="28c8"&gt;&lt;span&gt;import&lt;/span&gt; redis
redis = redis.Redis(host=&lt;span&gt;'&amp;lt;redis-host&amp;gt;'&lt;/span&gt;, port=&lt;span&gt;6379&lt;/span&gt;, db=&lt;span&gt;0&lt;/span&gt;)
redis.&lt;span&gt;set&lt;/span&gt;(&lt;span&gt;'foo'&lt;/span&gt;, &lt;span&gt;'bar'&lt;/span&gt;)
&lt;span&gt;True&lt;/span&gt;
redis.get(&lt;span&gt;'foo'&lt;/span&gt;)
&lt;span&gt;b'bar'&lt;/span&gt;
&lt;span&gt;print&lt;/span&gt;(redis)&lt;/span&gt;&lt;/pre&gt;

&lt;p id="9e10"&gt;Spotipy stores the authentication in a local cache — which is fine in the unSkript Docker instance.&lt;/p&gt;

&lt;p id="8730"&gt;The Sandbox does not have a local cache, nor a Redis instance, so this xRunBook cannot be run on a schedule as a result.&lt;/p&gt;

&lt;p id="1209"&gt;In our SAAS version, a Redis database can be attached to the xRunBook to ensure that the credentials are stored locally, and can be reused.&lt;/p&gt;

&lt;p id="9414"&gt;&lt;strong&gt;Action 4: Find SongIDs&lt;/strong&gt;&lt;/p&gt;

&lt;p id="c758"&gt;Let’s do some searches in the Spotify database to extract the songIds we want to add to our blocklist. Here we are searching for songs sung by Mariah Carey:&lt;/p&gt;

&lt;pre&gt;&lt;span id="73a8"&gt;&lt;span&gt;import&lt;/span&gt; spotipy
&lt;span&gt;from&lt;/span&gt; spotipy.oauth2 &lt;span&gt;import&lt;/span&gt; SpotifyClientCredentials

scope = &lt;span&gt;"user-library-read user-modify-playback-state user-read-playback-state"&lt;/span&gt;

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(client_id=client_id,
                                               client_secret=client_secret))

results = sp.search(q=&lt;span&gt;'mariah carey'&lt;/span&gt;, limit=&lt;span&gt;5&lt;/span&gt;)
&lt;span&gt;#print(results)&lt;/span&gt;
&lt;span&gt;for&lt;/span&gt; idx, track &lt;span&gt;in&lt;/span&gt; &lt;span&gt;enumerate&lt;/span&gt;(results[&lt;span&gt;'tracks'&lt;/span&gt;][&lt;span&gt;'items'&lt;/span&gt;]):
    &lt;span&gt;print&lt;/span&gt;(idx, track[&lt;span&gt;'artists'&lt;/span&gt;][&lt;span&gt;0&lt;/span&gt;][&lt;span&gt;'name'&lt;/span&gt;], &lt;span&gt;" "&lt;/span&gt;, track[&lt;span&gt;'name'&lt;/span&gt;], &lt;span&gt;" "&lt;/span&gt; ,track[&lt;span&gt;'id'&lt;/span&gt;])&lt;/span&gt;&lt;/pre&gt;

&lt;p id="d33a"&gt;The results for this Action are:&lt;/p&gt;

&lt;pre&gt;&lt;span id="8994"&gt;&lt;span&gt;0&lt;/span&gt; Mariah Carey &lt;span&gt;All&lt;/span&gt; &lt;span&gt;I&lt;/span&gt; Want for Christmas Is You &lt;span&gt;0&lt;/span&gt;bYg9bo50gSsH3LtXe2SQn
&lt;span&gt;1&lt;/span&gt; Mariah Carey Fantasy &lt;span&gt;6&lt;/span&gt;xkryXuiZU360Lngd4sx13
&lt;span&gt;2&lt;/span&gt; Mariah Carey Christmas (Baby Please Come Home) &lt;span&gt;3&lt;/span&gt;PIDciSFdrQxSQSihim3hN
&lt;span&gt;3&lt;/span&gt; Mariah Carey We Belong Together &lt;span&gt;3&lt;/span&gt;LmvfNUQtglbTrydsdIqFU
&lt;span&gt;4&lt;/span&gt; Mariah Carey Fantasy (feat. O.D.B.) &lt;span&gt;2&lt;/span&gt;itAOPLerxnnc8KXHMqPWu&lt;/span&gt;&lt;/pre&gt;

&lt;p id="e795"&gt;We want to (ok, I want to) block the song with ID: 0bYg9bo50gSsH3LtXe2SQn.&lt;/p&gt;

&lt;p id="4758"&gt;&lt;strong&gt;Action 5: Build a block list&lt;/strong&gt;&lt;/p&gt;

&lt;p id="036b"&gt;This Action defines the array of songs we want to block.&lt;/p&gt;

&lt;pre&gt;&lt;span id="5640"&gt;songList = [&lt;span&gt;"0bYg9bo50gSsH3LtXe2SQn"&lt;/span&gt;, 
            &lt;span&gt;"4iHNK0tOyZPYnBU7nGAgpQ"&lt;/span&gt;,
            &lt;span&gt;"0SorhWEyl6wkQ6vYAQt2D0"&lt;/span&gt;]&lt;/span&gt;&lt;/pre&gt;

&lt;p id="e867"&gt;&lt;strong&gt;Action 6: Authenticate the user&lt;/strong&gt;&lt;/p&gt;

&lt;p id="09fd"&gt;This requests access for a user at Spotify. We need to read the playback state (is Spotify playing?), and we need permission to modify the state (skip the song!).&lt;/p&gt;

&lt;p id="f3ff"&gt;We have commented out the Redis cache_handler. If you are using the SAAS version of unSkript — remove the comment.&lt;/p&gt;

&lt;pre&gt;&lt;span id="e3b1"&gt;&lt;span&gt;import&lt;/span&gt; spotipy
&lt;span&gt;from&lt;/span&gt; spotipy.oauth2 &lt;span&gt;import&lt;/span&gt; SpotifyOAuth
&lt;span&gt;import&lt;/span&gt; json

scope = &lt;span&gt;"user-library-read user-modify-playback-state user-read-playback-state user-read-recently-played"&lt;/span&gt;

spUser = spotipy.Spotify(auth_manager=SpotifyOAuth(client_id=client_id,
                                               client_secret=client_secret,
                                               redirect_uri=client_redirect,
                                              &lt;span&gt;# cache_handler=spotipy.cache_handler.RedisCacheHandler(redis),&lt;/span&gt;
                                               scope=scope,
                                               open_browser=&lt;span&gt;False&lt;/span&gt;))&lt;/span&gt;&lt;/pre&gt;

&lt;p id="0f30"&gt;&lt;strong&gt;Action 7: Skip the track&lt;/strong&gt;&lt;/p&gt;

&lt;p id="ae40"&gt;This step will be interactive. When this action is run, you’ll be asked to visit a url (that does a redirect) and then paste in the redirected url. This url has your authentication token in it. Once this is done, a token is stored locally that is valid for one hour. Every time the xRunBook is run, the token is re-authenticated for another hour.&lt;/p&gt;

&lt;pre&gt;&lt;span id="617a"&gt;&lt;span&gt;#get current track&lt;/span&gt;
currentTrack = spUser.current_user_playing_track()

&lt;span&gt;## only test the playback if there is currently a song playing&lt;/span&gt;
&lt;span&gt;if&lt;/span&gt; currentTrack &lt;span&gt;is&lt;/span&gt; &lt;span&gt;not&lt;/span&gt; &lt;span&gt;None&lt;/span&gt;:
    &lt;span&gt;#&lt;/span&gt;
    track = currentTrack[&lt;span&gt;"item"&lt;/span&gt;][&lt;span&gt;"uri"&lt;/span&gt;]
    &lt;span&gt;print&lt;/span&gt;(track)
    &lt;span&gt;#remove 'spotify:track:' from front of string to get the ID&lt;/span&gt;
    track = track[&lt;span&gt;14&lt;/span&gt;:]
    &lt;span&gt;print&lt;/span&gt;(track)

    &lt;span&gt;# all i want for christmas is you. spotify:track:0bYg9bo50gSsH3LtXe2SQn&lt;/span&gt;
    songs_i_hate = songList

    &lt;span&gt;for&lt;/span&gt; song &lt;span&gt;in&lt;/span&gt; songs_i_hate:
        &lt;span&gt;print&lt;/span&gt;(&lt;span&gt;"song"&lt;/span&gt;, song)
        &lt;span&gt;if&lt;/span&gt; track == song:
          &lt;span&gt;print&lt;/span&gt;(&lt;span&gt;"ahhh save us"&lt;/span&gt;)
          spUser.next_track()
          &lt;span&gt;break&lt;/span&gt;
        &lt;span&gt;else&lt;/span&gt;:
          &lt;span&gt;print&lt;/span&gt;(&lt;span&gt;"its all good"&lt;/span&gt;)
&lt;span&gt;else&lt;/span&gt;:
    &lt;span&gt;print&lt;/span&gt;(&lt;span&gt;"the music is off"&lt;/span&gt;)&lt;/span&gt;&lt;/pre&gt;

&lt;p id="9b61"&gt;This completes the xRunBook creation. Save the RunBook (by closing it, and then selecting Save)&lt;/p&gt;

&lt;p id="26bf"&gt;&lt;strong&gt;Scheduling the RunBook&lt;/strong&gt;&lt;/p&gt;

&lt;p id="fe91"&gt;In the SAAS (and Sandbox), it is possible to schedule your xRunbooks. By scheduling this xRunBook to run every minute — you can ensuyre that you’ll never have to listen to more than 59 seconds of the songs you dislike.&lt;/p&gt;

&lt;h1 id="5fbe"&gt;A Holiday Allegory&lt;/h1&gt;

&lt;p id="7ac0"&gt;An Allegory is a metaphor that symbolizes an idea or message. Many stories told at Christmas time are allegories (like Dicken’s a Christmas Carol). In this post, we have used Spotify playlists as an allegory to a Cloud System in distress. By auto-skipping songs, our unSkript RunBook is automatically solving a situation without involving a human (potentially paging them out of a “long winder’s nap.”)&lt;/p&gt;

&lt;p id="fc0e"&gt;By building a “Skip the horrible track” auto remediation, we show the power of unSkript, and also potentially save the family Christmas by avoiding arguments over the Christmas playlist.&lt;/p&gt;

&lt;p id="2c41"&gt;Interested in learning more about how unSkript can help you build internal auto remediation tooling for your team? Check out our &lt;a href="https://us.app.unskript.io/signup" rel="noopener ugc nofollow"&gt;free trial&lt;/a&gt;, star our &lt;a href="https://github.com/unskript/Awesome-CloudOps-Automation" rel="noopener ugc nofollow"&gt;GitHub repository&lt;/a&gt; of xRunBooks and Actions, or join our &lt;a href="https://communityinviter.com/apps/cloud-ops-community/awesome-cloud-automation" rel="noopener ugc nofollow"&gt;Slack Community&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>cloudops</category>
      <category>devops</category>
      <category>spotify</category>
      <category>christmasmusic</category>
    </item>
    <item>
      <title>SHH! Conductor has secrets!</title>
      <dc:creator>Doug Sillars</dc:creator>
      <pubDate>Tue, 09 Aug 2022 00:00:00 +0000</pubDate>
      <link>https://dev.to/orkes/shh-conductor-has-secrets-1o36</link>
      <guid>https://dev.to/orkes/shh-conductor-has-secrets-1o36</guid>
<description>&lt;p&gt;We are really excited to announce the latest feature in Orkes' cloud-hosted version of Netflix Conductor. It is now no longer a secret - we support the use of secrets in your workflow definitions! Now you can be certain that the secret keys, tokens and values you use in your workflows are secure!&lt;/p&gt;

&lt;h2&gt;
  
  
  What do you mean by secrets?​
&lt;/h2&gt;

&lt;p&gt;In many applications today, interaction with third-party applications is common, and typically requires some form of authentication to gain access. When you are coding, there is a concept of a local secure store where sensitive values are kept. This prevents accidental disclosure of your secrets when posting code to GitHub or when sharing your code with other teams.&lt;/p&gt;

&lt;p&gt;Until now, there has been no way to securely use any sensitive value in a Conductor workflow. Just about every developer has a story of accidentally posting a sensitive value on GitHub. Here's my story of accidentally sharing a sensitive value with a Conductor workflow:&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://orkes.io/content/docs/codelab/orderfulfillment5#adding-a-error-flow"&gt;&lt;code&gt;order fulfillment&lt;/code&gt; codelab&lt;/a&gt;, the failure workflow has a Slack token that is unique, and if publicly accessible could be used to SPAM a Slack channel. When writing the tutorial, I shared the task definition in the docs - &lt;em&gt;with&lt;/em&gt; the Slack token.&lt;/p&gt;

&lt;p&gt;Slack caught this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dvAmVU-t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://orkes.io/content/img/slack_oops.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dvAmVU-t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://orkes.io/content/img/slack_oops.jpg" alt="aaccidently shared a hardcoded slack token" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Super embarrassing, but no serious consequences (in this instance).&lt;/p&gt;

&lt;h2&gt;
  
  
  Don't let this happen to you! ​
&lt;/h2&gt;

&lt;p&gt;In Orkes hosted instances of Netflix Conductor, we now feature secrets. You can save your secret on the Conductor server, and Conductor will &lt;em&gt;use&lt;/em&gt; the value when required, but will not expose the value in any outputs from the workflow.&lt;/p&gt;

&lt;p&gt;It is a very easy setup - simply login to your instance of Netflix Conductor at Orkes (or try our &lt;a href="https://play.orkes.io"&gt;Playground&lt;/a&gt; for free!). In the left navigation, click &lt;code&gt;Secrets&lt;/code&gt;. This will lead to a table of your secrets (which is probably empty).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W_32vKvu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://orkes.io/content/img/secrets_dashboard.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W_32vKvu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://orkes.io/content/img/secrets_dashboard.jpg" alt="the Orkes Cloud Secrets dashboard" width="880" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Add Secret&lt;/code&gt;, give it a name, paste in your value, and press save. That's all there is to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using your secret​
&lt;/h2&gt;

&lt;p&gt;In Conductor workflows, secrets use a similar format to other variables. For example, to reference an input variable called &lt;code&gt;address&lt;/code&gt; you'd use &lt;code&gt;${workflow.input.address}&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you had a secret called &lt;code&gt;Stripe_api_key&lt;/code&gt;, you reference this value with the variable &lt;code&gt;${workflow.secrets.Stripe_api_key}&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  An example​
&lt;/h2&gt;

&lt;p&gt;Accessing GitHub's APIs requires an API token. In the following HTTP task, I call a GitHub API and reference the secret &lt;code&gt;Doug_github&lt;/code&gt; for the authorization header.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "name": "Get_repo_details", "taskReferenceName": "Get_repo_details_ref", "inputParameters": { "http_request": { "uri": "https://api.github.com/repos/${workflow.input.gh_account}/${workflow.input.gh_repo}", "method": "GET", "headers": { "Authorization": "token ${workflow.secrets.Doug_github}", "Accept": "application/vnd.github.v3.star+json" }, "readTimeOut": 2000, "connectionTimeOut": 2000 } }, "type": "HTTP", "decisionCases": {}, "defaultCase": [], "forkTasks": [], "startDelay": 0, "joinOn": [], "optional": false, "defaultExclusiveJoinTask": [], "asyncComplete": false, "loopOver": [] }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;When this workflow is run, other variables are replaced, but the value of the secret remains a secret. Note that in the uri, &lt;code&gt;${workflow.input.gh_account}/${workflow.input.gh_repo}&lt;/code&gt; is replaced with &lt;code&gt;netflix/conductor&lt;/code&gt;, but the authorization header remains obfuscated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ "headers": { "Authorization": "token ${workflow.secrets.Doug_github}", "Accept": "application/vnd.github.v3.star+json" }, "method": "GET", "readTimeOut": 2000, "uri": "https://api.github.com/repos/netflix/conductor", "connectionTimeOut": 2000}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
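&lt;p&gt;The substitution behavior described above can be illustrated with a toy model in Python (this is &lt;em&gt;not&lt;/em&gt; Conductor's actual implementation, just a sketch of the idea): input references are resolved, while secret references are left untouched in any output.&lt;/p&gt;

```python
import re

# Toy illustration: resolve ${workflow.input.*} references from a dict,
# but leave ${workflow.secrets.*} references as-is so values never leak.
def render(template, inputs):
    def repl(match):
        path = match.group(1)
        if path.startswith("workflow.input."):
            return inputs[path.split(".")[-1]]
        return match.group(0)  # leave secret references untouched
    return re.sub(r"\$\{([^}]+)\}", repl, template)

uri = "https://api.github.com/repos/${workflow.input.gh_account}/${workflow.input.gh_repo}"
auth = "token ${workflow.secrets.Doug_github}"
print(render(uri, {"gh_account": "netflix", "gh_repo": "conductor"}))
# https://api.github.com/repos/netflix/conductor
print(render(auth, {}))
# token ${workflow.secrets.Doug_github}
```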




&lt;h2&gt;
  
  
  Conclusion​
&lt;/h2&gt;

&lt;p&gt;Secrets have been one of the most requested Netflix Conductor features when we speak to developers, so we're excited to announce this launch. We cannot wait to hear how this release makes your workflow development more secure and opens new avenues of development now that these values can be safely stored.&lt;/p&gt;

&lt;p&gt;Give them a try in the &lt;a href="https://play.orkes.io"&gt;Orkes Playground&lt;/a&gt;, and we would love to hear what you think in our &lt;a href="https://join.slack.com/t/orkes-conductor/shared_invite/zt-xyxqyseb-YZ3hwwAgHJH97bsrYRnSZg"&gt;Slack&lt;/a&gt; or &lt;a href="https://discord.com/invite/P6vVt9xKSQ"&gt;Discord&lt;/a&gt; communities.&lt;/p&gt;

</description>
      <category>netflixconductor</category>
      <category>orchestration</category>
      <category>security</category>
      <category>2022</category>
    </item>
  </channel>
</rss>
