<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Christopher Wilcox</title>
    <description>The latest articles on DEV Community by Christopher Wilcox (@crwilcox).</description>
    <link>https://dev.to/crwilcox</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F396181%2F5afc5cea-d8a4-45bf-96ff-d7068c7cfde6.jpeg</url>
      <title>DEV Community: Christopher Wilcox</title>
      <link>https://dev.to/crwilcox</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/crwilcox"/>
    <language>en</language>
    <item>
      <title>Cron-ing in the Cloud: How to use Cloud Scheduler to automate routine tasks</title>
      <dc:creator>Christopher Wilcox</dc:creator>
      <pubDate>Thu, 18 Feb 2021 15:00:00 +0000</pubDate>
      <link>https://dev.to/googlecloud/cron-ing-in-the-cloud-how-to-use-cloud-scheduler-to-automate-routine-tasks-5gph</link>
      <guid>https://dev.to/googlecloud/cron-ing-in-the-cloud-how-to-use-cloud-scheduler-to-automate-routine-tasks-5gph</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XLzZksCE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-18-cron-ing-in-the-cloud/scheduler_functions.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XLzZksCE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-18-cron-ing-in-the-cloud/scheduler_functions.png" alt="Cloud Scheduler triggered Cloud Functions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Last week I wrote about &lt;a href="https://chriswilcox.dev/blog/2021/02/10/How-I-tamed-my-GitHub-notifications.html"&gt;taming my GitHub notifications&lt;/a&gt; and I thought I would follow that post up by talking about automating the automation - or - how I use the cloud as my own personal computer :)&lt;/p&gt;

&lt;p&gt;I currently do not have a desktop style computer. As part of traveling and liking the ability to move about my house as I do my work, I have moved to relying on laptop computers. There is a downside to this though: I don’t really have a place to easily run scheduled tasks as I never know if the computer will be on at that time. Enter &lt;a href="https://cloud.google.com/scheduler"&gt;Cloud Scheduler&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Running scripts on a schedule, or from an external trigger, is a common practice, and Cloud Scheduler makes this possible without my having to keep a computer running at home. Further, the free tier of Cloud Scheduler allows &lt;a href="https://cloud.google.com/scheduler/pricing"&gt;3 jobs for free&lt;/a&gt;, no matter how many times those jobs run; each additional job is USD $0.10 per month. Do note you may also incur costs for excess &lt;a href="https://cloud.google.com/functions/pricing"&gt;Cloud Functions&lt;/a&gt; execution time, database usage, etc.&lt;/p&gt;

&lt;p&gt;Using my script I created for managing my GitHub notifications as motivation, let’s break down how to use Cloud Scheduler and &lt;a href="https://cloud.google.com/functions"&gt;Google Cloud Functions&lt;/a&gt; to routinely execute code. Also, shoutout to my coworkers &lt;a href="https://twitter.com/glasnt"&gt;Katie McLaughlin&lt;/a&gt; and &lt;a href="https://twitter.com/martinomander"&gt;Martin Omander&lt;/a&gt; for their video about this very subject. &lt;a href="https://www.youtube.com/watch?v=XIwbIimM49Y"&gt;Check it out on YouTube&lt;/a&gt; if that style is easier for you. They leverage &lt;a href="https://cloud.google.com/run"&gt;Cloud Run&lt;/a&gt; instead of Cloud Functions, but it is very similar to this post.&lt;/p&gt;

&lt;h1&gt;
  
  
  An overview
&lt;/h1&gt;

&lt;p&gt;So, we already have a bit of code running locally. At a high level, to get this into the cloud we need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Wrap it in an entry point that can be invoked by a Cloud Function.&lt;/li&gt;
&lt;li&gt;Deploy the code to Cloud Functions using &lt;a href="https://cloud.google.com/sdk/docs"&gt;&lt;code&gt;gcloud&lt;/code&gt;&lt;/a&gt;, a CLI for interacting with &lt;a href="https://cloud.google.com/"&gt;Google Cloud&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a Cloud Scheduler job that triggers the Cloud Function on a routine.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Modifying the script to work with Cloud Functions
&lt;/h1&gt;

&lt;p&gt;From my &lt;a href="https://chriswilcox.dev/blog/2021/02/10/How-I-tamed-my-GitHub-notifications.html"&gt;last post&lt;/a&gt;, we have the following code to start from.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import getpass
from github3 import login
import keyring

access_token = keyring.get_password('github', 'notifications')
if access_token is None:
    access_token = getpass.getpass("Please enter your GitHub access token (will be saved to keyring)")
    keyring.set_password('github', 'notifications', access_token)

gh = login(token=access_token)

notifications_to_mark = []
for notification in gh.notifications():
    url_segments = notification.subject['url'].split('/')
    number = url_segments[-1]
    repo = url_segments[-3]
    org = url_segments[-4]

    try:
        if notification.subject['type'] == 'PullRequest':
            pr = gh.repository(org, repo).pull_request(int(number))
            if pr.state == 'closed':
                notifications_to_mark.append(notification)

        elif notification.subject['type'] == 'Issue':
            issue = gh.repository(org, repo).issue(int(number))
            if issue.state == 'closed':
                notifications_to_mark.append(notification)
    except Exception as e:
        print(e)

# Drop the trailing '/pulls/N' or '/issues/N' to identify the repository.
unique_repos = set('/'.join(n.subject['url'].split('/')[:-2]) for n in notifications_to_mark)
print(f"Found {len(notifications_to_mark)} to mark read across {len(unique_repos)} repositories")

input("Press ENTER to mark items read.")

for n in notifications_to_mark:
    print(f"Marking {n.subject['url']}")
    n.mark()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a few things we need to change in this code to make it work as a Cloud Function:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The code is formatted as a runnable script, not a Python module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A Cloud Function cannot prompt for keyring access while running.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As Cloud Functions are served from an HTTP endpoint, they are expected to return a response.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create an entrypoint for Cloud Functions
&lt;/h2&gt;

&lt;p&gt;The next step is to turn this from a script exec’d on the command line into something the Cloud Functions runner can invoke. Cloud Functions expects a Python function, so the simplest thing to do is move the existing code under one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def mark_read(request):
    gh = login(token=access_token)
    notifications_to_mark = []
    for notification in gh.notifications():
        ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Replacing keyring use with a Cloud Functions environment variable
&lt;/h2&gt;

&lt;p&gt;As I initially created this to run on my laptop, I leveraged a package called &lt;code&gt;keyring&lt;/code&gt; to store my access token. However, it requires the user to enter their password to gain access to the secret. Since moving to Cloud Functions makes this non-interactive, I moved the secret to an environment variable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;access_token = keyring.get_password('github', 'notifications')
gh = login(token=access_token)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  After
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;access_token = os.getenv("GITHUB_ACCESS_TOKEN") # 🤔
gh = login(token=access_token)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might be saying to yourself “but wait, that is just a plain text environment variable. Should I really just put that like that in the cloud? That seems bad”.&lt;/p&gt;

&lt;p&gt;And that would be a good instinct. While you could do that, it would be better to use an encrypted secret instead. Luckily this is relatively straightforward with Secret Manager. Let’s add a new function, &lt;code&gt;get_access_token&lt;/code&gt;, to abstract that code away from the notifications processor.&lt;/p&gt;

&lt;p&gt;Later on, this post discusses how to add that secret to &lt;a href="https://cloud.google.com/secret-manager#:~:text=Secret%20Manager%20is%20a%20secure,audit%20secrets%20across%20Google%20Cloud."&gt;Google Cloud Secret Manager&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

from google.cloud import secretmanager

def get_access_token():
    if "GITHUB_ACCESS_TOKEN_LOCATION" in os.environ:
        client = secretmanager.SecretManagerServiceClient()
        name = os.getenv("GITHUB_ACCESS_TOKEN_LOCATION")
        access_token = client.access_secret_version(name=name).payload.data.decode("UTF-8")
    elif "GITHUB_ACCESS_TOKEN" in os.environ:
        # Allow the use of an environment variable. Though it would be better
        # if a Cloud Secret was used.
        access_token = os.getenv("GITHUB_ACCESS_TOKEN")

    return access_token

def mark_read(request):
    gh = login(token=get_access_token())
    ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Add a return value
&lt;/h2&gt;

&lt;p&gt;For a Cloud Function to be considered ‘successful’ it should return content. For this reason, I changed the final print to return that information instead. Whatever is returned becomes the body of the response to the POST request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    return f"Marked {len(notifications_to_mark)} read across {len(unique_repos)} repositories"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
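&lt;p&gt;Since the Python runtime serves Cloud Functions through Flask, the handler can also return a (body, status) tuple when you want an explicit status code. A minimal sketch (the body and count here are illustrative, not the article’s full script):&lt;/p&gt;

```python
def mark_read(request):
    # ... process notifications as before ...
    marked = 3  # illustrative count
    body = f"Marked {marked} notifications read"
    # Flask-style return: response body plus an explicit HTTP status code
    return body, 200

# Simulate an invocation locally; no real HTTP request is needed here.
body, status = mark_read(None)
print(status, body)
```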



&lt;p&gt;Also, print statements are captured by &lt;a href="https://cloud.google.com/logging"&gt;Google Cloud Logging&lt;/a&gt;. Cloud Functions includes simple runtime logging and will gather &lt;code&gt;stdout&lt;/code&gt; and &lt;code&gt;stderr&lt;/code&gt;. So, while a logging library could be used, &lt;code&gt;print&lt;/code&gt; statements are sufficient.&lt;/p&gt;
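&lt;p&gt;If you do want richer entries than bare &lt;code&gt;print&lt;/code&gt; lines, Cloud Logging treats a JSON object written to &lt;code&gt;stdout&lt;/code&gt; as a structured log entry. A small sketch of that idea (the exact field names follow the structured-logging convention and should be verified against the Logging docs):&lt;/p&gt;

```python
import json

def log_structured(message, severity="INFO", **fields):
    # One JSON object per line on stdout becomes one structured log entry.
    entry = {"severity": severity, "message": message}
    entry.update(fields)
    print(json.dumps(entry))

log_structured("Marked notifications read", marked=3, repos=2)
```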

&lt;h2&gt;
  
  
  The End Result
&lt;/h2&gt;

&lt;p&gt;So after these changes, the script is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# requirements.txt
github3.py==1.3.0
google-cloud-secret-manager==2.2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.py
import os

from github3 import login
from google.cloud import secretmanager

def get_access_token():
    if "GITHUB_ACCESS_TOKEN_LOCATION" in os.environ:
        client = secretmanager.SecretManagerServiceClient()
        name = os.getenv("GITHUB_ACCESS_TOKEN_LOCATION")
        access_token = client.access_secret_version(name=name).payload.data.decode(
            "UTF-8"
        )
    elif "GITHUB_ACCESS_TOKEN" in os.environ:
        # Allow the use of an environment variable. Though it would be better
        # if a Cloud Secret was used.
        access_token = os.getenv("GITHUB_ACCESS_TOKEN")

    return access_token

def mark_read(request):
    gh = login(token=get_access_token())
    notifications_to_mark = []
    for notification in gh.notifications():
        # url is of form https://api.github.com/repos/googleapis/nodejs-dialogflow/pulls/264
        # state change indicates pr/issue status change
        url_segments = notification.subject["url"].split("/")
        number = url_segments[-1]
        repo = url_segments[-3]
        org = url_segments[-4]
        try:
            if notification.subject["type"] == "PullRequest":
                pr = gh.repository(org, repo).pull_request(int(number))
                if pr.state == "closed":
                    notifications_to_mark.append(notification)

            elif notification.subject["type"] == "Issue":
                issue = gh.repository(org, repo).issue(int(number))
                if issue.state == "closed":
                    notifications_to_mark.append(notification)
        except Exception as e:
            print(e)

    # Drop the trailing '/pulls/N' or '/issues/N' to identify the repository.
    unique_repos = set(
        "/".join(n.subject["url"].split("/")[:-2]) for n in notifications_to_mark
    )
    print(
        f"Found {len(notifications_to_mark)} to mark read across {len(unique_repos)} repositories"
    )

    for n in notifications_to_mark:
        print(f"Marking {n.subject['url']}")
        n.mark()

    response = f"Marked {len(notifications_to_mark)} read across {len(unique_repos)} repositories"
    print(response)
    return response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Using gcloud to interact with Google Cloud
&lt;/h1&gt;

&lt;p&gt;Now that we have a Cloud Functions-ready Python module, we need to deploy the code to a Cloud Function. While you could use the &lt;a href="http://console.cloud.google.com/"&gt;Cloud Console&lt;/a&gt; to do everything below, I’ll be showing how to use &lt;code&gt;gcloud&lt;/code&gt; to do this.&lt;/p&gt;

&lt;p&gt;You can find instructions for installing &lt;code&gt;gcloud&lt;/code&gt; on a variety of platforms &lt;a href="https://cloud.google.com/sdk/docs/install"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The first thing to do is create a few environment variables to be used by the commands that follow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;REGION=us-central1
PROJECT_ID=my-project
GITHUB_ACCESS_TOKEN=your-access-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, before we move on, it is a good idea to ensure you are logged in and addressing the correct project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud auth login
gcloud config set project $PROJECT_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You don’t need to run this every time before running commands but, especially if you use multiple cloud projects, it is good to ensure you are targeting the right project.&lt;/p&gt;

&lt;h1&gt;
  
  
  Permissions
&lt;/h1&gt;

&lt;p&gt;Whenever deploying anything to the cloud, it is good to start by thinking about access policies. In Google Cloud, we can create service accounts with limited permissions. For this instance, the service account needs the ability to invoke a Cloud Function.&lt;/p&gt;

&lt;p&gt;This will be used later when configuring Cloud Scheduler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud iam service-accounts create gh-notifications-sa \
   --display-name "gh notifications service account"

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
   --member serviceAccount:gh-notifications-sa@${PROJECT_ID}.iam.gserviceaccount.com \
   --role roles/cloudfunctions.invoker

gcloud projects add-iam-policy-binding ${PROJECT_ID} \
   --member serviceAccount:gh-notifications-sa@${PROJECT_ID}.iam.gserviceaccount.com \
   --role roles/secretmanager.secretAccessor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Add a secret to Secret Manager
&lt;/h1&gt;

&lt;p&gt;Earlier a function was added to use a secret rather than a local environment variable.&lt;/p&gt;

&lt;p&gt;Now it’s time to add that secret to Google Cloud.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo -n "s3cr3t" | gcloud secrets create github-access-token --data-file=- --replication-policy=automatic
GITHUB_ACCESS_TOKEN_LOCATION="projects/$PROJECT_ID/secrets/github-access-token/versions/latest"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Deploying your Cloud Function
&lt;/h1&gt;

&lt;p&gt;The next step, now that we have a service account, is to deploy the Cloud Function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud functions deploy scrub-github-notifications \
      --trigger-http \
      --region us-central1 \
      --runtime python39 \
      --entry-point mark_read \
      --service-account gh-notifications-sa@${PROJECT_ID}.iam.gserviceaccount.com \
      --no-allow-unauthenticated \
      --set-env-vars GITHUB_ACCESS_TOKEN_LOCATION=${GITHUB_ACCESS_TOKEN_LOCATION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Breaking this down: we use an HTTP trigger to run a Python 3.9 function in the us-central1 region. The function doesn’t allow unauthenticated access, which prevents callers other than the specified service account from invoking it. The command also sets an environment variable and names the entry point.&lt;/p&gt;

&lt;h1&gt;
  
  
  Creating a Cloud Scheduler job
&lt;/h1&gt;

&lt;p&gt;Next, create the Cloud Scheduler job that invokes the Cloud Function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud scheduler jobs create http scrub-gh-notifications-job \
  --description "Scrub GitHub Notifications hourly working hours" \
  --schedule "0 07-18 * * *" \
  --time-zone "US/Pacific" \
  --uri "https://${REGION}-${PROJECT_ID}.cloudfunctions.net/scrub-github-notifications" \
  --http-method POST \
  --oidc-service-account-email gh-notifications-sa@${PROJECT_ID}.iam.gserviceaccount.com \
  --message-body '{"name": "Triggered by Cloud Scheduler"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The schedule arguments
&lt;/h2&gt;

&lt;p&gt;Let’s start by looking at the arguments related to frequency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  --schedule "0 07-18 * * *"
  --time-zone "US/Pacific"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cloud Scheduler uses the &lt;a href="https://cloud.google.com/scheduler/docs/configuring/cron-job-schedules"&gt;unix-cron&lt;/a&gt; format. The schedule above says “run at the top of every hour from 7AM through 6PM, in the specified time zone”.&lt;/p&gt;
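&lt;p&gt;To make the schedule string concrete, here is a tiny sketch that labels the five unix-cron fields of the expression used above (just a reading aid, not part of the deployment):&lt;/p&gt;

```python
# The five unix-cron fields, left to right.
FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def describe(schedule):
    # Pair each field name with its value in the cron expression.
    return dict(zip(FIELDS, schedule.split()))

print(describe("0 07-18 * * *"))
```

The expression reads: at minute 0 of every hour from 07 through 18, on every day of every month.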

&lt;h2&gt;
  
  
  The trigger
&lt;/h2&gt;

&lt;p&gt;Now, on to what we are triggering:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  --uri "https://${REGION}-${PROJECT_ID}.cloudfunctions.net/scrub-github-notifications" \
  --http-method POST \
  --oidc-service-account-email gh-notifications-sa@${PROJECT_ID}.iam.gserviceaccount.com \
  --message-body '{"name": "Triggered by Cloud Scheduler"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes a POST request to the specified &lt;code&gt;uri&lt;/code&gt;, authenticating as the service account. It also sends a &lt;code&gt;message-body&lt;/code&gt; which, since we don’t require input in this case, simply states what triggered the function.&lt;/p&gt;
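&lt;p&gt;Inside the function, that body arrives as the JSON payload of the request (in the Python runtime, &lt;code&gt;request&lt;/code&gt; is a Flask request, so &lt;code&gt;request.get_json()&lt;/code&gt; would parse it). A sketch of the parsing itself, using only the standard library and the payload from the command above:&lt;/p&gt;

```python
import json

# The message body sent by the scheduler job above.
payload = '{"name": "Triggered by Cloud Scheduler"}'

body = json.loads(payload)
print(body["name"])  # Triggered by Cloud Scheduler
```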

&lt;h1&gt;
  
  
  And just like that, my other computer is the cloud
&lt;/h1&gt;

&lt;p&gt;And just like that, you can now let everyone know that you use the cloud as your own personal computer!&lt;/p&gt;

&lt;p&gt;For some bonus content, did you know you can test your cloud functions locally? You can use the &lt;a href="https://github.com/GoogleCloudPlatform/functions-framework-python"&gt;Python Functions Framework&lt;/a&gt; to run your Cloud Function on your machine and experiment before deploying!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install functions-framework
functions-framework --target mark_read
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a more in-depth look, you can read more in &lt;a href="https://dev.to/googlecloud/portable-cloud-functions-with-the-python-functions-framework-a6a"&gt;this post&lt;/a&gt; by &lt;a href="https://twitter.com/di_codes"&gt;Dustin Ingram&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://chriswilcox.dev/blog/2021/02/18/Cron-ing-in-the-cloud.html"&gt;https://chriswilcox.dev&lt;/a&gt; on 18 February 2021&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How I tamed my GitHub notifications</title>
      <dc:creator>Christopher Wilcox</dc:creator>
      <pubDate>Wed, 10 Feb 2021 15:00:00 +0000</pubDate>
      <link>https://dev.to/crwilcox/how-i-tamed-my-github-notifications-chk</link>
      <guid>https://dev.to/crwilcox/how-i-tamed-my-github-notifications-chk</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQ1jhJj0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/more-to-less-notifications.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZQ1jhJj0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/more-to-less-notifications.png" alt="Less notifications make for a happier developer on GitHub"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the unique things about working on Google Cloud Client Libraries is the sheer volume of repositories I interact with. The Cloud Developer Relations team manages and releases libraries across 7+ languages and 300+ repositories. This creates a firehose of notifications that can quickly get out of hand. In my more focused area, storage and database products, there are 30+ repositories to keep an eye on. While I expect most folks using GitHub don’t have this particular problem, you may still have the core issue I encounter: not all issues are of equal importance; some in fact aren’t important at all.&lt;/p&gt;

&lt;h1&gt;
  
  
  Digging out of the hole
&lt;/h1&gt;

&lt;p&gt;When you have this many stale notifications, you sort of have to throw the idea of reading them out the window. I find the obvious thing to do at this point is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Declare inbox bankruptcy&lt;/li&gt;
&lt;li&gt;Mark everything done.&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;li&gt;All is well now.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While this may seem like a great idea, I find it amounts to ‘kicking the can down the road’ more than solving the underlying problem; you will be right back in this situation in the near future. We have to find a way to stop the notifications from being a noisy signal and turn them into an actionable workstream.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Stop the growth and move to an opt-in model for notifications
&lt;/h2&gt;

&lt;p&gt;By default GitHub ‘watches’ all of the repositories you are added to. As part of my work, I am added to many repositories that I interact with infrequently; for instance, some of these repositories contain core componentry, or helper packages, that are used across a broad set of libraries. By unchecking this box, it will stop my notifications from growing every time a new repository of this sort is added.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bVaFZ73t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/automatic-watching.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bVaFZ73t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/automatic-watching.png" alt="Don't Automatically Watch Notifications"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Identify the noise
&lt;/h2&gt;

&lt;p&gt;With Step 1 complete, I’d expect to see less growth of notifications moving forward. So the next part is figuring out where the existing notifications are coming from. What repositories currently have notifications to review?&lt;/p&gt;

&lt;h3&gt;
  
  
  Github3.py
&lt;/h3&gt;

&lt;p&gt;While the GitHub UI is great for a lot of things, there are some tasks where using the API is just easier. This is one of those times. Luckily, most languages already have a library for interacting with GitHub; in Python I like to use &lt;a href="https://github3py.readthedocs.io/en/master/"&gt;github3.py&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip install github3.py

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, &lt;code&gt;github3.py&lt;/code&gt; requires an access token, which you can configure on &lt;a href="https://github.com/settings/tokens"&gt;GitHub&lt;/a&gt;. Also, I prefer not to store my access token in plain text in my files, so I use &lt;code&gt;keyring&lt;/code&gt; to manage it for me.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from collections import Counter
import getpass
from github3 import login
import keyring

access_token = keyring.get_password('github', 'notifications')
if access_token is None:
    access_token = getpass.getpass("Please enter your GitHub access token (will be saved to keyring)")
    keyring.set_password('github', 'notifications', access_token)

gh = login(token=access_token)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once logged in, notifications can be retrieved from GitHub, and the source repository of each can be tallied and printed to the console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;org_repos = []
for notification in gh.notifications():
    url_segments = notification.subject['url'].split('/')
    repo = url_segments[-3]
    org = url_segments[-4]
    org_repos.append('/'.join((org, repo)))

unique_repos = Counter(org_repos)
print(f"Found {len(unique_repos)} repositories")

for repo in unique_repos:
    print(f"{repo}: {unique_repos[repo]}")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will print something that looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Found 212 repositories
googleapis/python-firestore: 11
googleapis/python-storage: 22
googleapis/python-crc32c: 3
...

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
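&lt;p&gt;Since &lt;code&gt;unique_repos&lt;/code&gt; is a &lt;code&gt;Counter&lt;/code&gt;, its &lt;code&gt;most_common()&lt;/code&gt; method is a handy way to surface the noisiest repositories first. A small sketch with made-up counts:&lt;/p&gt;

```python
from collections import Counter

# Illustrative counts; in the script above this comes from the notifications loop.
unique_repos = Counter({
    "googleapis/python-storage": 22,
    "googleapis/python-firestore": 11,
    "googleapis/python-crc32c": 3,
})

# Print the two repositories with the most notifications.
for repo, count in unique_repos.most_common(2):
    print(f"{repo}: {count}")
```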



&lt;p&gt;Also, if you don’t care about a count breakdown of your notifications, you can always visit &lt;a href="https://github.com/watching"&gt;https://github.com/watching&lt;/a&gt; for a list of the repositories you watch. Note this won’t show things you are notified about via a team, so the list may be incomplete.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Tease out what is really important
&lt;/h2&gt;

&lt;p&gt;Now that we have a list of repositories we are getting alerts from, we can start to separate out the types of projects we are involved in and which ones are making it hard to find the important notifications. You can start to see if there are repositories that are particularly noisy and of low-signal for you personally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filter out high-volume, low-signal notifications
&lt;/h3&gt;

&lt;p&gt;When I think about notifications I think they fall into a few groups:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;things you never need to be aware of&lt;/li&gt;
&lt;li&gt;things you need to sometimes be aware of or involved in&lt;/li&gt;
&lt;li&gt;things you need to be aware of&lt;/li&gt;
&lt;li&gt;things you need to be involved in&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Unwatch repositories where you are unlikely to take an action from the notification.
&lt;/h3&gt;

&lt;p&gt;Let’s start with the things you never need to be involved in. These may be projects far broader than your day-to-day work. They may also be projects you contributed to in the past but have since handed off to someone else. These notifications are making it hard to see the ones that matter. Go to the repository page and set notifications to Ignore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tWO3zTm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/unwatch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tWO3zTm5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/unwatch.png" alt="Unwatch Repositories"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also treat the notifications I only sometimes need to be aware of the same way, provided those projects have an owner. For these, I am more likely to be reached by another maintainer over email or chat, and the notifications aren’t a useful stream of information. Ignoring them as well helps me manage the flow of the things I do need to be aware of.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing notifications for repositories you need to be aware of and involved in
&lt;/h2&gt;

&lt;p&gt;With any luck, by the time you get here you are &lt;a href="https://github.com/watching"&gt;watching&lt;/a&gt; a more reasonable number of repositories. For myself that seems to be in the 20-40 range, but that number may differ for you personally.&lt;/p&gt;

&lt;p&gt;I have never found managing the emails from GitHub to be the best approach for myself, so I use the separate &lt;a href="https://github.com/notifications"&gt;GitHub Notifications page&lt;/a&gt;. This page has some helpful groupings that can help with the flow of things.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E3ALKlu1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/less-issues.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E3ALKlu1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/less-issues.png" alt="Less Issues"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once tamed to a reasonable number of notifications, I found I could treat them much the same way I treat email.&lt;/p&gt;

&lt;h1&gt;
  
  
  Keeping this going
&lt;/h1&gt;

&lt;p&gt;Keeping my notifications under control is still an active effort, but having roughly 100x fewer of them means I am far more likely to try than I previously was. I do have some additional tricks for handling this, though.&lt;/p&gt;

&lt;p&gt;Even for the repositories I should have awareness of, some events are a bit less useful. For instance, I find that PR merges tend not to be interesting to me: merges happen once I, or a teammate, have reviewed and OK’d the change. I find the same to generally be true of closed issues.&lt;/p&gt;

&lt;p&gt;That said, sometimes this rule doesn’t hold, so my workflow gives me a chance to catch these notifications before clearing them.&lt;/p&gt;

&lt;h3&gt;
  
  
  My morning routine
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Run an automated script that marks all merge and closed events as read notifications &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w2dZeXbO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2021-02-10-tamed-github-notifications/script-run.png" alt="Running Script"&gt;
&lt;/li&gt;
&lt;li&gt;Go to &lt;a href="https://github.com/notifications?query=is%3Aread+"&gt;GitHub Notifications, filtered to &lt;code&gt;is:read&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Review these events. While it is seldom the case, some of them turn out to be interesting.&lt;/li&gt;
&lt;li&gt;Once reviewed, select all, and mark done.&lt;/li&gt;
&lt;li&gt;Now, move on to the remaining notifications.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  mark_read.py
&lt;/h3&gt;

&lt;p&gt;This script enumerates notifications for closed issues and merged pull requests and marks them as read, clearing out some of the lower-signal notifications before I review the rest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import getpass

import keyring
from github3 import login

# Fetch the access token from the OS keyring, prompting on first run.
access_token = keyring.get_password('github', 'notifications')
if access_token is None:
    access_token = getpass.getpass(
        "Please enter your GitHub access token (will be saved to keyring): ")
    keyring.set_password('github', 'notifications', access_token)

gh = login(token=access_token)

notifications_to_mark = []
for notification in gh.notifications():
    # Subject URLs look like .../repos/ORG/REPO/(issues|pulls)/NUMBER
    url_segments = notification.subject['url'].split('/')
    number = url_segments[-1]
    repo = url_segments[-3]
    org = url_segments[-4]

    try:
        if notification.subject['type'] == 'PullRequest':
            pr = gh.repository(org, repo).pull_request(int(number))
            if pr.state == 'closed':
                notifications_to_mark.append(notification)
        elif notification.subject['type'] == 'Issue':
            issue = gh.repository(org, repo).issue(int(number))
            if issue.state == 'closed':
                notifications_to_mark.append(notification)
    except Exception as e:
        print(e)

# Drop the trailing type and number segments to count distinct repositories.
unique_repos = set('/'.join(n.subject['url'].split('/')[:-2])
                   for n in notifications_to_mark)
print(f"Found {len(notifications_to_mark)} to mark read across {len(unique_repos)} repositories")

input("Press ENTER to mark items read.")

for n in notifications_to_mark:
    print(f"Marking {n.subject['url']}")
    n.mark()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
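&lt;p&gt;The URL handling in the script is easy to verify in isolation. A notification subject URL looks like &lt;code&gt;https://api.github.com/repos/ORG/REPO/issues/NUMBER&lt;/code&gt;, so splitting on &lt;code&gt;/&lt;/code&gt; and indexing from the end recovers the pieces. A standalone sketch of that indexing, using a hypothetical URL:&lt;/p&gt;

```python
# Standalone sketch of the subject-URL parsing used in mark_read.py.
# The example URL below is hypothetical; only the path layout matters.
def parse_subject_url(url):
    segments = url.split('/')
    number = segments[-1]   # issue or PR number
    repo = segments[-3]     # repository name
    org = segments[-4]      # organization / owner
    return org, repo, number

org, repo, number = parse_subject_url(
    "https://api.github.com/repos/octocat/hello-world/issues/42"
)
print(org, repo, number)  # octocat hello-world 42
```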



&lt;p&gt;While I am constantly evolving my approach to working across a variety of open source projects, I have found this system works well for me, keeping the value of my notifications high and above the noise line.&lt;/p&gt;

&lt;p&gt;Have thoughts on my approach or want to share your approach with me? Feel free to reach out on &lt;a href="https://twitter.com/chriswilcox47"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://chriswilcox.dev/blog/2021/02/10/How-I-tamed-my-GitHub-notifications.html"&gt;https://chriswilcox.dev&lt;/a&gt; on 10 February 2021&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Collecting temperature data using CircuitPython</title>
      <dc:creator>Christopher Wilcox</dc:creator>
      <pubDate>Tue, 17 Nov 2020 18:15:00 +0000</pubDate>
      <link>https://dev.to/crwilcox/collecting-temperature-data-using-circuitpython-3ejo</link>
      <guid>https://dev.to/crwilcox/collecting-temperature-data-using-circuitpython-3ejo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oT4jETbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/microcontrollers.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oT4jETbL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/microcontrollers.jpg" alt="I've got a lovely bunch of microcontrollers"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While I spend my work days thinking about how to make the cloud easier to use and more powerful, I started out pursuing a degree in hardware engineering. I spent most of my time at university writing VHDL, trying to wrangle magnets and electric fields, and contorting my mind into understanding physics. That said, since university my hardware skills have atrophied a great deal; all I have left to prove any of this are some books no one would buy otherwise. So please, no questions about RC circuits.&lt;/p&gt;

&lt;p&gt;Even so, I still enjoy bridging the worlds. Lately that is usually via MicroPython or &lt;a href="https://circuitpython.org/"&gt;CircuitPython&lt;/a&gt; (depending on the hardware). It is approachable and offers a low barrier to entry for solving whatever problem I may have at the time. I like to keep these boards out on my desk to taunt me: “Hey, have anything for us to do?”&lt;/p&gt;

&lt;p&gt;My latest experiment was around temperatures. While where I live, Seattle, has generally stable temperatures, it does get cooler in winter. My basement is conditioned, but I keep the heat vents closed since it is mostly storage and a laundry room. I also have a modest wine collection and wine prefers cooler temperatures. But is it cool enough? Is it too cold? How much does the temperature vary? Can I figure this out with bits I have on hand?&lt;/p&gt;

&lt;p&gt;Out of the boards I have around, the &lt;a href="https://learn.adafruit.com/adafruit-circuit-playground-express"&gt;Circuit Playground Express&lt;/a&gt; seemed like a good option, as it has a temperature sensor included.&lt;/p&gt;

&lt;p&gt;Circuit Playground Express makes it &lt;em&gt;really&lt;/em&gt; easy to read a temperature sensor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time
from adafruit_circuitplayground.express import cpx
while True:
    temperature = cpx.temperature * 1.8 + 32
    print("Temperature F: {}".format(temperature))
    time.sleep(1)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it, really. On a continuous loop, &lt;code&gt;cpx.temperature&lt;/code&gt; is read, converted from Celsius (because America), and printed to the console. If you are connected to a serial monitor you can view the results there. If you aren’t familiar with this, I’d recommend looking into &lt;a href="https://codewith.mu/"&gt;Mu&lt;/a&gt;, as its built-in serial monitor is easy to set up.&lt;/p&gt;
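&lt;p&gt;The conversion in that loop is the usual Celsius-to-Fahrenheit formula, which can be pulled out into a small helper if you want it elsewhere:&lt;/p&gt;

```python
# Celsius to Fahrenheit, the same arithmetic as in the reading loop above.
def c_to_f(celsius):
    return celsius * 1.8 + 32

print(c_to_f(0))    # 32.0
print(c_to_f(100))  # 212.0
```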

&lt;p&gt;So, proof of concept is there, I can use the board to get room temperature data. Though there are a few further things I would like to accomplish for my particular use case.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I am going to want to track temperature over a few days. So, once a second is way too often. Instead, I’ll get the updates every minute.&lt;/li&gt;
&lt;li&gt;It would be nice if the board communicated its state a bit. For instance if an error is encountered, or what temperature the sensor is reporting. So I want to configure the NeoPixel LEDs to display different patterns to indicate different states.&lt;/li&gt;
&lt;li&gt;I don’t want to have to leave the board plugged into a computer. I’d like to be able to leave it alone just powered, and collect the data later.&lt;/li&gt;
&lt;li&gt;While the temperature reported is reasonable, it is a bit higher than my thermostat. I expect the results will need to be adjusted a bit to reflect actual room temperature.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Reporting board status via LEDs
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TodroQHz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/not_write_blink.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TodroQHz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/not_write_blink.gif" alt="Blinky lights on Circuit Playground FTW"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the difficulties of using a microcontroller can be the lack of a console or display. But because the Circuit Playground Express has 10 NeoPixels (and a status LED), there are ways to communicate small bits of information. For this project, it would be good to know if something has gone wrong (an exception) and what temperature is being read.&lt;/p&gt;

&lt;p&gt;As this is used indoors, I don’t expect I would read a temperature under 50F or over 95F. Given that, for each 5F increment, an additional LED could be illuminated on the board.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Do_7snbl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/board_recording.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Do_7snbl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/board_recording.jpg" alt="Recording Temperature Data"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def color_temperature(temp):
    # Use NeoPixels to show the temperature,
    # lighting one more pixel for each 5F above 50F:
    # 50 (0), 55 (1), 60 (2), 65 (3), 70 (4),
    # 75 (5), 80 (6), 85 (7), 90 (8), 95 (9)

    # turn all off first
    cpx.pixels.fill((0, 0, 0))

    temp = int(temp) - 50
    # Integer-divide so range() gets an int; clamp to the 10 pixels.
    leds_on = max(0, min(temp // 5, 10))
    for i in range(leds_on):
        cpx.pixels[i] = color.HEAT_COLORS[i]
    cpx.pixels.show()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
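&lt;p&gt;The temperature-to-pixel arithmetic can be sanity-checked off-device with a plain Python version of the same mapping (a sketch that drops the &lt;code&gt;cpx&lt;/code&gt; calls):&lt;/p&gt;

```python
# Off-device sketch of the NeoPixel count arithmetic:
# one pixel per 5F above 50F, clamped to the board's 10 pixels.
def leds_for_temperature(temp_f):
    steps = (int(temp_f) - 50) // 5
    return max(0, min(steps, 10))

print(leds_for_temperature(50))   # 0
print(leds_for_temperature(72))   # 4
print(leds_for_temperature(105))  # 10 (clamped)
```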



&lt;h2&gt;
  
  
  Configuring a Circuit Playground Express to allow storing data.
&lt;/h2&gt;

&lt;p&gt;In order to use the device without a serial monitor, it needs to be able to store a small amount of data back to the device. There is a complete guide on using storage with Circuit Playground available on &lt;a href="https://learn.adafruit.com/adafruit-circuit-playground-express/circuitpython-storage"&gt;Adafruit&lt;/a&gt;, but at a high level, the built-in toggle switch can be used to determine whether writing is done over USB (for development) or by the code in &lt;code&gt;code.py&lt;/code&gt; (for data capture).&lt;/p&gt;

&lt;p&gt;A small bit of code should be written to &lt;code&gt;boot.py&lt;/code&gt; to initialize the storage state before executing the code in &lt;code&gt;code.py&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Contents of boot.py
import board
import digitalio
import storage

switch = digitalio.DigitalInOut(board.D7)
switch.direction = digitalio.Direction.INPUT
switch.pull = digitalio.Pull.UP

# If the D7 is switched on (towards B button),
# CircuitPython can write to storage
storage.remount("/", switch.value)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once &lt;code&gt;boot.py&lt;/code&gt; is saved to the board, reset the Circuit Playground Express. Now &lt;code&gt;code.py&lt;/code&gt; can write to storage when &lt;code&gt;cpx.switch&lt;/code&gt; is &lt;code&gt;False&lt;/code&gt;. Otherwise, a small bit of code flashes the LEDs to make it clear the program is not storing data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# If Switch is to left, we aren't in write mode, flash yellow LEDs to warn
if cpx.switch:
    cpx.pixels.fill(color.YELLOW)
    cpx.pixels.show()
else:
    with open("/temperatures.txt", "a") as f:
        f.write("{}, {}\n".format(temperature, time_from_start))

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Putting all of the pieces together, the resulting &lt;code&gt;code.py&lt;/code&gt; file is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import time

from adafruit_circuitplayground.express import cpx

# Turn down LED brightness as 1 is *very* bright
cpx.pixels.brightness = 0.02

# Only update NeoPixel state on show()
cpx.pixels.auto_write = False

class color:
    RED = (255, 0, 0)
    YELLOW = (255, 255, 0)
    HEAT_COLORS = [
        (255, 255, 0),
        (255, 255, 0),  # YELLOW
        (255, 150, 0),
        (255, 150, 0),  # YELLOW-ORANGE
        (255, 100, 0),
        (255, 100, 0),  # ORANGE
        (255, 50, 0),
        (255, 50, 0),   # RED-ORANGE
        (255, 0, 0),
        (255, 0, 0),    # RED
    ]

def color_temperature(temp):
    # Use NeoPixels to show the temperature,
    # lighting one more pixel for each 5F above 50F:
    # 50 (0), 55 (1), 60 (2), 65 (3), 70 (4),
    # 75 (5), 80 (6), 85 (7), 90 (8), 95 (9)

    # turn all off first
    cpx.pixels.fill((0, 0, 0))

    temp = int(temp) - 50
    # Integer-divide so range() gets an int; clamp to the 10 pixels.
    leds_on = max(0, min(temp // 5, 10))
    for i in range(leds_on):
        cpx.pixels[i] = color.HEAT_COLORS[i]
    cpx.pixels.show()

time_from_start = 0
while True:
    # Display a red LED when the switch is to the left.
    cpx.red_led = cpx.switch

    temperature = cpx.temperature * 1.8 + 32
    print("Temperature F: {} t: {}".format(temperature, time_from_start))

    # If the switch is to the left, we aren't in write mode:
    # blink yellow LEDs every second to warn.
    if cpx.switch:
        cpx.pixels.fill(color.YELLOW)
        cpx.pixels.show()

        color_temperature(temperature)

        time_from_start += 1
        time.sleep(1)
    else:
        try:
            with open("/temperatures.txt", "a") as f:
                f.write("{}, {}\n".format(temperature, time_from_start))
            color_temperature(temperature)
        except OSError as ex:
            print("Cannot write while connected to pc: {}".format(ex))
            cpx.pixels.fill(color.RED)
            cpx.pixels.show()

        # Wait 60 seconds before recording the next temperature update.
        time_from_start += 60
        time.sleep(60)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the result of running the code above is a CSV of temperatures and time offsets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;74.5162, 0
73.3791, 60
72.8705, 120
72.4798, 180
72.3626, 240

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Calibrating temperatures
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NFJf6Ly8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/calibrating_temperature.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NFJf6Ly8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/calibrating_temperature.jpg" alt="Using a more reliable thermometer to calibrate to"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As stated earlier, the temperature sensor on the board is a bit adrift of the actual temperature. Fortunately I have a well-calibrated thermometer. While it isn’t guaranteed the deviation is linear, I am going to assume it is, since a preliminary look at the data shows the variation in temperature isn’t significant. I left a container of water in the space and measured it just before stopping data collection. At the time I captured this photo, the temperature sensor was reporting 71.97F and my calibrated thermometer showed 64.8F, so the board reads about 7.2F high.&lt;/p&gt;
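&lt;p&gt;Under that linear assumption, the correction is just the difference between the two readings at calibration time:&lt;/p&gt;

```python
# Single-point calibration: treat the sensor error as a constant offset.
sensor_reading = 71.97      # what the board reported (F)
reference_reading = 64.8    # the calibrated thermometer (F)
offset = round(sensor_reading - reference_reading, 1)
print(offset)  # 7.2

def calibrate(reading_f):
    # Shift a raw board reading down by the measured offset.
    return reading_f - offset
```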

&lt;h2&gt;
  
  
  Looking at the data
&lt;/h2&gt;

&lt;p&gt;Processing the collected data is straightforward enough, and can be done mostly with builtin Python libraries and matplotlib. As the data was stored as a CSV, I can use a CSV reader. There is a small amount of processing needed per row, shifting the temperature value after calibration and converting the time delta to a datetime, as that should be easier to reason about than seconds from start of collection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kQgBi6LE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/plotted_temperatures.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kQgBi6LE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/2020-11-17-temperature-data-with-circuitpython/plotted_temperatures.png" alt="Plot showing temperature variance over each day"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import csv
from datetime import datetime, timedelta

import matplotlib.pyplot as plt

start_time = datetime(2020, 11, 12, 17, 32)

times = []
temperatures = []

with open("temperatures.txt") as f:
    reader = csv.reader(f, delimiter=',')
    for temperature, time_offset in reader:
        # Reported temperature of 71.97F was measured as 64.8F,
        # so offset temperatures down by 7.2F
        temperatures.append(float(temperature) - 7.2)
        times.append(start_time + timedelta(seconds=int(time_offset)))

plt.plot(times, temperatures)
plt.ylabel('Temperature')
plt.xlabel('Time')
plt.grid(True)

plt.savefig("temperatures.png")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;So, what did I find out? While my basement is cooler than my main floor, it is a ways off from a wine cellar. No risk of things freezing though. :) I must admit, there are lower tech solutions to problems like this. I could have just left my thermometer in this room for a while, periodically checking in on it, but that wouldn’t have given me an opportunity to explore writing to storage with CircuitPython and Circuit Playground Express. And sometimes, that is what is fun about microcontrollers like this. It’s why I leave them out on my desk; to tempt me into doing silly things.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://chriswilcox.dev/blog/2020/11/17/Collecting-temperature-data-with-CircuitPython.html"&gt;https://chriswilcox.dev&lt;/a&gt; on 17 November 2020&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>circuitpython</category>
      <category>microcontrollers</category>
      <category>programming</category>
    </item>
    <item>
      <title>Diving into GitHub with BigQuery and Python</title>
      <dc:creator>Christopher Wilcox</dc:creator>
      <pubDate>Wed, 02 Sep 2020 17:00:00 +0000</pubDate>
      <link>https://dev.to/googlecloud/diving-into-github-with-bigquery-and-python-3gfn</link>
      <guid>https://dev.to/googlecloud/diving-into-github-with-bigquery-and-python-3gfn</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I9zEcstl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/cloud_github_python_adds_to.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I9zEcstl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://chriswilcox.dev/images/cloud_github_python_adds_to.png" alt="Google Cloud + GitHub + Python = ?"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most of the workplaces I have worked at, Google included, have some kind of performance evaluation system, usually yearly or bi-yearly. And without fail every one of these systems likes numbers. Because a lot of my work is done in the open, I have taken to diving into my GitHub usage to try and gain insights to what I am up to when evaluation time comes around. While the number of commits or lines of code you have committed don’t translate directly to the impact of the work, they do often help to refresh my memory on where I spent time, the work I have done, and the sort of work I focus on to provide value.&lt;/p&gt;

&lt;p&gt;While GitHub has an API you can use directly and there are datasets you can download, such as &lt;a href="https://www.gharchive.org/"&gt;GH Archive&lt;/a&gt;, the data is also &lt;a href="https://console.cloud.google.com/marketplace/details/github/github-repos?filter=solution-type:dataset"&gt;available&lt;/a&gt; on &lt;a href="https://cloud.google.com/bigquery"&gt;Google BigQuery&lt;/a&gt; along with &lt;a href="https://console.cloud.google.com/marketplace/browse?filter=solution-type:dataset"&gt;other public datasets&lt;/a&gt;. This is useful since most live APIs, like GitHub’s, will have rate limiting or quotas, may not be well indexed at the time of query, and may be awkward to do relational querying.&lt;/p&gt;

&lt;p&gt;I like to use the &lt;a href="https://pypi.org/project/google-cloud-bigquery/"&gt;BigQuery Python libraries&lt;/a&gt; to access BigQuery. You can use the online console as well, but I find it helpful to be able to script over the results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install google-cloud-bigquery

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Once the library is installed, interacting with BigQuery and making requests will feel familiar to most Pythonistas. For this work I usually use &lt;a href="https://colab.research.google.com/"&gt;Colab&lt;/a&gt; or a local &lt;a href="https://jupyter.org/"&gt;Jupyter&lt;/a&gt; notebook, where I can run a query and then easily dive in, filter the data locally, and discover more in it.&lt;/p&gt;

&lt;p&gt;If you’d like you can follow along with the &lt;a href="///downloadable-content/diving_into_your_stats_with_GitHub_and_BigQuery.ipynb"&gt;Jupyter Notebook I used to create this post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is worth noting that a Google Cloud project is needed to connect to BigQuery. That said, there are helpful &lt;a href="https://cloud.google.com/bigquery/docs/quickstarts/quickstart-client-libraries"&gt;quickstarts&lt;/a&gt; available to accelerate onboarding. Also, BigQuery is included in the &lt;a href="https://cloud.google.com/free"&gt;Google Cloud free tier&lt;/a&gt;; however, many queries are large and can exhaust the allowance. As of this writing, 1 TB of queries per month is free of charge, and querying around a month of data from the GitHub dataset is ~225 GB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Follow these instructions to create a service account key:
# https://cloud.google.com/bigquery/docs/quickstarts/quickstart-client-libraries
# Then, replace the string below with the path to your service account key

export GOOGLE_APPLICATION_CREDENTIALS='/path/to/your/service-account-key.json'

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring some variables and importing BigQuery
&lt;/h2&gt;

&lt;p&gt;The first thing to do is set a few variables for the rest of the scripts. The BigQuery APIs need to know my &lt;code&gt;GOOGLE_CLOUD_PROJECT_ID&lt;/code&gt; and the GitHub dataset queries will need to know the target user and the range of dates to look at.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GOOGLE_CLOUD_PROJECT_ID = "google-cloud-project-id"
GITHUB_USERNAME = 'crwilcox'
START_DATE = "2020-08-01"
END_DATE = "2020-09-01"

START_DATE_SUFFIX = START_DATE[2:].replace('-', '')
END_DATE_SUFFIX = END_DATE[2:].replace('-', '')

from google.cloud import bigquery

client = bigquery.client.Client(project=GOOGLE_CLOUD_PROJECT_ID)

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
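&lt;p&gt;The suffix strings feed the &lt;code&gt;_TABLE_SUFFIX&lt;/code&gt; filter in the queries below: the daily GitHub Archive tables are named like &lt;code&gt;githubarchive.day.20200801&lt;/code&gt;, and the queries use the wildcard &lt;code&gt;githubarchive.day.20*&lt;/code&gt;, so the suffix is everything after the leading &lt;code&gt;20&lt;/code&gt;. A quick check of that slicing:&lt;/p&gt;

```python
# "2020-08-01" -> drop the leading "20", strip the dashes -> "200801",
# which matches the remainder of the wildcard table name day.20*.
START_DATE = "2020-08-01"
START_DATE_SUFFIX = START_DATE[2:].replace('-', '')
print(START_DATE_SUFFIX)  # 200801
```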



&lt;h2&gt;
  
  
  Digging into the dataset
&lt;/h2&gt;

&lt;p&gt;As a starting point, let’s look at all the data for a particular user. To start, let’s take a look at events by type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Gather all events taken by a particular GitHub user
query = f"""SELECT type, event AS status, COUNT(*) AS count
FROM (
  SELECT type, repo.name as repository, actor.login,
         JSON_EXTRACT(payload, '$.action') AS event, payload, created_at
  FROM `githubarchive.day.20*`
  WHERE actor.login = '{GITHUB_USERNAME}' AND
        created_at BETWEEN TIMESTAMP('{START_DATE}') AND
        TIMESTAMP('{END_DATE}') AND
        _TABLE_SUFFIX BETWEEN '{START_DATE_SUFFIX}' AND
        '{END_DATE_SUFFIX}'
)
GROUP BY type, status ORDER BY type, status;
"""

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  An aside on cost and estimating query size
&lt;/h2&gt;

&lt;p&gt;While 1 TB of querying is included in the free tier, many datasets in BigQuery are large, and it can be easy to exhaust that. The library offers a dry-run mode to estimate the size of a query before executing it, and it is wise to dry-run queries to consider efficiency as well as the cost of execution. For instance, executing this query over the last 2.5 years would process over 3 TB, whereas the last month is around 223 GB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Estimating the bytes processed by the previous query.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
query_job = client.query(
    query,
    job_config=job_config,
)

gb_processed = query_job.total_bytes_processed / (1024*1024*1024)
print("This query will process {} GB.".format(gb_processed))


This query will process 222.74764717370272 GB.

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
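&lt;p&gt;At roughly 223 GB per month of history, the 1 TB monthly free tier only covers a few of these queries, which is exactly why the dry run is worth the extra step:&lt;/p&gt;

```python
# Back-of-envelope check against the free tier quoted above.
free_tier_gb = 1024   # 1 TB free per month, expressed in GB
query_gb = 223        # approximate scan for one month of GitHub data
print(free_tier_gb // query_gb)  # 4 queries before charges start
```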



&lt;h2&gt;
  
  
  Running a query and retrieving a dataframe
&lt;/h2&gt;

&lt;p&gt;Now that the size of this query has been assessed, it can be executed and a &lt;a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html"&gt;Pandas dataframe&lt;/a&gt; can be used to explore the results.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query_job = client.query(query)
result = query_job.result()
result.to_dataframe()

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;type&lt;/th&gt;
&lt;th&gt;status&lt;/th&gt;
&lt;th&gt;count&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;CreateEvent&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;605&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;DeleteEvent&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;255&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;ForkEvent&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;34&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;GollumEvent&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;IssueCommentEvent&lt;/td&gt;
&lt;td&gt;"created"&lt;/td&gt;
&lt;td&gt;678&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;IssuesEvent&lt;/td&gt;
&lt;td&gt;"closed"&lt;/td&gt;
&lt;td&gt;95&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;IssuesEvent&lt;/td&gt;
&lt;td&gt;"opened"&lt;/td&gt;
&lt;td&gt;174&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;IssuesEvent&lt;/td&gt;
&lt;td&gt;"reopened"&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;MemberEvent&lt;/td&gt;
&lt;td&gt;"added"&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;PublicEvent&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;PullRequestEvent&lt;/td&gt;
&lt;td&gt;"closed"&lt;/td&gt;
&lt;td&gt;678&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;PullRequestEvent&lt;/td&gt;
&lt;td&gt;"opened"&lt;/td&gt;
&lt;td&gt;443&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;PullRequestEvent&lt;/td&gt;
&lt;td&gt;"reopened"&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;PullRequestReviewCommentEvent&lt;/td&gt;
&lt;td&gt;"created"&lt;/td&gt;
&lt;td&gt;582&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;2243&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;td&gt;ReleaseEvent&lt;/td&gt;
&lt;td&gt;"published"&lt;/td&gt;
&lt;td&gt;90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;WatchEvent&lt;/td&gt;
&lt;td&gt;"started"&lt;/td&gt;
&lt;td&gt;61&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;There are &lt;a href="https://developer.github.com/webhooks/event-payloads/"&gt;20+ event types&lt;/a&gt; that can be investigated further. Some of these events may be more interesting for a given use case. With the lens of performance reviews, some events, for instance &lt;code&gt;WatchEvent&lt;/code&gt; or &lt;a href="https://developer.github.com/webhooks/event-payloads/#gollum"&gt;&lt;code&gt;GollumEvent&lt;/code&gt;&lt;/a&gt;, may be less interesting. However, other events can be used to answer questions that may be more relevant, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How many releases have been made?&lt;/li&gt;
&lt;li&gt;How many pull requests have been opened?&lt;/li&gt;
&lt;li&gt;How many issues have been created?&lt;/li&gt;
&lt;li&gt;How many issues have been closed?&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Digging into stats by repository
&lt;/h2&gt;

&lt;p&gt;When thinking about how I interact with GitHub I tend to think in terms of organization and repositories, in part due to the fact that I commit for work but also for side-projects.&lt;/p&gt;

&lt;p&gt;By making some small changes to the query, grouping by repository, some new statistics can be derived.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query = f"""
SELECT repository, type, event AS status, COUNT(*) AS count
FROM (
  SELECT type, repo.name as repository, actor.login,
         JSON_EXTRACT(payload, '$.action') AS event, payload, created_at
  FROM `githubarchive.day.20*`
  WHERE actor.login = '{GITHUB_USERNAME}' AND
        created_at BETWEEN TIMESTAMP('{START_DATE}') AND
        TIMESTAMP('{END_DATE}') AND
        _TABLE_SUFFIX BETWEEN '{START_DATE_SUFFIX}' AND
        '{END_DATE_SUFFIX}'
)
GROUP BY repository, type, status ORDER BY repository, type, status;
"""

query_job = client.query(query)
results = [i for i in query_job.result()]

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;While the query above could be made more precise, I find it easier to separate the data once it is in Python. Also, notice that Pandas isn’t being used here; instead the result is enumerated and used as plain Python objects.&lt;/p&gt;

&lt;p&gt;From here higher-level information can be collected. For instance, how many releases have been published by the user, or how many pull requests have been created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Releases made
count = [int(row.count) for row in results
         if row.type == 'ReleaseEvent']
print(f"{sum(count)} Releases across {len(count)} repos")

# PRs Made
count = [int(row.count) for row in results
         if row.type == 'PullRequestEvent' and
         row.status == "\"opened\""]
print(f"{sum(count)} PRs opened across {len(count)} repos")

# PR Comments Left
count = [int(row.count) for row in results
         if row.type == 'PullRequestReviewCommentEvent']
print(f"{sum(count)} PR comments across {len(count)} repos")

# Issues Created
count = [int(row.count) for row in results
         if row.type == 'IssuesEvent' and
         row.status == "\"opened\""]
print(f"{sum(count)} issues opened across {len(count)} repos")

# Issues Closed
count = [int(row.count) for row in results
         if row.type == 'IssuesEvent' and
         row.status == "\"closed\""]
print(f"{sum(count)} issues closed across {len(count)} repos")

# Issue Comments
count = [int(row.count) for row in results
         if row.type == 'IssueCommentEvent']
print(f"{sum(count)} issue comments across {len(count)} repos")

# Push Events
count = [int(row.count) for row in results
         if row.type == 'PushEvent']
print(f"{sum(count)} pushes across {len(count)} repos")


0 Releases across 0 repos
3 PRs opened across 3 repos
61 PR comments across 8 repos
6 issues opened across 3 repos
2 issues closed across 1 repos
17 issue comments across 9 repos
78 pushes across 13 repos

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So there are a lot of different event types, each with a payload that can be explored further.&lt;/p&gt;

&lt;h2&gt;
  
  
  For something different…
&lt;/h2&gt;

&lt;p&gt;Of course, there are some less productive and more entertaining things we can search for. For instance, how many times have I committed a linting fix…&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# How often do I forget to run the tests before committing?
query = f"""
  SELECT type, repo.name as repository, JSON_EXTRACT(payload, '$.commits') as commits,
        actor.login, created_at
  FROM `githubarchive.day.20*`
  WHERE actor.login = '{GITHUB_USERNAME}' AND type = "PushEvent"
        AND created_at BETWEEN TIMESTAMP('{START_DATE}') AND
        TIMESTAMP('{END_DATE}') AND
        _TABLE_SUFFIX BETWEEN '{START_DATE_SUFFIX}' AND
        '{END_DATE_SUFFIX}'
"""

query_job = client.query(query)
result = query_job.result()
df = result.to_dataframe()
df

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.dataframe tbody tr th:only-of-type {
    vertical-align: middle;
}

.dataframe tbody tr th {
    vertical-align: top;
}

.dataframe thead th {
    text-align: right;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;type&lt;/th&gt;
&lt;th&gt;repository&lt;/th&gt;
&lt;th&gt;commits&lt;/th&gt;
&lt;th&gt;login&lt;/th&gt;
&lt;th&gt;created_at&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;googleapis/python-firestore&lt;/td&gt;
&lt;td&gt;[{"sha":"91d6580e2903ab55798d66bc53541faa86ca7...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-13 16:53:15+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;googleapis/python-firestore&lt;/td&gt;
&lt;td&gt;[{"sha":"f3bedc1efae4430c6853581fafef06d613548...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-13 16:53:18+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;crwilcox/python-firestore&lt;/td&gt;
&lt;td&gt;[{"sha":"cdec8ec0411c184868a2980cdf0c94470c936...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-06 02:58:25+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;crwilcox/python-firestore&lt;/td&gt;
&lt;td&gt;[{"sha":"afff842a3356cbe5b0342be57341c12b2d601...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-06 05:55:58+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;crwilcox/python-firestore&lt;/td&gt;
&lt;td&gt;[{"sha":"c93a077d6d00bc6e3c5070a773add309b0439...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-06 05:57:55+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;73&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;crwilcox/python-firestore&lt;/td&gt;
&lt;td&gt;[{"sha":"b902bac30ad17bbc02d51d1b03494e089ca08...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-04 17:34:22+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;74&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;googleapis/google-auth-library-python&lt;/td&gt;
&lt;td&gt;[{"sha":"e963b33cee8c93994c640154d5b965c4e3ac8...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-07 21:10:53+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;75&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;GoogleCloudPlatform/python-docs-samples&lt;/td&gt;
&lt;td&gt;[{"sha":"86dbbb504f63149f7d393796b2530565c285e...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-12 17:28:08+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;76&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;googleapis/python-firestore&lt;/td&gt;
&lt;td&gt;[{"sha":"7122f24d0049ecad4e71cbac4bcb326eb8dd4...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-20 19:36:16+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;77&lt;/td&gt;
&lt;td&gt;PushEvent&lt;/td&gt;
&lt;td&gt;crwilcox/exposure-notifications-verification-s...&lt;/td&gt;
&lt;td&gt;[{"sha":"f087f323d0558436dc849aab80168abb11377...&lt;/td&gt;
&lt;td&gt;crwilcox&lt;/td&gt;
&lt;td&gt;2020-08-05 19:21:51+00:00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;78 rows × 5 columns&lt;/p&gt;

&lt;p&gt;Looking at the first result, the shape of the JSON data can be better understood. There is a message field that could be queried against.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df["commits"][0]


'[{"sha":"ce97f5e939bcdca1c9c46f472f41ead04ac6b2fe","author":{"name":"Chris Wilcox","email":"1a61e7a0041d068722f1c352424109b22f854ce0@google.com"},"message":"fix: lint","distinct":true,"url":"https://api.github.com/repos/crwilcox/python-firestore/commits/ce97f5e939bcdca1c9c46f472f41ead04ac6b2fe"}]'


len(df[df['commits'].str.contains("lint")])


11

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So, about 14% (11/78) of my commits last month were lint fixes. Seems someone could be a bit better about running the test suite &lt;em&gt;first&lt;/em&gt; 😅&lt;/p&gt;
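&lt;p&gt;The same tally doesn’t need Pandas at all. A quick sketch of the equivalent pure-Python count, where the two sample payloads are made up for illustration:&lt;/p&gt;

```python
# Pure-Python equivalent of len(df[df['commits'].str.contains("lint")]),
# assuming `commit_payloads` holds the raw commit JSON strings from the
# query. These two sample payloads are illustrative, not real data.
commit_payloads = [
    '[{"sha":"abc123","message":"fix: lint"}]',
    '[{"sha":"def456","message":"feat: add retry support"}]',
]

lint_count = sum("lint" in payload for payload in commit_payloads)
share = lint_count / len(commit_payloads)
print(f"{lint_count}/{len(commit_payloads)} pushes ({share:.0%}) mention lint")
# → 1/2 pushes (50%) mention lint
```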

&lt;h2&gt;
  
  
  Back to something slightly more productive
&lt;/h2&gt;

&lt;p&gt;For some of the projects I work on, &lt;a href="https://www.conventionalcommits.org/en/v1.0.0/"&gt;Conventional Commits&lt;/a&gt; syntax is used. This can provide an idea of the type of work I am doing. For now, non-conventional commits aren’t classified further; they are simply grouped together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
from collections import Counter

commit_types = Counter()

types = [
  "fix", "feat", "chore", "docs", "style",
  "refactor", "perf", "test", "revert"
]
for i in range(len(df)):
  commits = df.loc[i, "commits"]
  json_commits = json.loads(commits)
  for commit in json_commits:
    # If the message starts with a known type, count it as that type;
    # the for/else falls through to "non-conventional" otherwise.
    for t in types:
      if commit['message'].startswith(t):
        commit_types[t] += 1
        break
    else:
      commit_types["non-conventional"] += 1

commit_types


Counter({'chore': 21,
         'docs': 3,
         'feat': 14,
         'fix': 26,
         'non-conventional': 107,
         'refactor': 8,
         'test': 2})

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It seems that most of my commits are fixes, a decent number are chores, and next is feature implementation. Unfortunately, a sizable number of commits for the period aren’t conventional commits, though it is likely safe to assume the trend is similar for those.&lt;/p&gt;
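&lt;p&gt;To put those counts in proportion, the Counter can be turned into percentages. A small sketch, seeded with the numbers from the run above:&lt;/p&gt;

```python
from collections import Counter

# Commit-type counts taken from the run above.
commit_types = Counter({'fix': 26, 'chore': 21, 'feat': 14,
                        'refactor': 8, 'docs': 3, 'test': 2,
                        'non-conventional': 107})

total = sum(commit_types.values())
for kind, count in commit_types.most_common():
    # most_common() yields (type, count) pairs, highest count first.
    print(f"{kind:>16}: {count:3d} ({count / total:.1%})")
```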

&lt;h2&gt;
  
  
  What uses can you discover for this data?
&lt;/h2&gt;

&lt;p&gt;I don’t think I can hope to scratch the surface of what you could use this data for, though I find it enlightening to see where I spend my time on GitHub and how my time in different repositories breaks down. Here are a few more thoughts on how you might use this dataset.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get all GitHub issues for a repository.&lt;/li&gt;
&lt;li&gt;Get all GitHub issues created by you.&lt;/li&gt;
&lt;li&gt;Get all Issue comments.&lt;/li&gt;
&lt;li&gt;Get all PR comments.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Queries for these are below. Feel free to &lt;a href="https://twitter.com/chriswilcox47"&gt;reach out&lt;/a&gt; if you have thoughts on other useful queries you think should be included.&lt;/p&gt;

&lt;h4&gt;
  
  
  Get all GitHub issues for a repository.
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GITHUB_ORGANIZATION = 'googleapis' #@param {type:"string"}
GITHUB_REPOSITORY = 'python-%' #@param {type:"string"}

# Get all GitHub issues in a repository. In the example, a wildcard is used
# to get all issues 

query = f"""
  SELECT type, repo.name as repository, JSON_EXTRACT(payload, '$.pull_request.title') as title, actor.login, 
         JSON_EXTRACT(payload, '$.action') AS event, JSON_EXTRACT(payload, '$.pull_request.html_url') as url, created_at
  FROM `githubarchive.day.20*`
  WHERE repo.name LIKE '{GITHUB_ORGANIZATION}/{GITHUB_REPOSITORY}' AND type = "PullRequestEvent"
        AND created_at BETWEEN TIMESTAMP('{START_DATE}') AND
        TIMESTAMP('{END_DATE}') AND
        _TABLE_SUFFIX BETWEEN '{START_DATE_SUFFIX}' AND
        '{END_DATE_SUFFIX}'
"""

query_job = client.query(query)
result = query_job.result()
result.to_dataframe()

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Get all GitHub issues created by you.
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get all GitHub issues by this login
query = f"""
  SELECT type, repo.name as repository, JSON_EXTRACT(payload, '$.issue.title') as title, actor.login,
         JSON_EXTRACT(payload, '$.action') AS event, JSON_EXTRACT(payload, '$.issue.html_url') as url, created_at
  FROM `githubarchive.day.20*`
  WHERE actor.login = '{GITHUB_USERNAME}' AND type = "IssuesEvent"
        AND created_at BETWEEN TIMESTAMP('{START_DATE}') AND
        TIMESTAMP('{END_DATE}') AND
        _TABLE_SUFFIX BETWEEN '{START_DATE_SUFFIX}' AND
        '{END_DATE_SUFFIX}'
"""

query_job = client.query(query)
result = query_job.result()
result.to_dataframe()

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Get all Issue comments.
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get all issue comments
query = f"""
  SELECT type, repo.name as repository, actor.login,
       JSON_EXTRACT(payload, '$.action') AS event, JSON_EXTRACT(payload, '$.issue.html_url') as url, created_at
  FROM `githubarchive.day.20*`
  WHERE actor.login = '{GITHUB_USERNAME}' AND type = "IssueCommentEvent"
      AND created_at BETWEEN TIMESTAMP('{START_DATE}') AND
        TIMESTAMP('{END_DATE}') AND
        _TABLE_SUFFIX BETWEEN '{START_DATE_SUFFIX}' AND
        '{END_DATE_SUFFIX}'
"""
query_job = client.query(query)
result = query_job.result()
result.to_dataframe()

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Get all PR comments.
&lt;/h4&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get all PR comments created by this login
query = f"""
  SELECT type, repo.name as repository, actor.login,
         JSON_EXTRACT(payload, '$.action') AS event, JSON_EXTRACT(payload, '$.comment.html_url') as url, created_at
  FROM `githubarchive.day.20*`
  WHERE actor.login = '{GITHUB_USERNAME}' AND type = "PullRequestReviewCommentEvent"
        AND created_at BETWEEN TIMESTAMP('{START_DATE}') AND
        TIMESTAMP('{END_DATE}') AND
        _TABLE_SUFFIX BETWEEN '{START_DATE_SUFFIX}' AND
        '{END_DATE_SUFFIX}'
"""

query_job = client.query(query)
result = query_job.result()
result.to_dataframe()

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://chriswilcox.dev/blog/2020/09/02/Diving_into_GitHub_with_BigQuery_and_Python.html"&gt;https://chriswilcox.dev&lt;/a&gt; on 02 September 2020&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Look at the Open Source Exposure Notification Reference Server</title>
      <dc:creator>Christopher Wilcox</dc:creator>
      <pubDate>Wed, 10 Jun 2020 16:00:00 +0000</pubDate>
      <link>https://dev.to/googlecloud/a-look-at-the-open-source-exposure-notification-reference-server-2pbo</link>
      <guid>https://dev.to/googlecloud/a-look-at-the-open-source-exposure-notification-reference-server-2pbo</guid>
      <description>&lt;p&gt;In April, Google and Apple &lt;a href="https://blog.google/inside-google/company-announcements/apple-and-google-partner-covid-19-contact-tracing-technology/"&gt;announced&lt;/a&gt; a joint effort to create APIs that enable the use of Bluetooth Low Energy (BLE) technology to assist in reducing the spread of the virus that causes COVID-19. As part of this broader effort &lt;a href="https://www.google.com/covid19/exposurenotifications/"&gt;multiple resources&lt;/a&gt; have been made available to assist healthcare authorities to act swiftly. For instance, Google has released a reference Android application, additional terms of services for utilizing the exposure notifications API, specifications for cryptographic approach, and more. I have, &lt;a href="https://github.com/google/exposure-notifications-server/graphs/contributors"&gt;along with many others at Google&lt;/a&gt;, been creating an open source reference implementation of an Exposure Notifications server and wanted to elaborate on how this works.&lt;/p&gt;

&lt;p&gt;The reference server, which works with Android and iOS clients, shows implementers how to author and deploy a backend to pair with mobile applications leveraging the newly added BLE interface. The &lt;a href="https://github.com/google/exposure-notifications-server"&gt;reference server source code&lt;/a&gt; is available on GitHub alongside the existing &lt;a href="https://github.com/google/exposure-notifications-android"&gt;reference design for an Android app&lt;/a&gt;, both licensed under the Apache 2.0 license.&lt;/p&gt;

&lt;p&gt;The reference server implementation consists of multiple components which together can be used to accept, validate, and store temporary exposure keys (TEKs) from verified mobile devices. It periodically generates and signs incremental files that will later be downloaded by clients to perform an on-device key matching algorithm which determines if two devices were in close proximity. The server components are stateless, so that they can scale independently based on demand. &lt;/p&gt;

&lt;p&gt;The repository also contains a &lt;a href="https://github.com/google/exposure-notifications-server/blob/master/docs/deploying.md"&gt;set of Terraform configurations&lt;/a&gt; for easier deployment. While we have been using Google Cloud services, the reference server is designed to be platform-agnostic by using Kubernetes natively or in conjunction with Anthos, so it can be deployed on any cloud provider or on-premises infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Overview of the Service
&lt;/h2&gt;

&lt;p&gt;Taking a closer look at the implementation, there are a few high-level components that make up the reference server. Each component is a Google Cloud Run container and data is stored in a PostgreSQL database hosted on Google Cloud SQL. To walk through the components, I will group things by user interaction, starting with the voluntary sharing of temporary exposure keys (TEKs).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V8TpZ5O4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://google.github.io/exposure-notifications-server/images/google_cloud_run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V8TpZ5O4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://google.github.io/exposure-notifications-server/images/google_cloud_run.png" alt="Exposure Notification Server Diagram"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Architecture Diagram of the Exposure Notification Server&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Temporary Exposure Key Acceptance
&lt;/h3&gt;

&lt;p&gt;The primary job of the Exposure Notification server is to accept the TEKs of positively diagnosed users from mobile devices, validating those keys originating from the mobile app via device attestation APIs, and storing those keys in the database. When a user of the mobile app is tested and informed they have a positive diagnosis, they can optionally share their exposure keys via the app. When this is done, the server accepts the user’s TEKs and stores them for a short time so that other devices can download them and confirm if they have interacted with these keys. &lt;/p&gt;

&lt;h3&gt;
  
  
  Generating Batches of Keys for Download by Mobile Devices
&lt;/h3&gt;

&lt;p&gt;The number of users downloading TEKs will exceed that of those uploading keys. While not every user will need to upload TEKs, every user of the app will need to receive the TEKs that have been voluntarily uploaded. As every user will download the complete set of TEKs, we can further optimize this flow to better scale.&lt;/p&gt;

&lt;p&gt;Rather than frequently querying the database, periodically the server generates incremental files for download by client devices for performing the key matching algorithm that is run on the mobile device. The incremental files must be digitally signed with a private key so they can be verified to be from the server. The corresponding public key is pushed to a mobile device separately to be used for this verification.&lt;/p&gt;

&lt;p&gt;The reference design uses a CDN for key distribution, backed by blob storage so it scales better. Serving downloads from a CDN rather than via database queries greatly reduces the load on the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cleanup of Older Temporary Exposure Keys and Batches
&lt;/h3&gt;

&lt;p&gt;The final necessary function of the reference server is to clean up stale data. As the system exists to help inform users of possible exposure to the virus that causes COVID-19, it isn’t necessary to maintain this data for a long time. After 14 days the keys and batches are cleaned up. This serves a few purposes.&lt;/p&gt;

&lt;p&gt;It ensures that, even though the keys held are not personally identifiable, the server persists the minimum required amount of information for the purposes of this service.&lt;/p&gt;

&lt;p&gt;It helps to control the overall size of the data. This should assist in keeping query performance and the required storage fairly consistent over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional Components and Enhancements
&lt;/h3&gt;

&lt;p&gt;Our goal with the reference server was to provide a starting point for health authorities to build from. While the reference server is, itself, a complete implementation, care was taken to make it easy to extend. In fact, there already exist additional providers for many components. For instance, if you are unable to use Google Cloud services, such as Google Cloud Storage, &lt;a href="https://github.com/google/exposure-notifications-server/blob/master/internal/storage/filesystem_storage.go"&gt;additional implementations of a blob storage interface are provided&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Verifying a Positive Diagnosis
&lt;/h4&gt;

&lt;p&gt;The current exposure notification server publish protocol doesn’t authenticate requests. To ensure the request came from an individual that has been exposed, a verification server should be used to certify that the diagnosis is from a public health authority in the jurisdiction. While we haven’t published a reference verification server, a design and protocol can be found in the &lt;a href="https://github.com/google/exposure-notifications-server/blob/master/docs/design/verification_protocol.md"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deploying Secret Management
&lt;/h4&gt;

&lt;p&gt;While the TEKs are anonymised, secrets are still required to operate an exposure service, such as to control database access, provide authorization credentials, and manage private keys for signing download content.&lt;/p&gt;

&lt;p&gt;Using a secret manager isn’t required but is strongly recommended. By default we leverage Google Cloud Secret Manager; the reference server also includes implementations for additional &lt;a href="https://github.com/google/exposure-notifications-server/tree/master/internal/secrets"&gt;secret management systems&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Federation
&lt;/h4&gt;

&lt;p&gt;One of the additional components we have invested time into is the concept of federation. It is quite possible that neighboring healthcare authorities may wish to share the TEKs they collect from their users. &lt;/p&gt;

&lt;p&gt;For example, imagine a set of neighboring states, provinces, or countries where travel between them is common. Sharing TEKs gives users a better understanding of their interactions. While this isn’t necessary in an environment where people do not cross jurisdictions, some amount of cross-jurisdiction travel is unavoidable for many.&lt;/p&gt;

&lt;h2&gt;
  
  
  Coming Together as a Global Community
&lt;/h2&gt;

&lt;p&gt;There has never been a more important time to come together as a global community. That's why we're making this privacy-preserving server implementation available to health authorities, governments, auditors, and researchers. In publishing this open source code, our goal is to enable developers to leverage this reference implementation to get started quickly and slow the spread of COVID-19. Work is still ongoing and additional functionality is being added. For the most up to date information you can &lt;a href="https://github.com/google/exposure-notifications-server/"&gt;follow the project&lt;/a&gt; and read the &lt;a href="https://google.github.io/exposure-notifications-server/"&gt;reference documentation&lt;/a&gt; as this effort continues to develop. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://chriswilcox.dev"&gt;https://chriswilcox.dev&lt;/a&gt; on 10 June 2020.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
