<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Faethm AI</title>
    <description>The latest articles on DEV Community by Faethm AI (@faethm).</description>
    <link>https://dev.to/faethm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F3018%2Fc11688db-1d8a-4aa4-a71d-cca328c3682f.png</url>
      <title>DEV Community: Faethm AI</title>
      <link>https://dev.to/faethm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/faethm"/>
    <language>en</language>
    <item>
      <title>Improving Jupyter notebook code reviews with jupydiff</title>
      <dc:creator>Mikhail Thornhill</dc:creator>
      <pubDate>Wed, 14 Oct 2020 06:15:49 +0000</pubDate>
      <link>https://dev.to/faethm/improving-jupyter-notebook-code-reviews-with-jupydiff-2gce</link>
      <guid>https://dev.to/faethm/improving-jupyter-notebook-code-reviews-with-jupydiff-2gce</guid>
      <description>&lt;p&gt;As an information technology graduate looking to kickstart my career, I spent the first half of 2020 seeking opportunities to expand my skills and gain relevant practical experience.&lt;/p&gt;

&lt;p&gt;After a few twists and turns, I started my internship as an engineer at Faethm with one big goal:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How can we make code reviews easier for our data science team?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Faethm is an AI platform built on the work of data scientists, so this is an important problem for them to solve. Today, Faethm’s data science team works primarily with Jupyter notebooks, managed across various internal GitHub repositories.&lt;/p&gt;

&lt;h2&gt;The problem with Jupyter notebooks&lt;/h2&gt;

&lt;p&gt;Jupyter notebooks are a popular productivity tool among data scientists, and for good reason.&lt;/p&gt;

&lt;p&gt;You can execute data science workflows cell by cell, output tables and charts, and keep documentation inline. Despite their popularity, managing changes to Jupyter notebooks with version control systems like Git is cumbersome.&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="https://github.com/jupyter/nbdime" rel="noopener noreferrer"&gt;nbdime&lt;/a&gt; help a little. nbdime allows notebook users to highlight changes made between notebook versions on the command line or even within a Jupyter instance.&lt;/p&gt;

&lt;p&gt;However, GitHub's built-in source code tools are not designed for Jupyter notebooks. A notebook captures code, outputs and metadata as a JSON document. When a data scientist executes a modified notebook, the JSON changes at the cell level to reflect the new code, the updated metadata and the newly captured outputs.&lt;/p&gt;
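&lt;p&gt;To make the problem concrete, here is a minimal sketch of the JSON behind a &lt;code&gt;.ipynb&lt;/code&gt; file (the field names follow the Jupyter notebook format; the cell contents are invented for illustration). Because re-running a notebook rewrites execution counts and outputs, a plain line-based diff of this JSON is dominated by noise:&lt;/p&gt;

```python
import json

# A minimal notebook document (nbformat 4). Field names follow the Jupyter
# notebook format; the cell contents are invented for illustration.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {
            "cell_type": "code",
            "execution_count": 1,  # rewritten on every re-run
            "metadata": {},
            "source": ["x = 1\n", "x + 1"],
            "outputs": [  # re-captured on every re-run
                {
                    "output_type": "execute_result",
                    "execution_count": 1,
                    "data": {"text/plain": ["2"]},
                    "metadata": {},
                }
            ],
        }
    ],
}

# Even this one-cell notebook serialises to dozens of JSON lines, most of
# which are structure and metadata rather than code.
print(len(json.dumps(notebook, indent=1).splitlines()))
```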

&lt;p&gt;Solving this problem became the essence of my internship.&lt;/p&gt;

&lt;p&gt;Since many of these technologies were new to me, I knew it would be a challenge. With a combination of self-learning, persistence and support from Faethm’s engineers, I’m happy to present this solution.&lt;/p&gt;

&lt;h2&gt;Introducing jupydiff, a Docker action for GitHub Actions&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0bfpolt0f9xkuzn9u55b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F0bfpolt0f9xkuzn9u55b.png" alt="jupydiff on GitHub"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;jupydiff&lt;/strong&gt; is a GitHub Action that allows data scientists to quickly compare changes made to Jupyter notebooks in GitHub repositories.&lt;/p&gt;

&lt;p&gt;It works with regular commits and pull requests. When a change is made, jupydiff computes the code additions and deletions within each notebook, and summarises these as a comment on the associated commit or pull request.&lt;/p&gt;

&lt;p&gt;jupydiff helps you streamline data science code reviews.&lt;/p&gt;

&lt;p&gt;Without jupydiff, computing the exact code difference between two Jupyter notebooks requires a reviewer to clone the repository, download and install nbdime and then run &lt;code&gt;nbdime diff&lt;/code&gt; on the command line. Alternatively, reading the regular diff in a code editor, version control tool or on GitHub itself involves interpreting lines of the underlying JSON notebook structure.&lt;/p&gt;
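&lt;p&gt;The idea behind a cell-level comparison can be sketched in a few lines of Python: parse both notebook versions, pull out only the source of the code cells, and diff that, ignoring outputs and metadata. (This is an illustration of the approach, not jupydiff’s actual implementation.)&lt;/p&gt;

```python
import difflib

def code_cells(nb):
    """Return the source text of each code cell in a parsed notebook."""
    return ["".join(c["source"]) for c in nb["cells"] if c["cell_type"] == "code"]

# Two invented notebook versions: outputs and metadata are omitted because
# they never enter the comparison.
old = {"cells": [{"cell_type": "code", "source": ["total = sum(data)\n"]}]}
new = {"cells": [{"cell_type": "code",
                  "source": ["total = sum(data)\n", "mean = total / len(data)\n"]}]}

diff = difflib.unified_diff(
    "".join(code_cells(old)).splitlines(),
    "".join(code_cells(new)).splitlines(),
    lineterm="",
)
additions = [line for line in diff if line.startswith("+") and not line.startswith("+++")]
print(additions)  # ['+mean = total / len(data)']
```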

&lt;h2&gt;How to version control Jupyter notebooks on GitHub with jupydiff&lt;/h2&gt;

&lt;p&gt;Setting up jupydiff to work with your Jupyter notebook project is simple. Since jupydiff is a GitHub Action, it works with both public and private repositories.&lt;/p&gt;

&lt;p&gt;You can read all the details about configuring jupydiff in the &lt;a href="https://github.com/Faethm-ai/jupydiff" rel="noopener noreferrer"&gt;jupydiff repository on GitHub&lt;/a&gt;, but I’ll cover the essentials here.&lt;/p&gt;

&lt;p&gt;You’ll need to create a new GitHub Action workflow in your repository at &lt;code&gt;/.github/workflows/jupydiff.yml&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;jupydiff&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;pull_request&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fetch-depth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Faethm-ai/jupydiff@v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GitHub Actions runs jupydiff on your repository for each commit pushed or pull request opened. jupydiff computes the changes made with the latest commit, and leaves a comment on the commit or pull request highlighting the differences in the code.&lt;/p&gt;

&lt;p&gt;With no more JSON mess, data scientists are free to continue with their code review right on GitHub.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You’ve been reading a post from the Faethm AI engineering blog. We’re hiring, too! If you share our passion for the future of work and want to pioneer world-leading data science and engineering projects, we’d love to hear from you. See our current openings: &lt;a href="https://faethm.ai/careers" rel="noopener noreferrer"&gt;https://faethm.ai/careers&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>docker</category>
      <category>devops</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Scaling Jupyter notebooks across the world with AWS and Papermill</title>
      <dc:creator>Blair Hudson</dc:creator>
      <pubDate>Wed, 16 Sep 2020 02:40:06 +0000</pubDate>
      <link>https://dev.to/faethm/scaling-jupyter-notebooks-across-the-world-with-aws-and-papermill-41ic</link>
      <guid>https://dev.to/faethm/scaling-jupyter-notebooks-across-the-world-with-aws-and-papermill-41ic</guid>
      <description>&lt;p&gt;As a data scientist, one of the most exciting things to me about Faethm is that data science is at the heart of our products.&lt;/p&gt;

&lt;p&gt;As the head of our data engineering team, it's my responsibility to ensure our data science can scale to meet the needs of our rapidly growing and global customer base.&lt;/p&gt;

&lt;p&gt;In this article, I'm going to share some of the most interesting parts of our approach to scaling data science products, and a few of the unique challenges that we have to address.&lt;/p&gt;

&lt;h2&gt;Faethm is data science for the evolution of work&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvc2a027v0g8qlh6jlry.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxvc2a027v0g8qlh6jlry.jpg" alt="Faethm's platform"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we delve into our approach, it's important to understand a few things about Faethm and what we do.&lt;/p&gt;

&lt;p&gt;Our customers depend on us to understand the future of work, and the impacts that technology and shifts in work patterns have on their most critical asset: their people.&lt;/p&gt;

&lt;p&gt;Our data science team is responsible for designing and building our occupation ontology, breaking down the concept of "work" into roles, tasks, skills and a myriad of dynamic analytical attributes to describe all of these at the most detailed level. Our analytics are derived from a growing suite of proprietary machine learning models.&lt;/p&gt;

&lt;p&gt;Our platform ties it all together to help people leaders, strategy leaders and technology leaders make better decisions about their workforce, with a level of detail and speed to insight that is impossible without Faethm.&lt;/p&gt;

&lt;h2&gt;We use Python and Jupyter notebooks for data science&lt;/h2&gt;

&lt;p&gt;Our data scientists primarily use Python, Jupyter notebooks and the ever-growing range of Python packages for data transformation, analysis and modelling that you would expect to see in any data scientist's toolkit (and perhaps some you wouldn't).&lt;/p&gt;

&lt;p&gt;Luckily, running an interactive Jupyter workbench in the cloud is pretty easy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5j77othm8pvenxwkyllb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5j77othm8pvenxwkyllb.jpg" alt="SageMaker architecture components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS SageMaker provides the notebook platform for our teams to configure managed compute instances to their requirements and turn them on and off on-demand. Self-service access to variably powerful modelling environments requires managing a few IAM Role policies and some clicks in the AWS Console.&lt;/p&gt;

&lt;p&gt;This means a data scientist can SSO into the AWS Console and get started on their next project with access to whatever S3 data their access profile permits. Results are written back to S3, and notebooks are pushed to the appropriate Git repository.&lt;/p&gt;

&lt;p&gt;How do we turn this into a product so that our data scientists never have to think about running an operational workflow?&lt;/p&gt;

&lt;h2&gt;Engineering data science without re-engineering notebooks&lt;/h2&gt;

&lt;p&gt;One of the core design goals of our approach is to scale without re-engineering data science workflows wherever possible.&lt;/p&gt;

&lt;p&gt;Due to the complexity of our models, it's critical that data scientists have full transparency into how their models are functioning in production. So we don't re-write Jupyter notebooks. We don't even extract their code into executable Python scripts. We just execute them, exactly as written, with no changes required.&lt;/p&gt;

&lt;p&gt;We do this with Papermill.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpispi9a3ged8i4ihral6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fpispi9a3ged8i4ihral6.jpg" alt="Papermill workflow"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Papermill is a Python package for parameterising and executing Jupyter notebooks. As long as a notebook is written with parameters for dynamic functionality (usually with sensible defaults in the first notebook cell), Papermill can execute the notebook (&lt;code&gt;$NOTEBOOK&lt;/code&gt;) on the command line with a single command. Any parameters (&lt;code&gt;-r&lt;/code&gt; raw or &lt;code&gt;-p&lt;/code&gt; normal) can be overridden at runtime and Papermill does this by injecting a new notebook cell assigning the new parameter values.&lt;/p&gt;
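&lt;p&gt;The difference between &lt;code&gt;-r&lt;/code&gt; and &lt;code&gt;-p&lt;/code&gt; can be illustrated with a small sketch of the translation (an approximation of the behaviour, not Papermill’s own code):&lt;/p&gt;

```python
import ast

def translate_parameter(value, raw=False):
    # -r passes the value through untouched as a string; -p parses it into
    # a Python value where possible. A sketch of the behaviour described
    # above, not Papermill's implementation.
    if raw:
        return value
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return value

print(translate_parameter("True", raw=True))  # the string 'True'
print(translate_parameter("True"))            # the boolean True
print(translate_parameter("3.5"))             # the float 3.5
```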

&lt;p&gt;A simple Papermill command line operation looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;papermill
papermill &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NOTEBOOK&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$OUTPUT_NOTEBOOK&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-r&lt;/span&gt; A_RAW_PARAMETER &lt;span class="s2"&gt;"this is always a Python string"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-p&lt;/span&gt; A_PARAMETER &lt;span class="s2"&gt;"True"&lt;/span&gt; &lt;span class="c"&gt;# this is converted to a Python data type&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since Papermill executes the notebook and not just the code, the cell outputs including print statements, error messages, tables and plots are all rendered in the resulting output notebook (&lt;code&gt;$OUTPUT_NOTEBOOK&lt;/code&gt;). This means that the notebook itself becomes a rich log of exactly what was executed, and serves as a friendly diagnostic tool for data scientists to assess model performance and detect any process anomalies.&lt;/p&gt;

&lt;h2&gt;Reproducible notebook workflows&lt;/h2&gt;

&lt;p&gt;Papermill is great for executing our notebooks, but we need notebooks to be executed outside of the SageMaker instance they were created in. We can achieve this by capturing a few extra artifacts alongside our notebooks.&lt;/p&gt;

&lt;p&gt;Firstly, we store a list of package dependencies in a project's Git repository. This is generated easily in the Jupyter terminal with &lt;code&gt;pip freeze &amp;gt; requirements.txt&lt;/code&gt;, but it is often best hand-crafted to keep dependencies to the essentials.&lt;/p&gt;

&lt;p&gt;Any other dependencies are also stored in the repository. These can include scripts, pickled objects (such as trained models) and common metadata.&lt;/p&gt;

&lt;p&gt;We also capture some metadata in a YAML configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;Notebooks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-notebook.ipynb&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;my-second-notebook.ipynb&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file lists the notebooks in execution order, so a workflow can be composed of multiple independent notebooks to maintain readability.&lt;/p&gt;
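&lt;p&gt;The notebook list from the configuration can then be joined into the comma-separated value that later reaches the container as the &lt;code&gt;NOTEBOOKS&lt;/code&gt; build argument (a sketch, assuming the YAML has already been parsed, e.g. with PyYAML):&lt;/p&gt;

```python
# The parsed configuration; in practice this dict comes from the project's
# YAML file.
config = {"Notebooks": ["my-notebook.ipynb", "my-second-notebook.ipynb"]}

# Joined into the comma-separated form consumed at container build-time.
notebooks_arg = ",".join(config["Notebooks"])
print(notebooks_arg)  # my-notebook.ipynb,my-second-notebook.ipynb
```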

&lt;p&gt;Finally, a simple &lt;code&gt;buildspec.yml&lt;/code&gt; configuration file initiates the build process. This is the standard mechanism for AWS CodeBuild, which we use as our build pipeline.&lt;/p&gt;

&lt;p&gt;Changes to notebooks, dependencies and other repository items are managed through a combination of production and non-production Git branches, just like any other software project. Pull requests provide a process for promoting code between staging and production environments, facilitating manual code review and automating a series of merge checks to create confidence in code changes.&lt;/p&gt;

&lt;h2&gt;Notebook containers built for production deployment&lt;/h2&gt;

&lt;p&gt;To keep our data science team focused on creating data science workflows and not build pipelines, the container build and deployment process is abstracted from individual Jupyter projects.&lt;/p&gt;

&lt;p&gt;Webhooks are configured on each Git repository. Pushing to a branch in a notebook project triggers the build process. Staging and production branches are protected from bad commits by requiring a Pull Request for all changes.&lt;/p&gt;

&lt;p&gt;A standard &lt;code&gt;Dockerfile&lt;/code&gt; consumes the artifacts stored in the project repository at build-time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM python:3.7

RUN pip &lt;span class="nb"&gt;install &lt;/span&gt;papermill

&lt;span class="c"&gt;# package dependencies&lt;/span&gt;
COPY requirements.txt .
RUN pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# notebook execution order from YAML config&lt;/span&gt;
ARG NOTEBOOKS
ENV &lt;span class="nv"&gt;NOTEBOOKS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NOTEBOOKS&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# prepare entrypoint script&lt;/span&gt;
COPY entrypoint.sh .

&lt;span class="c"&gt;# catch-all for other dependencies in the repository&lt;/span&gt;
COPY &lt;span class="nb"&gt;.&lt;/span&gt; .

&lt;span class="c"&gt;# these parameters will be injected at run-time&lt;/span&gt;
ENV &lt;span class="nv"&gt;PARAM1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
ENV &lt;span class="nv"&gt;PARAM2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;

CMD ./entrypoint.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entrypoint is a bash script that iterates over the notebooks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;NOTEBOOK &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NOTEBOOKS&lt;/span&gt;&lt;span class="p"&gt;//,/ &lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;do
    &lt;/span&gt;papermill &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NOTEBOOK&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"s3://notebook-output-bucket/&lt;/span&gt;&lt;span class="nv"&gt;$NOTEBOOK&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-r&lt;/span&gt; PARAM1 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PARAM1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;-p&lt;/span&gt; PARAM2 &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PARAM2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This &lt;code&gt;entrypoint.sh&lt;/code&gt; script executes each notebook at run-time in the order given by the configuration file, and stores each resulting output notebook in S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Furtby9k07opsvna55wj6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Furtby9k07opsvna55wj6.jpg" alt="Repository build components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS CodeBuild determines the target environment from the repository branch, builds the container and pushes it to AWS ECR so it is available to be deployed into our container infrastructure.&lt;/p&gt;

&lt;h2&gt;Serverless task execution for Jupyter notebooks&lt;/h2&gt;

&lt;p&gt;With Faethm's customers spanning many regions across the world, their data is subject to the data regulations of each customer's local jurisdiction. Our data science workflows need to execute in whichever regions our customers specify for their data to be stored. With our approach, data never has to transfer between regions for processing.&lt;/p&gt;

&lt;p&gt;We operate cloud environments in a growing number of customer regions across the world, throughout the Asia Pacific, US and Europe. As Faethm continues to scale, we need to be able to support new regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7o6klwueohq0ox1h6fd8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7o6klwueohq0ox1h6fd8.jpg" alt="Multi-region Fargate components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To run our Jupyter notebook containers, each supported region has a VPC with an ECS Fargate cluster configured to run notebook tasks on-demand.&lt;/p&gt;

&lt;p&gt;Each Jupyter project is associated with an ECS task definition, whose template is configured by the build pipeline and deployed through CloudFormation.&lt;/p&gt;
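&lt;p&gt;The &lt;code&gt;PARAM1&lt;/code&gt;/&lt;code&gt;PARAM2&lt;/code&gt; placeholders from the Dockerfile are filled in as container overrides when a task is started. A sketch of the parameters for such a call (cluster, task and parameter values are invented; in practice this dict would be passed to the ECS &lt;code&gt;run_task&lt;/code&gt; API, e.g. via boto3):&lt;/p&gt;

```python
# Parameters for an ECS run_task call (names and values are invented).
run_task_kwargs = {
    "cluster": "notebook-cluster",
    "taskDefinition": "my-notebook-project",
    "launchType": "FARGATE",
    "overrides": {
        "containerOverrides": [
            {
                "name": "my-notebook-project",
                # These land in the container as the PARAM1/PARAM2
                # environment variables read by the notebooks.
                "environment": [
                    {"name": "PARAM1", "value": "customer-123"},
                    {"name": "PARAM2", "value": "True"},
                ],
            }
        ]
    },
}

env = run_task_kwargs["overrides"]["containerOverrides"][0]["environment"]
print([e["name"] for e in env])  # ['PARAM1', 'PARAM2']
```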

&lt;h2&gt;Event-driven Jupyter notebook tasks&lt;/h2&gt;

&lt;p&gt;To simplify task execution, each notebook repository has a single event trigger. Typically, a notebook task will run in response to a data object landing in S3. An example is a CSV being uploaded from a user portal, upon which our analysis takes place.&lt;/p&gt;

&lt;p&gt;In the project repository, the YAML configuration file captures the S3 bucket and key prefix that trigger the ECS task definition when a matching CloudTrail log is delivered to EventBridge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;span class="na"&gt;S3TriggerBucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;notebook-trigger-bucket&lt;/span&gt;
&lt;span class="na"&gt;S3TriggerKeyPrefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;path/to/data/&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9um1mgu5xusbfqldurip.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9um1mgu5xusbfqldurip.jpg" alt="EventBridge components"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The EventBridge rule template is configured by the build pipeline and deployed through CloudFormation, which completes the basic requirements for automating Jupyter notebook execution.&lt;/p&gt;
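&lt;p&gt;For reference, an EventBridge event pattern for such a rule might look roughly like this, built from the &lt;code&gt;S3TriggerBucket&lt;/code&gt; and &lt;code&gt;S3TriggerKeyPrefix&lt;/code&gt; values above (a sketch; the exact pattern fields are an assumption):&lt;/p&gt;

```python
import json

bucket = "notebook-trigger-bucket"   # S3TriggerBucket from the config
prefix = "path/to/data/"             # S3TriggerKeyPrefix from the config

# Matches CloudTrail PutObject events for the configured bucket and prefix
# (field names are an assumption based on CloudTrail's event shape).
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": [bucket],
            "key": [{"prefix": prefix}],
        },
    },
}
print(json.dumps(event_pattern, indent=2))
```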

&lt;h2&gt;Putting it all together&lt;/h2&gt;

&lt;p&gt;In this article we've looked at a few of the challenges of scaling and automating data science workflows in a multi-region environment, how to address them within the Jupyter ecosystem, and how we implement solutions that take advantage of various AWS serverless offerings.&lt;/p&gt;

&lt;p&gt;When you put all of these together, the result is our &lt;em&gt;end-to-end serverless git-ops containerised event-driven Jupyter-notebooks-as-code data science workflow execution pipeline&lt;/em&gt; architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnvj0lq2cvd0c5apytb3b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnvj0lq2cvd0c5apytb3b.jpg" alt="Notebook automation architecture"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We just call it &lt;code&gt;notebook-pipeline&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You’ve been reading a post from the Faethm AI engineering blog. We’re hiring, too! If you share our passion for the future of work and want to pioneer world-leading data science and engineering projects, we’d love to hear from you. See our current openings: &lt;a href="https://faethm.ai/careers" rel="noopener noreferrer"&gt;https://faethm.ai/careers&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>datascience</category>
      <category>python</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
