<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Brian Burton</title>
    <description>The latest articles on DEV Community by Brian Burton (@brianburton).</description>
    <link>https://dev.to/brianburton</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F386095%2Fed321df1-0a1d-445b-bfe2-e76ee870e956.jpeg</url>
      <title>DEV Community: Brian Burton</title>
      <link>https://dev.to/brianburton</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/brianburton"/>
    <language>en</language>
    <item>
      <title>Headless Ecommerce: Real World Test</title>
      <dc:creator>Brian Burton</dc:creator>
      <pubDate>Tue, 12 Aug 2025 13:07:27 +0000</pubDate>
      <link>https://dev.to/brianburton/headless-ecommerce-theres-only-one-316m</link>
      <guid>https://dev.to/brianburton/headless-ecommerce-theres-only-one-316m</guid>
      <description>&lt;p&gt;We recently completed a project that had complex requirements that wouldn't allow us to use any off-the-shelf hosted ecommerce platforms.  We needed to be able to modify everything from the authentication system to simplifying the checkout process and nothing we found checked all of the boxes.&lt;/p&gt;

&lt;p&gt;My responsibility was choosing the OSS platform to build on, so I dug in and built rough versions of the features we needed in three popular ecommerce platforms: Saleor, Sylius and Medusa JS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Saleor (Python)
&lt;/h2&gt;

&lt;p&gt;I've been a Python developer for about 15 years, and this platform had the steepest learning curve by far. The documentation is sparse, forcing you to dive into the codebase to fully understand the Saleor way of doing things.  It took over a week to get a prototype even semi-functional before we decided to abandon the platform due to its inflexibility.&lt;/p&gt;

&lt;p&gt;Saleor's primary benefits are that it has a &lt;em&gt;very&lt;/em&gt; active development schedule, and out of the box it's designed to be scalable, as it's essentially a monolithic GraphQL service with a job scheduler running beside it.  Getting it running in a serverless environment was fairly straightforward once you understand the architecture (which I gleaned from a GitHub issue, not the documentation) and all of its requirements.&lt;/p&gt;

&lt;p&gt;That's about all of the pros. Saleor is moving away from its plugin architecture, where you could modify how the Core functions, in favor of a Shopify-style app architecture, where you create completely isolated apps that act as intermediaries to provide the customizations you need. The App SDK is very limited, keeping you at arm's length from the core functionality and preventing the level of customizability you'd expect from an open source project.  Here are two examples of its inflexibility and complexity:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Imagine you want to create a simple custom mutation. To do that you're required to run a completely separate Apollo server and use Apollo Federation to combine the multiple services into a single GraphQL API. This would make perfect sense for a hosted product, but not for an open source, self-hosted platform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As a more specific example of the absurdly high wall it has built around itself: as of 3.21.0 it's impossible to set a customer's password without the customer's intervention. You're expected to force them through the "Forgot my Password" flow, and if you need to set or change a customer's password directly, the only solution is to modify Core, at which point you've essentially forked the project.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Opinionated frameworks and platforms are good, but this one feels like its roadmap is designed exclusively around the hosted product without consideration for the self-hosted userbase. One developer described it best: "Open sourced Shopify."&lt;/p&gt;

&lt;h2&gt;
  
  
  Sylius (PHP/Symfony)
&lt;/h2&gt;

&lt;p&gt;In the PHP/Symfony world, Sylius appears to be the most mature "headless" ecommerce platform.  I put "headless" in quotes because while it's often included in lists of headless ecommerce platforms, in reality it comes with a fully functional storefront.&lt;/p&gt;

&lt;p&gt;We didn't get very far with this one for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The quickstart guide recommends using the Sylius-Standard project to create your development environment.  That process is completely broken: it took several restarts to get the database image working, only to discover that the configured PHP version is incompatible and that package.json is missing a library.  We ended up building the dev environment from scratch, wasting two whole days.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the dev environment was running, it ran like a dog on a box with 64GB of RAM. Every single Ctrl+S triggered a 20+ second rebuild by a separate Docker container that exists only to rebuild the assets.  Pages in the admin area took nearly 10 seconds to load. I have no idea why it ran so slowly, and frankly I didn't want to lose the time finding out.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I love Symfony and the Sylius framework is written in an understandable and logical manner, but it is far too unpolished and difficult to develop on to consider for this project.  This platform was abandoned pretty quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Medusa JS (Typescript)
&lt;/h2&gt;

&lt;p&gt;Compared to the others, Medusa was a breath of fresh air. Its clean, simple modular architecture made adding custom features a breeze without spending days digging through documentation and GitHub issues.&lt;/p&gt;

&lt;p&gt;You want to create a custom API route?  Just drop it into the /api directory.  &lt;em&gt;Boom&lt;/em&gt;, done.&lt;/p&gt;

&lt;p&gt;You want to change some core functionality?  Create a module.  &lt;em&gt;Boom&lt;/em&gt;, done.&lt;/p&gt;

&lt;p&gt;Want to distribute your module?  Make it a plugin.  &lt;em&gt;Boom&lt;/em&gt;, done.&lt;/p&gt;

&lt;p&gt;We created a fully functional prototype with all of the functionality we required in a day.  It's lightweight, has a logical code structure and didn't appear to be limited by artificial walls. &lt;em&gt;This&lt;/em&gt; is how you build an ecommerce framework.&lt;/p&gt;
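&lt;p&gt;To make the "drop it into the /api directory" point concrete, here's a minimal sketch of what a custom route file can look like. The verb-named export mirrors Medusa's file-based routing convention, but the &lt;code&gt;/store/ping&lt;/code&gt; path is an invented example and the handler is left untyped so the sketch stands alone:&lt;/p&gt;

```typescript
// Sketch of a hypothetical src/api/store/ping/route.ts file.
// Medusa-style route files export functions named after HTTP verbs;
// req and res are untyped here so the sketch is self-contained.
export async function GET(req: any, res: any) {
  // A real handler would resolve services from the request scope;
  // this one just echoes a payload to show the shape of a route.
  res.json({ status: "ok", path: "/store/ping" });
}
```

&lt;p&gt;The framework maps the file's location to the URL and the export name to the HTTP method, which is why adding an endpoint is a single-file change.&lt;/p&gt;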

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I dislike publicly criticizing open source projects: it's difficult to fully understand the choices behind their roadmaps, and I don't want to discount the countless hours that have been invested to get these platforms to their current states. I truly appreciate the work that has gone into creating them and hope this is read as honest feedback.&lt;/p&gt;

&lt;p&gt;However, coming from a developer/PM perspective, where I was required to review the projects and choose the one that was easiest to develop on, most extensible and most mature, the race wasn't even close. Medusa will most definitely be at the top of the list for our next headless ecommerce project.&lt;/p&gt;

</description>
      <category>saleor</category>
      <category>medusa</category>
      <category>sylius</category>
      <category>ecommerce</category>
    </item>
    <item>
      <title>Saleor Core Hacking: Setting Up Your Development Environment</title>
      <dc:creator>Brian Burton</dc:creator>
      <pubDate>Thu, 03 Apr 2025 08:29:55 +0000</pubDate>
      <link>https://dev.to/brianburton/saleor-core-hacking-setting-up-your-development-environment-41io</link>
      <guid>https://dev.to/brianburton/saleor-core-hacking-setting-up-your-development-environment-41io</guid>
      <description>&lt;p&gt;Saleor is a headless e-commerce platform that's biggest deficit is its lack of documentation.&lt;/p&gt;

&lt;p&gt;If you want to modify Saleor Core directly to create PRs, or in my case, modify its core functionality directly, setting up the development environment is &lt;em&gt;not&lt;/em&gt; straightforward.  I'm writing this out for both my future reference and to help anyone else in the same situation.&lt;/p&gt;

&lt;p&gt;My environment is Windows, PyCharm and Docker Desktop.&lt;/p&gt;

&lt;p&gt;Note that if you're only interested in Saleor App development, &lt;strong&gt;these instructions are not for you.&lt;/strong&gt;  This is only for anyone needing to change core functionality or to submit PRs.&lt;/p&gt;

&lt;p&gt;Step 1: Clone the repository&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/saleor/saleor.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Open the project in PyCharm and it should notify you of the presence of a devcontainer.  Allow it to set up the devcontainer.  If it doesn't prompt you, open the &lt;code&gt;.devcontainer/devcontainer.json&lt;/code&gt; file and click on the small box to create it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdr2159k4tx67dcckkex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdr2159k4tx67dcckkex.png" alt="Image description" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point the supporting containers will be running, but Core won't respond to any requests.&lt;/p&gt;

&lt;p&gt;Step 3: Open the Services panel (Alt+8) and drill down to the saleor container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r61gc563whxkep0fqyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3r61gc563whxkep0fqyr.png" alt="Image description" width="294" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Click on the &lt;code&gt;Terminal&lt;/code&gt; button on the right side of the Services panel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4auu4rv7yeoq692xdrnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4auu4rv7yeoq692xdrnl.png" alt="Image description" width="447" height="109"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: Run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install any missing dependencies
$ poetry install

# Copy the environment variables file
$ cp .env.example .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the .env file and append these two lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALLOWED_GRAPHQL_ORIGINS=*
ALLOWED_HOSTS=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Continue executing the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update the database structure
$ python manage.py migrate

# Create a super user
$ python manage.py createsuperuser
Email youremail@example.com
Password: 
Password (again): 
Superuser created successfully.

# Populate the DB with example data, if you need to
$ python manage.py populatedb

# Run the server
$ uvicorn saleor.asgi:application --reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then access your dashboard at &lt;code&gt;http://localhost:9000&lt;/code&gt; and you should be able to log in.&lt;/p&gt;

</description>
      <category>saleor</category>
      <category>python</category>
      <category>howto</category>
    </item>
    <item>
      <title>Cloud Build, Docker and Artifact Registry: CI/CD Pipelines with Private Packages</title>
      <dc:creator>Brian Burton</dc:creator>
      <pubDate>Sat, 08 Jan 2022 17:01:13 +0000</pubDate>
      <link>https://dev.to/brianburton/cloud-build-docker-and-artifact-registry-cicd-pipelines-with-private-packages-5ci2</link>
      <guid>https://dev.to/brianburton/cloud-build-docker-and-artifact-registry-cicd-pipelines-with-private-packages-5ci2</guid>
      <description>&lt;p&gt;If you use Artifact Registry to store private Java/Node/Python packages and Cloud Build to compile your code before deployment, you'll quickly discover that the Docker container it creates can't access the Artifact Registry by default.&lt;/p&gt;

&lt;p&gt;This solution was discovered after a day of trial and error so I hope it saves you from the same forehead-to-desk frustration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step by Step
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Deploy your private packages to the Artifact Registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a Service Account that only has access to read from the Artifact Registry. In my case I created &lt;code&gt;artifact-registry-reader@&amp;lt;PROJECT&amp;gt;.iam.gserviceaccount.com&lt;/code&gt; and gave it access to the Artifact Registry repository as an "Artifact Registry Reader."  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edit the newly created &lt;code&gt;artifact-registry-reader@&amp;lt;PROJECT&amp;gt;.iam.gserviceaccount.com&lt;/code&gt; Service Account and under permissions add your Cloud Builder Service Account (&lt;code&gt;&amp;lt;PROJECT_ID&amp;gt;@cloudbuild.gserviceaccount.com&lt;/code&gt;) as a Principal and grant it the "Service Account Token Creator" role. [Note, this works even if you use Cloud Build in a separate project.]&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, your cloudbuild.yaml file should look something like this:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  # Step 1: Generate an Access Token and save it
  #
  # Here we call `gcloud auth print-access-token` to impersonate the service account 
  # we created above and to output a short-lived access token to the default volume 
  # `/workspace/access_token`.  This is accessible in subsequent steps.
  #
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - '-c'
      - &amp;gt;
        gcloud auth print-access-token --impersonate-service-account
        artifact-registry-reader@&amp;lt;PROJECT&amp;gt;.iam.gserviceaccount.com &amp;gt;
        /workspace/access_token
    entrypoint: sh
  # Step 2: Build our Docker container
  #
  # We build the Docker container passing the access token we generated in Step 1 as 
  # the `--build-arg` `TOKEN`.  It's then accessible within the Dockerfile using
  # `ARG TOKEN`
  #
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - &amp;gt;
        docker build -t us-docker.pkg.dev/&amp;lt;PROJECT&amp;gt;/services/frontend:latest
        --build-arg TOKEN=$(cat /workspace/access_token) -f
        ./docker/prod/Dockerfile . &amp;amp;&amp;amp;

        docker push us-docker.pkg.dev/&amp;lt;PROJECT&amp;gt;/services/frontend
    entrypoint: sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 5: This next step is specific to private npm packages in the Artifact Registry. My app has a partial .npmrc file with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@&amp;lt;NAMESPACE&amp;gt;:registry=https://us-npm.pkg.dev/&amp;lt;PROJECT&amp;gt;/npm/
//us-npm.pkg.dev/&amp;lt;PROJECT&amp;gt;/npm/:username=oauth2accesstoken
//us-npm.pkg.dev/&amp;lt;PROJECT&amp;gt;/npm/:email=artifact-registry-reader@&amp;lt;PROJECT&amp;gt;.iam.gserviceaccount.com
//us-npm.pkg.dev/&amp;lt;PROJECT&amp;gt;/npm/:always-auth=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;[Note: All that's missing is the &lt;code&gt;:_authToken&lt;/code&gt; line]&lt;/p&gt;

&lt;p&gt;Step 6: Finally, my Dockerfile uses the minted token to update the .npmrc file, giving the build access to pull private npm packages from the Artifact Registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ARG NODE_IMAGE=node:17.2-alpine

FROM ${NODE_IMAGE} as base

ENV APP_PORT=8080

ENV WORKDIR=/usr/src/app
ENV NODE_ENV=production

FROM base AS builder

# Create our WORKDIR
RUN mkdir -p ${WORKDIR}

# Set the current working directory
WORKDIR ${WORKDIR}

# Copy the files we need
COPY --chown=node:node package.json ./
COPY --chown=node:node ts*.json ./
COPY --chown=node:node .npmrc ./
COPY --chown=node:node src ./src

#######################
# MAGIC HAPPENS HERE
# Append our access token to the .npmrc file and the container will now be 
# authorized to download packages from the Artifact Registry
# 
# IMPORTANT! Declare the TOKEN build arg so that it's accessible
#######################

ARG TOKEN
RUN echo "//us-npm.pkg.dev/&amp;lt;PROJECT&amp;gt;/npm/:_authToken=\"$TOKEN\"" &amp;gt;&amp;gt; .npmrc

RUN npm install

RUN npm run build

EXPOSE ${APP_PORT}/tcp

# The working directory was already set by WORKDIR above;
# an exec-form CMD ["cd", ...] would not work and its args would be
# appended to the ENTRYPOINT, so it has been removed.
ENTRYPOINT ["npm", "run", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is npm-specific, but you can transfer these concepts to any other GCP resource to give your Docker build containers secure access, via short-lived tokens, to any resource in your project(s).&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>architecture</category>
      <category>docker</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Fully Isolating Data in a Multi-Tenant SaaS on Google Cloud using a Token Vending Machine</title>
      <dc:creator>Brian Burton</dc:creator>
      <pubDate>Fri, 26 Nov 2021 09:22:45 +0000</pubDate>
      <link>https://dev.to/brianburton/fully-isolating-resources-in-a-multi-tenant-saas-on-google-cloud-using-a-token-vending-machine-f25</link>
      <guid>https://dev.to/brianburton/fully-isolating-resources-in-a-multi-tenant-saas-on-google-cloud-using-a-token-vending-machine-f25</guid>
      <description>&lt;p&gt;If you're building a multi-tenant SaaS, securely isolating customer data not only from other customers but from your own developers is a conversation that you'll have sooner or later.  At a former startup, our customers' data was highly confidential and we went to extreme efforts to protect it both from inadvertent exposure caused by software bugs and internal access by employees unless absolutely necessary.&lt;/p&gt;

&lt;p&gt;We strove for a hybrid pool/silo architecture, and my favorite security strategy for achieving it is one that AWS promotes, known as the &lt;a href="https://aws.amazon.com/blogs/apn/isolating-saas-tenants-with-dynamically-generated-iam-policies/" rel="noopener noreferrer"&gt;Token Vending Machine&lt;/a&gt;, which leverages IAM to isolate customer data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs9wc9xhaq72celc4qq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs9wc9xhaq72celc4qq9.png" alt="Example of the Token Vending Machine" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Essentially an authorized user &lt;em&gt;(1)&lt;/em&gt; makes an API request through the API Gateway &lt;em&gt;(2)&lt;/em&gt;, which calls a custom authorizer to validate the credentials and generate a dynamic IAM policy &lt;em&gt;(3)&lt;/em&gt;. The dynamic IAM policy is passed to the handler function &lt;em&gt;(4)&lt;/em&gt; that locks all further processes into a specific set of resources &lt;em&gt;(5)&lt;/em&gt;.  The elegance of this solution is that it removes the burden of handling tenant security from the developers' hands and moves it down to the platform level. &lt;strong&gt;The threat of inadvertently exposing tenant data even at the hands of a malicious developer is almost completely mitigated.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Google Cloud doesn't offer the same functionality out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Endpoints and API Gateway don't support custom authorizers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dynamically generated IAM policies aren't supported.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The proposed solutions you'll find on StackOverflow, Reddit and even GCP's own whitepapers all basically say the same thing: "Tenant security should be handled at the app level."&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Yuck!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But after days of trial and error, I found a solution that gave us the highly secure tenant isolation we needed on Google Cloud!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvzsmc20goa0htpojaks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvzsmc20goa0htpojaks.png" alt="Strategy to fully isolate tenants in an multi-tenant environment on Google Cloud using a Token Vending Machine" width="800" height="739"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As before, the user in Tenant A &lt;em&gt;(1)&lt;/em&gt; makes an authorized request to list the users in their tenant &lt;em&gt;(2)&lt;/em&gt;. The API Gateway passes that to the &lt;code&gt;UsersEndpoint&lt;/code&gt; service &lt;em&gt;(3)&lt;/em&gt;, which has no inherent permission to access any database, so it passes the user's auth token to the &lt;code&gt;TokenVendingMachine&lt;/code&gt; &lt;em&gt;(4)&lt;/em&gt;. The &lt;code&gt;TokenVendingMachine&lt;/code&gt; validates the token and, based on the custom claims, retrieves the tenant's Service Account key file from our secure bucket &lt;em&gt;(5)&lt;/em&gt; and returns it to the &lt;code&gt;UsersEndpoint&lt;/code&gt; service.  Finally, we call our database using the key file &lt;em&gt;(6)&lt;/em&gt; and return the results to the user.&lt;/p&gt;
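&lt;p&gt;Steps 4 through 6 can be sketched as a small class. Everything below is illustrative: the &lt;code&gt;tn&lt;/code&gt; claim, the &lt;code&gt;keys/&lt;/code&gt; object layout and the injected functions are stand-ins for the real Identity Platform verification and Cloud Storage clients, not actual GCP APIs:&lt;/p&gt;

```typescript
// Illustrative sketch of the TokenVendingMachine exchange (steps 4-6).
// verifyToken and readObject are injected stand-ins for the real
// Identity Platform verification and GCS bucket-read calls.
class TokenVendingMachine {
  private verifyToken: any;
  private readObject: any;

  constructor(verifyToken: any, readObject: any) {
    this.verifyToken = verifyToken;
    this.readObject = readObject;
  }

  // Validate the caller's auth token, then return that tenant's key file.
  async get(authToken: string) {
    const claims = await this.verifyToken(authToken); // throws if invalid
    // Assumes key files live at keys/{tenantId}.json in the secure bucket.
    return this.readObject("keys/" + claims.tn + ".json");
  }
}
```

&lt;p&gt;Because the endpoint service holds no database permissions of its own, the only credentials it ever sees are the ones vended for the calling tenant.&lt;/p&gt;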

&lt;h4&gt;
  
  
  Step 1: Onboarding
&lt;/h4&gt;

&lt;p&gt;When a new tenant is created, a tenant-specific Service Account is asynchronously created and the JSON key file is stored in a highly-secured bucket containing tenant key files.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: Authentication
&lt;/h4&gt;

&lt;p&gt;We use the Identity Platform with multi-tenancy enabled to authenticate users.  When a user logs in they exchange their initial token with a custom token containing custom claims such as the user's tenant and role, and that custom token is sent with every subsequent request.&lt;/p&gt;

&lt;p&gt;Those custom claims look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  tn: 'tn-xyz987', // Tenant ID
  rl: 'editor', // Role
  rg: 1, // Region
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The claims identify the user's tenant, their role and the region that their data resides in.&lt;/p&gt;
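&lt;p&gt;As a sketch of how those claims might be turned into tenant-scoped resource names (the region table and the naming scheme below are invented for illustration, not the actual production mapping):&lt;/p&gt;

```typescript
// Hypothetical mapping from the custom claims above to tenant resources.
const REGION_HOSTS: any = { 1: "us-central1", 2: "europe-west1" }; // invented table

function resolveTenantResources(claims: any) {
  const region = REGION_HOSTS[claims.rg];
  if (!region) throw new Error("unknown region: " + claims.rg);
  return {
    keyFile: "keys/" + claims.tn + ".json", // per-tenant SA key in the secure bucket
    database: claims.tn + "-db",            // like-named, tenant-scoped database
    region: region,
  };
}
```

&lt;p&gt;The point of a pure function like this is that nothing downstream ever constructs a resource name by hand; every lookup is derived from the verified claims.&lt;/p&gt;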

&lt;h4&gt;
  
  
  Step 3: API Requests
&lt;/h4&gt;

&lt;p&gt;When a user's authenticated request hits the API Gateway, it's sent to a Cloud Run service that runs our API.  The database and storage buckets are abstracted behind like-named services and require a valid JSON key file in order to access any resource.&lt;/p&gt;

&lt;p&gt;So if a user requests a list of users within their tenant, the API's code can be as simple as this pseudocode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.run('/users', (res: Request, res: Response) =&amp;gt; {
  // Create a new instance of our TokenVendingMachine class
  const tvm = new TokenVendingMachine();

  // Request the key file using the user's auth token
  tvm.get(req.headers.authorization)
    .then(async (key: Credentials) =&amp;gt; {
      // The tenant's database name has been embedded in the key
      const db = new Database(key);

      const rows = await db.query("SELECT ...");

      res.json(rows);
    })
    .catch((e: any) =&amp;gt; res.status(403).end());
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Main Takeaway:&lt;/strong&gt; The developers can write code as if this is a single-tenant environment!&lt;/p&gt;

&lt;h2&gt;
  
  
  I know what you're going to say...
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why not issue short lived service account credentials?&lt;/strong&gt;&lt;br&gt;
Latency. Retrieving an existing key file from a GCS bucket is extremely fast compared to requesting new credentials on each request. Sure you could cache those short-lived credentials, but it creates a new set of problems of storing those securely if your goal is total isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why not use the Secrets Manager to store the key files?&lt;/strong&gt;&lt;br&gt;
In a word, cost.  At $0.03 per 10,000 operations the costs will add up fast for an API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isn't a storage bucket full of key files dangerous?&lt;/strong&gt;&lt;br&gt;
Not if properly secured.  The &lt;code&gt;TokenVendingMachine&lt;/code&gt; service has read-only access to all objects in that bucket, and another service that generates the key files during onboarding has write access. There's also a backend service that regularly cycles the keys so that they don't live on in perpetuity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;What's important is that by separating tenant security from the app level, we achieve reliable, secure storage and access of our customers' data while removing the responsibility of tenant security from our developers' hands.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>security</category>
      <category>webdev</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Cross-Domain Firebase Authentication: A Simple Approach</title>
      <dc:creator>Brian Burton</dc:creator>
      <pubDate>Mon, 15 Feb 2021 08:24:26 +0000</pubDate>
      <link>https://dev.to/brianburton/cross-domain-firebase-authentication-a-simple-approach-337k</link>
      <guid>https://dev.to/brianburton/cross-domain-firebase-authentication-a-simple-approach-337k</guid>
      <description>&lt;p&gt;If you're reading this you've probably just discovered that Firebase Auth only authenticates for a single domain, yet you need to share that authentication across domains and subdomains and not sure where to start.&lt;/p&gt;

&lt;p&gt;I was in the same boat and discovered &lt;a href="https://dev.to/johncarroll/how-to-share-firebase-authentication-across-subdomains-1ka8"&gt;one developer's approach&lt;/a&gt; that helped to point me in the right direction.  Here is a simpler approach that's just as secure but with less plumbing and allows the user to authenticate through any of your subdomains.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does it Work?
&lt;/h2&gt;

&lt;h4&gt;
  
  
  I. Initial Authentication
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyqme1cvdl23zd5emudk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyqme1cvdl23zd5emudk.png" alt="Firebase Auth Step 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First the user authenticates using Firebase Auth to obtain an ID Token on the client side on any subdomain.  For this example the user authenticates through &lt;code&gt;app1.domain.com&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The user then sends that ID Token via POST to the Cloud Functions endpoint &lt;code&gt;/auth/login&lt;/code&gt; on the same subdomain, &lt;code&gt;app1.domain.com&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Firebase Hosted website rewrites &lt;code&gt;/auth&lt;/code&gt; to the &lt;code&gt;AuthFunction&lt;/code&gt; Cloud Function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;AuthFunction&lt;/code&gt; takes the ID token, verifies it using &lt;code&gt;verifyIdToken()&lt;/code&gt; and then calls &lt;code&gt;createSessionCookie()&lt;/code&gt; and assigns that value to the &lt;code&gt;__session&lt;/code&gt; cookie with the domain &lt;code&gt;.domain.com&lt;/code&gt; giving it access to all of the subdomains of the requesting domain.  The cookie should have &lt;code&gt;httpOnly=true&lt;/code&gt;, &lt;code&gt;secure=true&lt;/code&gt; and &lt;code&gt;sameSite=strict&lt;/code&gt; set.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Note: &lt;code&gt;__session&lt;/code&gt; is the only cookie name that Firebase Hosting allows you to use. Any other cookies get stripped. &lt;a href="https://firebase.google.com/docs/hosting/manage-cache#using_cookies" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
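&lt;p&gt;The cookie attributes from step 4 can be sketched as a small header builder. This is illustrative plumbing only: in a real handler the &lt;code&gt;value&lt;/code&gt; would come from &lt;code&gt;createSessionCookie()&lt;/code&gt;, and &lt;code&gt;.domain.com&lt;/code&gt; is this article's example domain:&lt;/p&gt;

```typescript
// Build the Set-Cookie header for the __session cookie described in step 4.
// maxAgeSeconds should match the expiresIn passed to createSessionCookie().
function buildSessionCookie(value: string, maxAgeSeconds: number, domain: string) {
  return [
    "__session=" + value, // the only cookie name Firebase Hosting preserves
    "Domain=" + domain,   // e.g. ".domain.com" to cover every subdomain
    "Max-Age=" + maxAgeSeconds,
    "Path=/",
    "HttpOnly",
    "Secure",
    "SameSite=Strict",
  ].join("; ");
}
```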

&lt;h4&gt;
  
  
  II. Cross-Domain Authentication
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi16h943bz01en3npw7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffi16h943bz01en3npw7y.png" alt="Firebase Auth Cross-Domain Authentication"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Assume the user is now authenticated on &lt;code&gt;app1.domain.com&lt;/code&gt; but we need them authenticated on &lt;code&gt;app2.domain.com&lt;/code&gt; as well.  First the client checks whether the user is authenticated with Firebase Auth client side, then it makes a &lt;code&gt;GET&lt;/code&gt; request to &lt;code&gt;https://app2.domain.com/auth/status&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Firebase Hosted website rewrites &lt;code&gt;/auth&lt;/code&gt; to the communal &lt;code&gt;AuthFunction&lt;/code&gt; Cloud Function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;/auth/status&lt;/code&gt; endpoint checks for the existence of the &lt;code&gt;__session&lt;/code&gt; cookie, then validates the session cookie value with &lt;code&gt;verifySessionCookie()&lt;/code&gt;.  If valid it calls &lt;code&gt;createCustomToken(&amp;lt;uid&amp;gt;)&lt;/code&gt; and returns the custom token to the client.  If not it returns a 401 error and clears the &lt;code&gt;__session&lt;/code&gt; cookie.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If a &lt;code&gt;200&lt;/code&gt; status code is returned, the client takes the custom token and calls Firebase's &lt;code&gt;signInWithCustomToken()&lt;/code&gt; passing the custom token. Now the user is authenticated on &lt;code&gt;app2.domain.com&lt;/code&gt;.&lt;br&gt;
If a &lt;code&gt;401&lt;/code&gt; status code is returned, the client logs out through Firebase Auth and is sent to the login page.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
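&lt;p&gt;Client side, steps 1 through 4 above boil down to a single function. The &lt;code&gt;httpGet&lt;/code&gt; and &lt;code&gt;auth&lt;/code&gt; parameters are injected stand-ins for &lt;code&gt;fetch()&lt;/code&gt; and the Firebase Auth client, so this is a sketch of the flow, not SDK code:&lt;/p&gt;

```typescript
// Sketch of the cross-domain sign-in flow (steps 1-4 above).
// httpGet and auth are stand-ins for fetch() and the Firebase Auth client.
async function restoreSession(httpGet: any, auth: any) {
  const resp = await httpGet("https://app2.domain.com/auth/status");
  if (resp.status === 200) {
    // The server minted a custom token from the valid __session cookie.
    await auth.signInWithCustomToken(resp.body.token);
    return true;
  }
  // 401 (or anything else): the session is gone; log out and show the login page.
  await auth.signOut();
  return false;
}
```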

&lt;h2&gt;
  
  
  Destroying a Session
&lt;/h2&gt;

&lt;p&gt;Finally when a user logs out, the &lt;code&gt;/auth/logout&lt;/code&gt; endpoint should be called to perform two actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clear the &lt;code&gt;__session&lt;/code&gt; cookie.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally call &lt;code&gt;revokeRefreshTokens(&amp;lt;uid&amp;gt;)&lt;/code&gt; to revoke all tokens for that user across all devices.  If instead you want to revoke authentication on a single device, you'll need to monitor for the presence of the &lt;code&gt;__session&lt;/code&gt; cookie and log the user out through Firebase Auth when it vanishes.  The latter requires the cookie's &lt;code&gt;httpOnly&lt;/code&gt; property to be set to &lt;code&gt;false&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;To be continued with code examples.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>firstpost</category>
      <category>firebase</category>
      <category>authentication</category>
    </item>
  </channel>
</rss>
