<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Scott P</title>
    <description>The latest articles on DEV Community by Scott P (@chronsyn).</description>
    <link>https://dev.to/chronsyn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F543412%2F8b376efe-38c7-4173-a92a-26ad5e40ff18.png</url>
      <title>DEV Community: Scott P</title>
      <link>https://dev.to/chronsyn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/chronsyn"/>
    <language>en</language>
    <item>
      <title>Scaling Postgres COPY</title>
      <dc:creator>Scott P</dc:creator>
      <pubDate>Sun, 26 Sep 2021 19:30:01 +0000</pubDate>
      <link>https://dev.to/chronsyn/scaling-postgres-copy-1oo6</link>
      <guid>https://dev.to/chronsyn/scaling-postgres-copy-1oo6</guid>
      <description>&lt;p&gt;This post will be fairly quick, and covers how you can significantly reduce the amount of time a Postgres &lt;code&gt;COPY&lt;/code&gt; command takes to complete.&lt;/p&gt;

&lt;p&gt;Not too long ago, I managed to finish building out the first revision of my UK transport data platform.&lt;/p&gt;

&lt;p&gt;During the time I was building it, I came across numerous challenges, but perhaps the most critical was: how do I improve the performance of the data import?&lt;/p&gt;

&lt;p&gt;Allow me to provide some context. I provide transport data for buses, trains and London underground via a GraphQL API. I source my data from numerous official providers, including National Rail, Network Rail, and the UK government open data sources. They provide me with huge datasets such as bus stop locations, transit schedules, etc. Every day, I retrieve this data and import it into a Postgres database.&lt;/p&gt;

&lt;p&gt;One of these datasets has over 38 million rows, so ensuring that I can import it as quickly as possible is important for maintaining the general uptime of the API.&lt;/p&gt;

&lt;p&gt;Here are a few quick tips which can significantly improve performance when importing datasets with the Postgres &lt;code&gt;COPY&lt;/code&gt; command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Delete indexes before importing&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Indexes provide a way for Postgres to quickly locate data in tables. You can run a database without them, but they significantly improve read performance on large datasets. The downside is that they drastically reduce insert performance, because every insert must also update the indexes. Removing them before you begin an import, and recreating them after the import is finished, means the database doesn't have to maintain the indexes while it's handling the import.&lt;/p&gt;
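&lt;p&gt;As a rough sketch, the pattern looks like this (the table, index, and file names here are made up purely for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Drop the index before the bulk import
DROP INDEX IF EXISTS idx_stops_atco_code;

-- Perform the bulk import
COPY stops FROM '/path/to/stops.csv' WITH (FORMAT csv, HEADER true);

-- Recreate the index once the import has finished
CREATE INDEX idx_stops_atco_code ON stops (atco_code);
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;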

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Set tables to unlogged before importing&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a table is set to &lt;code&gt;LOGGED&lt;/code&gt;, changes made to that table are written to the WAL (Write-Ahead Log). This helps ensure data consistency and reduces the chance that a table will be corrupted. Unfortunately, much like indexes, it also significantly reduces write performance. Setting a table to &lt;code&gt;UNLOGGED&lt;/code&gt; before you begin the import, and then setting it back to &lt;code&gt;LOGGED&lt;/code&gt; after the import completes, can improve performance significantly.&lt;/p&gt;
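&lt;p&gt;A minimal sketch of that workflow (again, the table name is hypothetical). One caveat worth knowing: an unlogged table is truncated after a crash, so only use this for data you can re-import:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Switch off WAL logging before the import
ALTER TABLE stops SET UNLOGGED;

COPY stops FROM '/path/to/stops.csv' WITH (FORMAT csv, HEADER true);

-- Restore crash safety once the import completes
ALTER TABLE stops SET LOGGED;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;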

&lt;p&gt;Let's cover some numbers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Importing the data with indexes in-place, and tables set to logged&lt;/strong&gt;: Approximately 27 minutes to import data to 6 tables&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Importing the data without indexes, and tables set to unlogged&lt;/strong&gt;: Approximately 10 minutes to import data to 6 tables&lt;/p&gt;

&lt;p&gt;Recreating the several indexes on the largest table after the data import has completed takes around 3 minutes. If I opted not to recreate the indexes, the entire import process would complete in roughly 7 minutes.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>supabase</category>
      <category>scaling</category>
      <category>performance</category>
    </item>
    <item>
      <title>Self-hosting with Supabase</title>
      <dc:creator>Scott P</dc:creator>
      <pubDate>Thu, 22 Jul 2021 01:15:56 +0000</pubDate>
      <link>https://dev.to/chronsyn/self-hosting-with-supabase-1aii</link>
      <guid>https://dev.to/chronsyn/self-hosting-with-supabase-1aii</guid>
      <description>&lt;p&gt;&lt;strong&gt;Please note that this article is a work in progress. Please leave any suggestions or questions in the comments.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Relevant materials:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.io/docs/guides/self-hosting#configuring-each-service" rel="noopener noreferrer"&gt;https://supabase.io/docs/guides/self-hosting#configuring-each-service&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  What is Supabase?
&lt;/h4&gt;

&lt;p&gt;Supabase is an open-source database solution built on Postgres. It includes all the standard features of Postgres, with some killer additions such as realtime streams and an auto-generated REST API, all backed by incredibly powerful client libraries.&lt;/p&gt;

&lt;h4&gt;
  
  
  Pros and cons of self-hosting
&lt;/h4&gt;

&lt;p&gt;There are many reasons you might want to self-host Supabase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;More database space - by using your own VPS, you could have a database that is many times bigger than what is (currently) available on Supabase's managed platform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Full infrastructure control - you're in control of everything so if you need to make a change, you have that option&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are also some reasons you may want to let Supabase host your database instead of self-hosting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It's already set up - you don't need a significant level of technical knowledge to set up a Supabase instance on their platform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Costs - They offer a free plan which may be enough for your needs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Who is this article for?
&lt;/h4&gt;

&lt;p&gt;This article was written for people who want to self-host Supabase but have perhaps gotten lost with the self-hosting guides out there. Please be aware that at this time, the self-hosted options don't include the dashboard. If that's a deal breaker for you, this guide isn't for you.&lt;/p&gt;

&lt;p&gt;To get the most out of this article, it is strongly advised that you have some basic experience with the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nginx - Configuring endpoints / proxies&lt;/li&gt;
&lt;li&gt;Docker - Deploying, configuring Dockerfiles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even if you're lacking those skills, I've done my best to explain the key parts, as well as the files you need to change.&lt;/p&gt;

&lt;p&gt;We will be using Portainer to manage our Docker containers. We'll also be running our service behind Cloudflare, using &lt;code&gt;db.example.com&lt;/code&gt; as our domain name. This will come up later in the Nginx configuration section.&lt;/p&gt;

&lt;h4&gt;
  
  
  Getting setup
&lt;/h4&gt;

&lt;p&gt;The first thing you should do is set up somewhere to host. I personally recommend DigitalOcean, as almost everything can be done from the web dashboard. For the purposes of this article, the word 'Droplet' simply refers to a VPS which will host our Supabase instance.&lt;/p&gt;

&lt;p&gt;Supabase provides one-click deploys for several platforms, but we won't be using those, as they don't contain the full Supabase experience.&lt;/p&gt;

&lt;p&gt;Instead, we'll be installing Docker. Several VPS providers already have 'ready to deploy' Docker instances. I'd recommend using one as this can save you a significant amount of time compared to setting Docker up manually.&lt;/p&gt;

&lt;p&gt;Assuming you have set up a Droplet / VPS with Docker running, the first thing to do is install Portainer. To do this, follow the guide at &lt;a href="https://documentation.portainer.io/v2.0/deploy/ceinstalldocker/" rel="noopener noreferrer"&gt;https://documentation.portainer.io/v2.0/deploy/ceinstalldocker/&lt;/a&gt;. As of writing, there are only 3 steps needed to get Portainer installed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SSH into your VPS&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker volume create portainer_data&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, you need to perform some basic configuration for Portainer. In your web browser, go to &lt;code&gt;http://&amp;lt;ip-of-your-vps&amp;gt;:9000&lt;/code&gt; and follow the on-screen instructions.&lt;/p&gt;

&lt;p&gt;Next, we need to setup Nginx. Fortunately, this is very easy with Portainer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Optional tip before continuing:&lt;/strong&gt; Set up two volumes, &lt;code&gt;nginx-config&lt;/code&gt; and &lt;code&gt;nginx-data&lt;/code&gt;, and then use those from the &lt;code&gt;volumes&lt;/code&gt; tab when setting up Nginx.&lt;/p&gt;

&lt;p&gt;From the left-menu of the Portainer dashboard, select 'App Templates' and from the list, select 'Nginx', and then follow the instructions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Deploying via Portainer
&lt;/h4&gt;

&lt;p&gt;When this guide was originally written, I suggested creating a fork of the supabase repo. I no longer recommend this, as a better, more streamlined approach is available.&lt;/p&gt;

&lt;p&gt;Going back to the Portainer web dashboard, select 'Stacks' from the left menu, and then click on 'Add Stack'.&lt;/p&gt;

&lt;p&gt;Give your stack a name, and select 'Git repository'. For the repository URL, enter the full URL to the supabase repo (&lt;a href="https://github.com/supabase/supabase" rel="noopener noreferrer"&gt;https://github.com/supabase/supabase&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;In the 'Compose path' field, enter &lt;code&gt;/docker/docker-compose.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Copy the contents of the .env file (&lt;a href="https://github.com/supabase/supabase/blob/master/docker/.env.example" rel="noopener noreferrer"&gt;https://github.com/supabase/supabase/blob/master/docker/.env.example&lt;/a&gt;) to a blank file, and update the following values with valid details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPERATOR_TOKEN=your-super-secret-operator-token

JWT_SECRET=your-super-secret-jwt-token-with-at-least-32-characters-long

POSTGRES_PASSWORD=your-super-secret-and-long-postgres-password

# some SMTP server to send your auth-mails with
SMTP_HOST=mail.example.com
SMTP_PORT=
SMTP_USER=
SMTP_PASS=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Under 'Environment variables' click on 'Advanced mode', and then copy and paste the contents of your &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;Finally, click 'Deploy the stack' at the bottom. This process might take a few minutes to finish. Once it is completed, click 'Containers' from the Portainer menu. You may notice that supabase-auth isn't running initially. Simply select it, and then click 'Start'. It should then run without any issues.&lt;/p&gt;

&lt;p&gt;The final step for this section is configuring the hostname for each Supabase service. From the containers menu item in the Portainer dashboard, select each Supabase container. Click on 'Duplicate/edit' near the top of the page. Then, scroll to the bottom of the page and select the 'Network' tab under 'Advanced container settings'. In the 'Hostname' field, enter the name of the service. For example, for the &lt;code&gt;supabase-rest&lt;/code&gt; container, set the hostname to &lt;code&gt;rest&lt;/code&gt;. You can set this to anything you like, but I recommend keeping it relevant to the container. Finally, click 'Deploy this container'. It will warn you that a container with the same name already exists - click 'OK'. After a short while, the container will be redeployed with the new hostname. Repeat these steps for each Supabase container.&lt;/p&gt;

&lt;p&gt;Doing this helps Nginx connect to each container, and makes the containers easier to identify in other parts of the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One very important note regarding the supabase-auth container:&lt;/strong&gt; By default, the &lt;code&gt;DATABASE_URL&lt;/code&gt; will include &lt;code&gt;sslmode=disable&lt;/code&gt;. The equals sign in this value may cause errors. For example, if you try to call the &lt;code&gt;.signUp()&lt;/code&gt; method from the supabase-js library, you may get a 500 error. The reason is that it will attempt to connect to &lt;code&gt;disable&lt;/code&gt; instead of &lt;code&gt;postgres://&amp;lt;db_user&amp;gt;:&amp;lt;password&amp;gt;@db:&amp;lt;db_port&amp;gt;/postgres?sslmode=disable&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The solution to this is to ensure that you remove &lt;code&gt;sslmode=disable&lt;/code&gt; and any other query parameters from the URL.&lt;/p&gt;
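&lt;p&gt;As a quick sketch of the fix (the credentials here are placeholders), you can strip everything from the first &lt;code&gt;?&lt;/code&gt; onwards with shell parameter expansion:&lt;/p&gt;

```shell
# Sketch: remove the query string (including sslmode=disable) from the
# DATABASE_URL before passing it to supabase-auth. Placeholder credentials.
DATABASE_URL="postgres://postgres:your-password@db:5432/postgres?sslmode=disable"
# Strip everything from the first '?' onwards:
FIXED_URL="${DATABASE_URL%%\?*}"
echo "$FIXED_URL"
```

&lt;p&gt;This leaves &lt;code&gt;postgres://postgres:your-password@db:5432/postgres&lt;/code&gt;, which avoids the mis-parsed &lt;code&gt;disable&lt;/code&gt; host.&lt;/p&gt;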

&lt;h4&gt;
  
  
  Configuring Nginx
&lt;/h4&gt;

&lt;p&gt;In order to set up Nginx to support the various services needed for Supabase, we need to modify our Nginx config.&lt;/p&gt;

&lt;p&gt;To do this, you need to SSH into your server, and then &lt;code&gt;cd&lt;/code&gt; your way to the volume where your Nginx config is stored. If you set up a &lt;code&gt;nginx-config&lt;/code&gt; volume as mentioned above, this will (usually) be located at &lt;code&gt;/var/lib/docker/volumes/nginx-config/_data&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;From here, we need to modify 2 files. The first is &lt;code&gt;nginx.conf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Inside the &lt;code&gt;http&lt;/code&gt; section, add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    server realtime:4000;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The other file we need to modify is &lt;code&gt;conf.d/default.conf&lt;/code&gt;. There are many ways you can configure Nginx, but for the sake of simplicity, we're going to keep this to a single file.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;conf.d/default.conf&lt;/code&gt;, replace the entire contents with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 443 ssl;
    server_name db.example.com;

    # REST
    location ~ ^/rest/v1/(.*)$ {
        proxy_set_header Host $host;
        proxy_pass http://kong:8000;
        proxy_redirect off;
    }

    # AUTH
    location ~ ^/auth/v1/(.*)$ {
        proxy_set_header Host $host;
        proxy_pass http://kong:8000;
        proxy_redirect off;
    }

    # REALTIME
    location ~ ^/realtime/v1/(.*)$ {
        proxy_redirect off;
        proxy_pass http://kong:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me explain what's going on here.&lt;/p&gt;

&lt;p&gt;Firstly, our REST handler is listening for all requests sent to the &lt;code&gt;/rest/v1&lt;/code&gt; path, as well as anything after that part of the path. It then routes them to the Supabase Kong server, which by default is accessible internally at &lt;code&gt;http://kong:8000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Likewise, our auth handler is listening for requests to &lt;code&gt;/auth/v1&lt;/code&gt;, and routing them also to Kong.&lt;/p&gt;

&lt;p&gt;Finally, the realtime subscriptions are handled by the &lt;code&gt;/realtime/v1&lt;/code&gt; route. Again, this routes to Kong.&lt;/p&gt;

&lt;p&gt;If you read this guide when it was originally written, I told you to route requests to each individual service. After further investigation, routing everything through Kong is the better solution.&lt;/p&gt;

&lt;p&gt;The reason we proxy like this is to ensure that requests from the Supabase libraries (e.g. &lt;code&gt;@supabase/supabase-js&lt;/code&gt;) are correctly routed to the different services in the stack.&lt;/p&gt;

&lt;p&gt;Kong will handle routing each request to its appropriate service based upon the URL, while Nginx provides a way to expose these endpoints under a single URL.&lt;/p&gt;

&lt;p&gt;After you've made the changes, save the file, and restart the Nginx container from the Portainer dashboard. To make sure everything is working as expected, select the Nginx container and then 'View logs'. It should indicate that everything is running as expected.&lt;/p&gt;

&lt;p&gt;Finally, from the Nginx container (in the Portainer dashboard), connect it to the &lt;code&gt;supabase_default&lt;/code&gt; network - the option to do this is found at the bottom of that page in the dashboard.&lt;/p&gt;
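&lt;p&gt;If you prefer the command line, the same thing can be done over SSH (here &lt;code&gt;nginx&lt;/code&gt; is an assumed container name - substitute whatever yours is called):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network connect supabase_default nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;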

&lt;h4&gt;
  
  
  Necessary Kong configuration
&lt;/h4&gt;

&lt;p&gt;From Portainer, create a new volume from the 'Volumes' menu option. Name the new volume &lt;code&gt;kong-data&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then, go back to 'containers', and select the 'supabase-kong' container. Click on 'Duplicate/Edit' at the top.&lt;/p&gt;

&lt;p&gt;Scroll to the bottom of the page and click 'Volumes'. Click 'Map additional volume'. In the field marked 'Container', enter &lt;code&gt;/var/lib/kong&lt;/code&gt;. Then click on the 'Volume' button to the right side of it.&lt;/p&gt;

&lt;p&gt;From the dropdown field underneath, select the 'kong-data' volume you created a moment ago. Then click 'Deploy this container'.&lt;/p&gt;

&lt;h4&gt;
  
  
  GoTrue / auth environment variables
&lt;/h4&gt;

&lt;p&gt;By default, the environment variables for each container should work without any issues, but the GoTrue / auth container may require some additional configuration.&lt;/p&gt;

&lt;p&gt;For the sake of simplicity, here is an example environment config you can use in the supabase-auth container - edit it to fit your needs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;GOTRUE_OPERATOR_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-super-secret-operator-token
&lt;span class="nv"&gt;GOTRUE_JWT_DEFAULT_GROUP_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;authenticated

&lt;span class="c"&gt;# Make sure this JWT secret matches what was configured during setup&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_JWT_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-super-secret-jwt-token-with-at-least-32-characters-long

&lt;span class="c"&gt;# How long should JWT tokens be valid for?&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_JWT_EXP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3600

&lt;span class="c"&gt;# Since Supabase is based on Postgres, you shouldn't need to change this&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_DB_DRIVER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres

&lt;span class="c"&gt;# What schema should requests be routed to?&lt;/span&gt;
&lt;span class="c"&gt;# There should be no reason to change this&lt;/span&gt;
&lt;span class="nv"&gt;DB_NAMESPACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;auth

&lt;span class="c"&gt;# Where is our auth/GoTrue located&lt;/span&gt;
&lt;span class="c"&gt;# You shouldn't need to change these unless the ports are mapped differently&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_API_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0.0.0.0
&lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;9999

&lt;span class="c"&gt;# Email settings&lt;/span&gt;
&lt;span class="c"&gt;# You must set these if you want to be able to send emails&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_SMTP_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;smtp.your-email-host.com
&lt;span class="nv"&gt;GOTRUE_SMTP_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;465
&lt;span class="nv"&gt;GOTRUE_SMTP_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-smtp-user
&lt;span class="nv"&gt;GOTRUE_SMTP_PASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-smtp-password

&lt;span class="c"&gt;# Should users be required to confirm their email address before they can log in?&lt;/span&gt;
&lt;span class="c"&gt;# If set to false, users won't have to confirm their registration&lt;/span&gt;
&lt;span class="c"&gt;# If set to true, users will have to click the link in their email to confirm&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_MAILER_AUTOCONFIRM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;

&lt;span class="c"&gt;# What is the 'from' address that emails are sent from?&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_SMTP_ADMIN_EMAIL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noreply@example.com

&lt;span class="c"&gt;# Remove this if you don't want debug logs&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_LOG_LEVEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;debug

&lt;span class="c"&gt;# The connection string for your database&lt;/span&gt;
&lt;span class="c"&gt;# `@db` says we're looking for the container called 'db' on our docker network&lt;/span&gt;
&lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgres://postgres:your-super-secret-and-long-postgres-password@db:5432/postgres

&lt;span class="c"&gt;# Email templates&lt;/span&gt;
&lt;span class="c"&gt;# Invite user - provide a URL to a HTML or Text template&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_MAILER_TEMPLATES_INVITE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://example.com/path/to/your/invite/template.html

&lt;span class="c"&gt;# Confirm registration - provide a URL to a HTML or Text template&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_MAILER_TEMPLATES_CONFIRMATION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://example.com/path/to/your/confirmation/template.html

&lt;span class="c"&gt;# Password recovery - provide a URL to a HTML or Text template&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_MAILER_TEMPLATES_RECOVERY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://example.com/path/to/your/password_reset/template.HTML

&lt;span class="c"&gt;# Magic link - provide a URL to a HTML or Text template&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_MAILER_TEMPLATES_MAGIC_LINK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://example.com/path/to/your/magic_link/template.html

&lt;span class="c"&gt;# GoTrue URLs&lt;/span&gt;
&lt;span class="c"&gt;# These are appended after the API_EXTERNAL_URL&lt;/span&gt;
&lt;span class="c"&gt;# You shouldn't need to change these&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_MAILER_URLPATHS_CONFIRMATION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/auth/v1/verify
&lt;span class="nv"&gt;GOTRUE_MAILER_URLPATHS_INVITE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/auth/v1/verify
&lt;span class="nv"&gt;GOTRUE_MAILER_URLPATHS_CONFIRMATION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/auth/v1/verify
&lt;span class="nv"&gt;GOTRUE_MAILER_URLPATHS_RECOVERY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/auth/v1/verify

&lt;span class="c"&gt;# Site URLs&lt;/span&gt;
&lt;span class="c"&gt;# This is where the user will be redirected to after clicking a link in an email and after oAuth&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_SITE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://example.com/redirect_to_here
&lt;span class="nv"&gt;GOTRUE_URI_ALLOW_LIST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://example.com/redirect_to_here

&lt;span class="c"&gt;# This is the URL where your supabase stack is accessible&lt;/span&gt;
&lt;span class="c"&gt;# i.e. this is the endpoint URL you would pass into a `createClient()` call in the supabase-js library&lt;/span&gt;
&lt;span class="nv"&gt;API_EXTERNAL_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://database.example.com/

&lt;span class="c"&gt;# Set this to true if you want to prevent signing up with email and password&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_DISABLE_SIGNUP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# oAuth&lt;/span&gt;
&lt;span class="c"&gt;# If you are not using oAuth to login (e.g. Login with Facebook), you can ignore the below&lt;/span&gt;
&lt;span class="c"&gt;# If you want to disable oAuth for a specific provider, set the `GOTRUE_EXTERNAL_&amp;lt;provider&amp;gt;_ENABLED` to false&lt;/span&gt;
&lt;span class="c"&gt;# Github oAuth&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_GITHUB_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_github_client_id
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_GITHUB_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_github_client_secret
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_GITHUB_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Google oAuth&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-google-client-id.apps.googleusercontent.com
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_GOOGLE_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-google-secret
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_GOOGLE_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Facebook oAuth&lt;/span&gt;
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_FACEBOOK_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-facebook-client-id
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_FACEBOOK_SECRET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-facebook-app-secret
&lt;span class="nv"&gt;GOTRUE_EXTERNAL_FACEBOOK_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Add other oAuth provider details below&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Final checks
&lt;/h4&gt;

&lt;p&gt;At this point, everything should be running as expected.&lt;/p&gt;

&lt;p&gt;You should be able to connect to your Postgres server using your VPS IP address, port 5432, the username &lt;code&gt;postgres&lt;/code&gt;, and the postgres password you set up in the environment variables.&lt;/p&gt;
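&lt;p&gt;As a quick sanity check (the IP and password below are placeholders for your own values), you can assemble the connection string and try it with the &lt;code&gt;psql&lt;/code&gt; client:&lt;/p&gt;

```shell
# Sketch: assemble the connection string for the self-hosted instance.
# Both values below are placeholders for your own details.
VPS_IP="203.0.113.10"
POSTGRES_PASSWORD="your-super-secret-and-long-postgres-password"
CONN_STRING="postgres://postgres:${POSTGRES_PASSWORD}@${VPS_IP}:5432/postgres"
echo "$CONN_STRING"
# With the psql client installed locally, a quick check would be:
#   psql "$CONN_STRING" -c "select version();"
```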

&lt;h4&gt;
  
  
  Closing notes
&lt;/h4&gt;

&lt;p&gt;As stated previously, this article is a work in progress. I am still working through some of the finer details of configuring the Supabase stack for self-hosting. There might be some things missing from the article - leave a comment to let me know, and I'll look into adding them.&lt;/p&gt;

&lt;p&gt;Setting up Supabase for self-hosting is by no means a quick task, and there's still room for improvement. Hopefully, this article will help speed up the process for you and set you off on the journey.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>supabase</category>
      <category>nginx</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Server-side MobX</title>
      <dc:creator>Scott P</dc:creator>
      <pubDate>Sat, 19 Dec 2020 12:32:18 +0000</pubDate>
      <link>https://dev.to/chronsyn/server-side-mobx-2aog</link>
      <guid>https://dev.to/chronsyn/server-side-mobx-2aog</guid>
      <description>&lt;h3&gt;
  
  
  What?
&lt;/h3&gt;

&lt;p&gt;I know what you're thinking - "State management on a server? Shouldn't servers be stateless?"&lt;/p&gt;

&lt;p&gt;Today, I'm going to jump through a few use-cases for having server-side state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why?
&lt;/h3&gt;

&lt;p&gt;We've been taught that servers, in general, should be stateless, and that anything your server needs should be stored in a database or a config file.&lt;/p&gt;

&lt;p&gt;What happens if you want to dynamically control these? For example, let's say you have some scheduled tasks running in your app. You could potentially use Cron (or one of the Cron libraries for a code-based solution). If you want to change them, that's either updating a config, or doing a new deploy with new code.&lt;/p&gt;

&lt;p&gt;Likewise, if you want to manage which GraphQL resolvers are enabled, you'd have to go through the same process.&lt;/p&gt;

&lt;p&gt;With the rise of services like DigitalOcean's App Platform, neither of these is an ideal solution: they either require changing the config inside a Docker container (which you'd have to reconfigure each time you redeploy), or they require a redeploy, using up some of those crucial build minutes and forcing you to fix any issues that crop up.&lt;/p&gt;

&lt;p&gt;What if I told you there was an easier way? For this discussion, I'm going to be using FaunaDB, combined with MobX 6.0 to create dynamic config management in a Node server. I won't be covering FaunaDB integration in this article, as you could use any database solution, or even just have a remote file with your config stored.&lt;/p&gt;

&lt;h3&gt;
  
  
  How?
&lt;/h3&gt;

&lt;p&gt;For this example, I'm going to use the scheduled tasks configuration. My use case is retrieving tweets from the National Rail enquiries Twitter account, which provides train delay information in the UK.&lt;/p&gt;

&lt;p&gt;However, I only want to run that task if I've enabled it. Retrieving tweets is a typical use case for many applications, but consider it as just an example for the sake of this article.&lt;/p&gt;

&lt;p&gt;The first thing to do is create a MobX store. This is just a class, with some properties which are marked as &lt;code&gt;@observable&lt;/code&gt;, an &lt;code&gt;@action&lt;/code&gt; to update the state, and some &lt;code&gt;@computed&lt;/code&gt; getters to retrieve single fields from my state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@Modules/Logging/logging.module&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;observable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;computed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;makeAutoObservable&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mobx&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;IFaunaDbEnvironmentScheduledTask&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./interfaces&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;enum&lt;/span&gt; &lt;span class="nx"&gt;EScheduledTask&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;fetch_and_import_national_rail_delay_tweets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;fetch_and_import_national_rail_delay_tweets&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;RScheduledTask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Record&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;EScheduledTask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;IFaunaDbEnvironmentScheduledTask&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;IFaunaDbEnvironmentConfig&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;myapi&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;scheduled_tasks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;RScheduledTask&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;EnvironmentConfigStore&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;makeAutoObservable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;observable&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nx"&gt;EnvironmentConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IFaunaDbEnvironmentConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;action&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nx"&gt;setConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IFaunaDbEnvironmentConfig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;debug&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`Server config loaded to store successfully!`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;EnvironmentConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;computed&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;get&lt;/span&gt; &lt;span class="nf"&gt;scheduledTasks&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;EnvironmentConfig&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;myapi&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;scheduled_tasks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;EnvironmentConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;EnvironmentConfigStore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;EnvironmentConfig&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, I've defined an interface for my state (which matches the structure of a document stored in FaunaDB), created a state store class, and decorated my properties - all fairly standard MobX. I've also called &lt;code&gt;makeAutoObservable&lt;/code&gt; in the constructor, and there's a &lt;code&gt;logger.log&lt;/code&gt; call in there - that's just a standard Winston logger.&lt;/p&gt;
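One thing the snippet above imports but never shows is `IFaunaDbEnvironmentScheduledTask`. A hypothetical shape for it, inferred purely from how it's used later (`.enabled` and `.frequency_in_ms`), might look like this:

```typescript
// Hypothetical shape for IFaunaDbEnvironmentScheduledTask - the exact
// interface isn't shown in the article; these fields are inferred from
// how the reaction code reads it later.
interface IFaunaDbEnvironmentScheduledTask {
    enabled: boolean;        // should the task run at all?
    frequency_in_ms: number; // how often the timer should fire
}

// A sample document as it might be stored in FaunaDB:
const sampleTask: IFaunaDbEnvironmentScheduledTask = {
    enabled: true,
    frequency_in_ms: 60_000, // once a minute
};
```

Each scheduled task in the config document would carry one of these, keyed by its `EScheduledTask` name.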

&lt;p&gt;The next step is to use a MobX &lt;code&gt;reaction&lt;/code&gt; to monitor my scheduled task. I do this in a separate file, to keep the code modular:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;reaction&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mobx&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;EnvironmentConfigStore&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@Stores/store.environmentConfig&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;logger&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@Modules/Logging/logging.module&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;timer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;NodeJS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Timeout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;disableTimer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;clearInterval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;timer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Check if the task is enabled&lt;/span&gt;
&lt;span class="c1"&gt;// Disables timer if not&lt;/span&gt;
&lt;span class="nf"&gt;reaction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;EnvironmentConfigStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scheduledTasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fetch_and_import_national_rail_delay_tweets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;enabled&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;debug&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;`fetch_and_import_national_rail_delay_tweets is now &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;enabled&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;enabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;disabled&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;!`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;enabled&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nf"&gt;disableTimer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;timer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setInterval&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Task would run now!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="nx"&gt;EnvironmentConfigStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scheduledTasks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fetch_and_import_national_rail_delay_tweets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;frequency_in_ms&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;fireImmediately&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;onError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error in reaction: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What we're doing here is creating a &lt;code&gt;reaction&lt;/code&gt; which will trigger each time the &lt;code&gt;scheduledTasks.fetch_and_import_national_rail_delay_tweets.enabled&lt;/code&gt; property changes.&lt;/p&gt;

&lt;p&gt;If the property changes to &lt;code&gt;false&lt;/code&gt;, we stop the timer; otherwise, we start it. At the moment the timer's callback is just a &lt;code&gt;console.log("Task would run now!")&lt;/code&gt;, but you can do whatever you need in there.&lt;/p&gt;

&lt;p&gt;Since the reaction only runs when the value changes, the timer is only created when the value becomes &lt;code&gt;true&lt;/code&gt;, and only cleared when it becomes &lt;code&gt;false&lt;/code&gt;. In other words, you won't end up with multiple timers running if you use &lt;code&gt;reaction&lt;/code&gt; in this way.&lt;/p&gt;

&lt;p&gt;The final step is to get the config from FaunaDB, and update the store:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;EnvironmentConfigStore&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@Modules/Stores/store.environmentConfig&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;doSomethingThatRetrievesConfig&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;myConfig&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;EnvironmentConfigStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;myConfig&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, I retrieve the config from FaunaDB and then update the store. You could run this in a timer to retrieve it every so often, or you could subscribe to the document instead - the process is the same in either case.&lt;/p&gt;
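The "run this in a timer" option can be sketched as a small polling helper. The names `fetchConfig` and `applyConfig` here are placeholders for whatever retrieves the FaunaDB document and pushes it into the store (e.g. `EnvironmentConfigStore.setConfig`):

```typescript
// Hypothetical polling helper. fetchConfig / applyConfig are placeholders
// for your FaunaDB query and your store update respectively.
function startConfigPolling(
    fetchConfig: () => any,             // may return a value or a Promise
    applyConfig: (config: any) => void, // e.g. EnvironmentConfigStore.setConfig
    intervalMs: number
): () => void {
    const tick = () => Promise.resolve(fetchConfig()).then(applyConfig);
    tick(); // load the config once up front
    const timer = setInterval(tick, intervalMs);
    return () => clearInterval(timer); // call this to stop polling
}

// Usage: poll once a minute, and keep the stop function around.
let latestConfig: any = null;
const stopPolling = startConfigPolling(
    () => ({ ok: true }), // stand-in for the real FaunaDB fetch
    (config) => { latestConfig = config; },
    60_000
);
stopPolling();
```

How often you poll is a trade-off between how quickly config changes propagate and how much load you put on the database; a document subscription avoids the trade-off entirely if your driver supports it.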

&lt;p&gt;That's all there is to it. Whenever I update the document which contains my server config on FaunaDB, this is propagated to the store, which then handles enabling or disabling the timer for the scheduled task.&lt;/p&gt;

&lt;p&gt;You can integrate this in any way that feels right for your codebase.&lt;/p&gt;

&lt;h3&gt;Other use cases&lt;/h3&gt;

&lt;p&gt;There are countless potential use cases for this. Here are just a few:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamically enabling or disabling GraphQL resolvers&lt;/li&gt;
&lt;li&gt;Marking a server as production, staging, local, etc.&lt;/li&gt;
&lt;li&gt;Enabling or disabling access to routes dynamically&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Final notes&lt;/h3&gt;

&lt;p&gt;If you want to be able to configure your server at runtime, and serverless isn't a good fit for your project, then some sort of state management becomes necessary. The beauty of this method is that it works with any database system. You could even store the config in a file somewhere and periodically re-read it instead, though you'd need to make sure it's properly secured.&lt;/p&gt;

&lt;p&gt;To reiterate, my use case was based on DigitalOcean App Platform, and I wanted an easy way to manage scheduled tasks (amongst some other server config, which isn't covered here).&lt;/p&gt;

</description>
      <category>mobx</category>
      <category>javascript</category>
      <category>typescript</category>
      <category>node</category>
    </item>
  </channel>
</rss>
