<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nhost</title>
    <description>The latest articles on DEV Community by Nhost (@nhost).</description>
    <link>https://dev.to/nhost</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4376%2F05c347c7-2392-4173-b11b-f3ec923bb7c0.png</url>
      <title>DEV Community: Nhost</title>
      <link>https://dev.to/nhost</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nhost"/>
    <language>en</language>
    <item>
      <title>Individual PostgreSQL instances to everyone</title>
      <dc:creator>Johan Eliasson</dc:creator>
      <pubDate>Mon, 26 Sep 2022 18:02:22 +0000</pubDate>
      <link>https://dev.to/nhost/individual-postgresql-instances-to-everyone-93p</link>
      <guid>https://dev.to/nhost/individual-postgresql-instances-to-everyone-93p</guid>
      <description>&lt;p&gt;Welcome to Nhost’s &lt;strong&gt;first-ever&lt;/strong&gt; launch week!&lt;/p&gt;

&lt;p&gt;Today we’re excited to announce that all new projects get their own dedicated Postgres instance with root access. It's finally possible to connect directly to the database with your favorite Postgres client.&lt;/p&gt;

&lt;h2&gt;Background&lt;/h2&gt;

&lt;p&gt;When we launched Nhost v2, all databases were hosted and managed on Amazon RDS. Our reasons for starting with RDS were twofold:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Having the most crucial component of our infrastructure managed and scaled by an experienced team on a mature product seemed like an excellent idea. We wouldn’t have to manage and operate it ourselves.&lt;/li&gt;
&lt;li&gt;With v2, all services (e.g., GraphQL, Authentication, and Storage) were moved to Kubernetes because of its flexibility and extensibility. Running a stateful component like Postgres on Kubernetes comes with a whole set of challenges of its own, and we wanted to focus on running the stateless components well.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Kubernetes is a complex piece of technology to master, but once you do, it gives infrastructure teams superpowers. All projects running on Nhost can scale vertically (adding resources to existing instances) and horizontally (adding new instances/replicas) for each service individually (GraphQL, Auth, Storage, and now Postgres, though Postgres only vertically). This means your projects can cope with the load of your application, whether sustained or due to spikes in demand, while also keeping your products highly available if the underlying infrastructure is misbehaving or faulty. If a node goes down, your services are almost instantly moved to a healthy one. This is why we were able to easily cope with 2M+ requests in less than 24h when &lt;a href="https://nhost.io/blog/how-nhost-took-midnight-society-from-mock-up-to-a-400000-user-launch-in-just-6-weeks"&gt;Midnight Society&lt;/a&gt; launched - it just worked without any manual work from us.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The RDS setup comprised one big, database-optimized instance in every region we operate in. One instance would hold multiple databases for multiple projects.&lt;/p&gt;

&lt;p&gt;We quickly realized that running a multi-tenant database offering on RDS would be problematic because of resource contention and the noisy neighbor effect. The noisy neighbor issue occurs when one application uses the majority of available resources and causes performance issues for others on the shared infrastructure. A complex query or a missing index could degrade performance across the entire instance, affecting not only the offending application but others on the same instance as well.&lt;/p&gt;

&lt;p&gt;Although we were able to mitigate this issue by scaling the instances vertically (CPU, memory) and horizontally (scale out / more instances per region), it became painfully clear it wasn’t a definitive solution and that we were not fixing the fundamental problem.&lt;/p&gt;

&lt;p&gt;Other, smaller but relevant issues that made us switch were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RDS for PostgreSQL is not really raw PostgreSQL and lacks some of its flexibility (e.g., &lt;code&gt;postgres&lt;/code&gt; is not a superuser)&lt;/li&gt;
&lt;li&gt;The set of available extensions is very limited and cannot be changed&lt;/li&gt;
&lt;li&gt;There was no easy way to give users direct access to their databases as the &lt;code&gt;postgres&lt;/code&gt; user (a highly requested feature)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;PostgreSQL running on Kubernetes&lt;/h2&gt;

&lt;p&gt;After discussing the topic of running stateful workloads on Kubernetes with a couple of industry experts and hearing about some awesome database companies (PlanetScale and Crunchy Data) already doing so, we finally dove in and took the time to research and experiment.&lt;/p&gt;

&lt;p&gt;This was a considerable amount of work that involved the entire team: researching existing solutions for deploying Postgres on Kubernetes, ensuring we could scale the database according to our users' needs, and, of course, adapting our internal systems to provision, operate, and scale our users' databases. In addition, we built a one-click migration process, coming to the dashboard soon, so you can move your existing projects from RDS to a dedicated Postgres instance at your convenience.&lt;/p&gt;

&lt;p&gt;After testing the new setup internally for a few months, we launched a private beta with 20 users a couple of months ago. During that period we gathered useful feedback, fixed a couple of issues, and, most notably, heard from most users that they were seeing performance improvements.&lt;/p&gt;

&lt;p&gt;All in all, we are extremely happy with the result. It is a top priority for us to provide a stable, performant, scalable, and resilient platform so you can build your projects with us and forget about the infrastructure and its operational needs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It is important to mention that we have the ability to use external PostgreSQL providers if required. If your application has special requirements due to compliance, multi-region needs, or you just happen to like any of those cool database companies out there we can accommodate and connect your application to the database of your choosing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;What does this mean for you?&lt;/h2&gt;

&lt;p&gt;As mentioned, the overall stability and performance gains are the most important reasons why we are now giving individual instances to everyone, but there are a few other points I would like to mention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You now own the full PostgreSQL instance. When creating a project, you will be asked for a password for the &lt;code&gt;postgres&lt;/code&gt; superuser, and you can then use any Postgres client to connect directly to your database via the connection string. Be careful: with great power comes great responsibility.&lt;/li&gt;
&lt;li&gt;You are now able to install the extensions you need as long as we support them. We will be continuously adding new extensions and will make sure to listen to you on which ones we should prioritize.&lt;/li&gt;
&lt;li&gt;You will soon be able to scale up your database and give it as many resources as needed (CPU and memory).&lt;/li&gt;
&lt;li&gt;The Hasura GraphQL engine runs alongside your Postgres database, meaning minimal latency for your requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Mvy7aLAT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-09-26-individual-postgres-instances/connection-string.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Mvy7aLAT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-09-26-individual-postgres-instances/connection-string.png" alt="Connection String" width="880" height="295"&gt;&lt;/a&gt;&lt;/p&gt;
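
&lt;p&gt;The connection string follows the standard Postgres URI format (&lt;code&gt;postgres://user:password@host:port/database&lt;/code&gt;). As a quick illustration with made-up values (your real values appear in the dashboard), Node's built-in &lt;code&gt;URL&lt;/code&gt; class can pull the parts apart:&lt;/p&gt;

```javascript
// Hypothetical connection string; use the real one from your Nhost dashboard.
const connectionString =
  'postgres://postgres:my-secret-password@db.example.nhost.run:5432/mydb';

// Node's WHATWG URL parser handles postgres:// URIs generically.
const url = new URL(connectionString);

console.log(url.username);          // 'postgres' (the superuser)
console.log(url.hostname);          // 'db.example.nhost.run'
console.log(url.port);              // '5432'
console.log(url.pathname.slice(1)); // 'mydb' (the database name)
```

&lt;p&gt;Most Postgres clients, including &lt;code&gt;psql&lt;/code&gt;, accept this URI form directly.&lt;/p&gt;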

&lt;h2&gt;What's next?&lt;/h2&gt;

&lt;p&gt;We are really excited not only about the stability we are able to provide but also about the world of possibilities brought by moving our PostgreSQL offering to Kubernetes. We now have the right foundation in place to look into other features like read replicas or multi-region deployments. Building robust and highly scalable applications should be &lt;strong&gt;fun&lt;/strong&gt;, &lt;strong&gt;fast&lt;/strong&gt;, and &lt;strong&gt;easy&lt;/strong&gt; for everyone. Let us take care of the hard and boring stuff!&lt;/p&gt;

&lt;p&gt;P.S.: If you like what we are doing, please support our work by giving us a star on &lt;a href="https://github.com/nhost/nhost"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Launching Nhost CDN: Nhost Storage Is Now Blazing Fast™</title>
      <dc:creator>Johan Eliasson</dc:creator>
      <pubDate>Wed, 29 Jun 2022 06:33:45 +0000</pubDate>
      <link>https://dev.to/nhost/launching-nhost-cdn-nhost-storage-is-now-blazing-fast-39jh</link>
      <guid>https://dev.to/nhost/launching-nhost-cdn-nhost-storage-is-now-blazing-fast-39jh</guid>
      <description>&lt;p&gt;Today we're launching Nhost CDN to make Nhost Storage blazing fast™.&lt;/p&gt;

&lt;p&gt;Nhost CDN can serve files up to &lt;strong&gt;104x faster&lt;/strong&gt; than before so you can deliver an amazing experience for your users.&lt;/p&gt;

&lt;p&gt;To achieve this incredible speed, we're using a global network of edge computers on a tier 1 transit network with only solid-state drive (SSD) powered servers. Nhost CDN is live for all projects on Nhost, starting today.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XmOH9RVM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3rmisq5qh6y9w2pzvzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XmOH9RVM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3rmisq5qh6y9w2pzvzc.png" alt="Nhost CDN Locations" width="880" height="526"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog post we'll go through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to start using Nhost CDN?&lt;/li&gt;
&lt;li&gt;What is a CDN?&lt;/li&gt;
&lt;li&gt;How we built Nhost CDN and what challenges we faced.&lt;/li&gt;
&lt;li&gt;Benefits you will notice with Nhost CDN.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we kick off, Nhost Storage is built on &lt;a href="http://github.com/nhost/hasura-storage"&gt;Hasura Storage&lt;/a&gt; which is &lt;a href="https://nhost.io/blog/hasura-storage-in-go-5x-performance-increase-and-40-percent-less-ram"&gt;impressively fast&lt;/a&gt; already. With today's launch of Nhost CDN, it's even faster!&lt;/p&gt;

&lt;h2&gt;How to start using Nhost CDN?&lt;/h2&gt;

&lt;p&gt;Upgrade to the latest &lt;a href="https://docs.nhost.io/reference/javascript"&gt;Nhost JavaScript SDK&lt;/a&gt; version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @nhost/nhost-js@latest
&lt;span class="c"&gt;# or yarn&lt;/span&gt;
yarn add @nhost/nhost-js@latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're using any of the &lt;a href="https://docs.nhost.io/reference/react"&gt;React&lt;/a&gt;, &lt;a href="https://docs.nhost.io/reference/nextjs"&gt;Next.js&lt;/a&gt;, or &lt;a href="https://docs.nhost.io/reference/vue"&gt;Vue&lt;/a&gt; SDKs, make sure you update them to the latest version instead.&lt;/p&gt;

&lt;p&gt;Then, initialize the Nhost Client using &lt;code&gt;subdomain&lt;/code&gt; and &lt;code&gt;region&lt;/code&gt; instead of &lt;code&gt;backendUrl&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;NhostClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@nhost/nhost-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nhost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;NhostClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;subdomain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;your-subdomain&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;your-region&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find the &lt;code&gt;subdomain&lt;/code&gt; and &lt;code&gt;region&lt;/code&gt; of your Nhost project in the &lt;a href="https://app.nhost.io"&gt;Nhost dashboard&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Locally, use &lt;code&gt;subdomain: 'localhost'&lt;/code&gt;, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;NhostClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@nhost/nhost-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nhost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;NhostClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;subdomain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Everything else works as before. You can now enjoy extreme speed with the Nhost CDN serving your files.&lt;/p&gt;

&lt;p&gt;Keep reading to learn what a CDN is, what technical challenges we faced, and the incredible performance improvements Nhost CDN brings to your users.&lt;/p&gt;

&lt;h2&gt;What is a CDN?&lt;/h2&gt;

&lt;p&gt;Before we start diving into technical details and fancy numbers let's briefly talk about what CDNs are and why they are important.&lt;/p&gt;

&lt;p&gt;CDN stands for "Content Delivery Network". Roughly speaking, CDNs are highly distributed caches with lots of bandwidth, located very close to where users live. They help online services and applications serve content by storing copies of it where it is most needed, so users don't have to reach the origin. For instance, if your origin is in Frankfurt but your users are in India or Singapore, the CDN can store copies of your content in caches in those locations and save users the trouble of reaching Frankfurt for it. Done properly, this has many benefits both for users and for the people responsible for the online services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;From a user perspective&lt;/strong&gt;: Users will experience less latency because they don't need to reach all the way to Frankfurt to get the content. Instead, they can fetch the content from the local cache in their region. This is even more important in regions where connectivity may not be as good and where packet losses or bottlenecks between service providers are common.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;From an application developer perspective:&lt;/strong&gt; Each request served from a cache is a request that didn't need to reach your origin. This will lower your infrastructure costs as you have to serve fewer requests and, thus, lower your CPU, RAM, and network usage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before leaving this topic, let's look at a quick example. Imagine the following scenario:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TldxAhfx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-06-22-launching-nhost-cdn-nhost-storage-is-now-blazing-fast/cdn-explained.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TldxAhfx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-06-22-launching-nhost-cdn-nhost-storage-is-now-blazing-fast/cdn-explained.png" alt="CDN Explained" width="880" height="1193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example above, Pratim and Nestor are clients while Nuno is the CDN. In a faraway land, we have our origin, Johan.&lt;/p&gt;

&lt;p&gt;When Pratim first asks Nuno about the meaning of life, Nuno doesn't know it, so he asks Johan. When Johan responds, Nuno stores a copy of the response and sends it to Pratim.&lt;/p&gt;

&lt;p&gt;Later, when Nestor asks Nuno about the meaning of life, Nuno already has a copy of the response, so he can send it to Nestor right away, reducing latency and saving Johan the trouble of responding to the same query again.&lt;/p&gt;

&lt;p&gt;This is great, but it comes with some challenges. Next, we will talk about some of those, how we handle them for you in our integration with Nhost Storage, and the performance gains you may see thanks to this integration.&lt;/p&gt;

&lt;h2&gt;Cache invalidation&lt;/h2&gt;

&lt;p&gt;As we mentioned previously, CDNs will store copies of your origin responses and serve them directly to users when available. However, things change, so you may need to tell the CDN that its copy of a response is no longer up to date and needs to be removed from the cache. This process is called “cache invalidation” or “purging”.&lt;/p&gt;

&lt;p&gt;In the case of Nhost Storage, cache invalidation is handled automatically for you. Every time a file is deleted or changed, we instruct the CDN to invalidate the cache for that particular object.&lt;/p&gt;

&lt;p&gt;However, this isn't as easy as it sounds: Nhost Storage not only serves static files, it can also manipulate images (e.g. generate thumbnails) and generate presigned URLs. This means that for a given file in Nhost Storage there may be multiple versions of the same object cached in the CDN. If you don't invalidate them all, you may still serve files that were deleted or, worse, the wrong version of an object.&lt;/p&gt;

&lt;p&gt;To solve this issue, we attach to each response a special header, &lt;code&gt;Surrogate-Key&lt;/code&gt;, containing the &lt;code&gt;fileID&lt;/code&gt; of the object being served. It doesn't matter whether you are serving the original image, a thumbnail, or a presigned URL of it: they all share the same &lt;code&gt;Surrogate-Key&lt;/code&gt;. When Nhost Storage needs to invalidate a file, it simply instructs the CDN to invalidate all cached responses carrying that &lt;code&gt;Surrogate-Key&lt;/code&gt;.&lt;/p&gt;
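
&lt;p&gt;The mechanism can be sketched with a toy in-memory cache (purely illustrative - this is not Fastly's API or our actual implementation): every cached variant of a file is stored under its own URL but tagged with the same surrogate key, so a single purge-by-key call removes them all.&lt;/p&gt;

```javascript
// Toy surrogate-key cache, for illustration only.
class SurrogateCache {
  constructor() {
    this.entries = new Map(); // url -> { body, surrogateKey }
  }

  // Store a response along with the Surrogate-Key header it carried.
  put(url, body, surrogateKey) {
    this.entries.set(url, { body, surrogateKey });
  }

  get(url) {
    const entry = this.entries.get(url);
    return entry ? entry.body : undefined;
  }

  // Invalidate every cached response tagged with the given key.
  purgeByKey(surrogateKey) {
    for (const [url, entry] of this.entries) {
      if (entry.surrogateKey === surrogateKey) this.entries.delete(url);
    }
  }
}

const cache = new SurrogateCache();
const fileID = '1ff8ef8d-3240-4cf3-805f-fc3d61d190b2';

// The original file and a thumbnail are different URLs, same Surrogate-Key.
cache.put(`/v1/files/${fileID}`, 'original bytes', fileID);
cache.put(`/v1/files/${fileID}?w=100`, 'thumbnail bytes', fileID);

cache.purgeByKey(fileID); // one purge removes every variant at once
```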

&lt;h2&gt;Security&lt;/h2&gt;

&lt;p&gt;At this point, you may be considering the security implications of this. What happens if a file is private? Does the CDN serve its stored copy to anyone who requests it, or is this only useful for public files? Well, I'm glad you asked. The short answer is that you don't have to worry: you can still benefit from the CDN while keeping your files private.&lt;/p&gt;

&lt;p&gt;The longer answer is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the CDN, we flag cached content that required some form of authorization header&lt;/li&gt;
&lt;li&gt;When a user requests content flagged as private, the CDN makes a conditional request to the origin. The origin authenticates the request and returns a 304 if authentication succeeds.&lt;/li&gt;
&lt;li&gt;The CDN serves the cached object to the user only if the conditional request succeeded.&lt;/li&gt;
&lt;/ol&gt;
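
&lt;p&gt;The flow above can be sketched as follows (a simplified model - the function names and the shape of the cache entry are made up for illustration):&lt;/p&gt;

```javascript
// Origin side: validate the Authorization header and answer 304 Not Modified
// (no body) on success, so the CDN knows it may serve its cached copy.
function originConditionalCheck(request, authenticate) {
  return authenticate(request.headers.authorization) ? 304 : 403;
}

// CDN side: public objects are served straight from cache; private objects
// trigger a lightweight conditional request to the origin first.
function serveFromCdn(request, cached, authenticate) {
  if (!cached.private) return { status: 200, body: cached.body };

  const originStatus = originConditionalCheck(request, authenticate);
  if (originStatus === 304) {
    // Auth passed: the file bytes come from the cache, not the origin.
    return { status: 200, body: cached.body };
  }
  return { status: originStatus, body: null };
}

const cached = { private: true, body: 'file bytes' };
const isValidToken = (token) => token === 'Bearer valid-token';

const ok = serveFromCdn(
  { headers: { authorization: 'Bearer valid-token' } }, cached, isValidToken);
const denied = serveFromCdn(
  { headers: { authorization: 'Bearer expired-token' } }, cached, isValidToken);
// ok is served from cache; denied never sees the file bytes
```

&lt;p&gt;The important property is that the conditional exchange carries only headers, while the (potentially large) body always comes from the cache.&lt;/p&gt;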

&lt;p&gt;Even though you still need a round trip to the origin to authenticate the user, you benefit from the CDN because the request to the origin is very lightweight (just a few bytes of headers going back and forth) and the file itself is still served from the CDN cache. Below is an example of two users requesting the same file:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The cache is empty, so the CDN requests the file from the origin and stores it; total request time from the origin's perspective is 5.15s:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"2022-06-16T12:16:28Z"&lt;/span&gt; &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;client_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.128.78.244 &lt;span class="nv"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"[]"&lt;/span&gt; &lt;span class="nv"&gt;latency_time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5.157454279s &lt;span class="nv"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;GET &lt;span class="nv"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;206 &lt;span class="nv"&gt;url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/v1/files/1ff8ef8d-3240-4cf3-805f-fc3d61d190b2                                           │
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;The cache already has the object, but it is flagged as private, so the CDN makes a conditional request to authenticate the user. Total request time from the origin's perspective is 218.28ms (after the 304, the actual file is served directly from the CDN with no further origin interaction):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"2022-06-16T12:16:41Z"&lt;/span&gt; &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;info &lt;span class="nv"&gt;client_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.128.78.244 &lt;span class="nv"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"[]"&lt;/span&gt; &lt;span class="nv"&gt;latency_time&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;218.283899ms &lt;span class="nv"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;GET &lt;span class="nv"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;304 &lt;span class="nv"&gt;url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/v1/files/1ff8ef8d-3240-4cf3-805f-fc3d61d190b2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Serving Large Files&lt;/h2&gt;

&lt;p&gt;Serving large files poses two interesting challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How do you cache large files efficiently?&lt;/li&gt;
&lt;li&gt;How do you cache partial content if a connection drops?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These two challenges are related and have a common solution. For instance, imagine you have a 1GB file in your storage and a user starts downloading it, but the connection drops after 750MB. What happens when the next user arrives? Do you have to start over? And if the file is downloaded fully, do you keep the entire file in the cache?&lt;/p&gt;

&lt;p&gt;To support these use cases, Nhost Storage supports the &lt;code&gt;Range&lt;/code&gt; header. This header lets you ask the origin for only a chunk of a file. For instance, setting the header &lt;code&gt;Range: bytes=0-1023&lt;/code&gt; instructs Nhost Storage to send you only the first 1024 bytes of a file (range bounds are inclusive).&lt;/p&gt;

&lt;p&gt;In the CDN we leverage this feature to download large files in chunks of 10MB. This way, if a connection drops, we can store the chunks already fetched and serve them later when another user requests the same file.&lt;/p&gt;
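
&lt;p&gt;As a rough sketch of the idea (with a toy in-memory origin and tiny 10-byte chunks instead of the real 10MB), downloading a file chunk by chunk with &lt;code&gt;Range&lt;/code&gt; headers looks like this:&lt;/p&gt;

```javascript
// Toy origin: serve the byte range named by a `Range: bytes=start-end`
// header. Both bounds are inclusive, per the HTTP spec.
function serveRange(file, rangeHeader) {
  const [, start, end] = rangeHeader.match(/bytes=(\d+)-(\d+)/).map(Number);
  return file.slice(start, Math.min(end + 1, file.length));
}

// Download a file in fixed-size chunks. Each chunk is cacheable on its own,
// so a dropped connection loses at most the chunk in flight.
function downloadInChunks(file, totalSize, chunkSize) {
  const chunks = [];
  for (let start = 0; totalSize > start; start += chunkSize) {
    const end = Math.min(start + chunkSize - 1, totalSize - 1);
    chunks.push(serveRange(file, `bytes=${start}-${end}`));
  }
  return chunks.join('');
}

const file = 'x'.repeat(25); // stand-in for a large object
const whole = downloadInChunks(file, file.length, 10); // ranges 0-9, 10-19, 20-24
console.assert(whole === file);
```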

&lt;h2&gt;Tweaking TCP Parameters&lt;/h2&gt;

&lt;p&gt;Another optimization we can make on the CDN platform is tweaking TCP parameters. For instance, we can increase the &lt;a href="https://en.wikipedia.org/wiki/TCP_congestion_control#Congestion_window"&gt;congestion window&lt;/a&gt;, which is particularly useful when latency is high. Thanks to this, we can improve download times even when the file isn't cached yet.&lt;/p&gt;

&lt;h2&gt;Shielding&lt;/h2&gt;

&lt;p&gt;We mentioned that caches are located close to users, which means the cache a user in Cape Town hits isn't the same one a user in Paris would. A direct implication is that a user in Paris can't benefit from content cached in another location.&lt;/p&gt;

&lt;p&gt;This is only true up to a point. We use a technique called “shielding”, which lets us use a location close to the origin as a sort of “global” cache. With shielding, a cache that doesn't have a copy of the file it needs queries the shield location instead of the origin. This way you can still reduce the load on your origin and improve your users' experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wp_1HTmP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-06-22-launching-nhost-cdn-nhost-storage-is-now-blazing-fast/sheilding-explained.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wp_1HTmP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-06-22-launching-nhost-cdn-nhost-storage-is-now-blazing-fast/sheilding-explained.png" alt="Shielding Explained" width="880" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Performance Metrics&lt;/h2&gt;

&lt;p&gt;To showcase our CDN integration we are going to perform three simple tests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download a public image (~150kB)&lt;/li&gt;
&lt;li&gt;Download a private image (~150kB)&lt;/li&gt;
&lt;li&gt;Download a private large file (45MB)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To make things more interesting, we deploy the Nhost app in Singapore while the client sits in Stockholm, Sweden, adding ~200ms of latency.&lt;/p&gt;

&lt;p&gt;As you can see in the graph below, even when the content isn't cached (a miss), we see a significant improvement in download times; downloading the images takes less than half the time, and downloading the large file takes 30% less time. This is thanks to the TCP tweaks we can apply on the CDN platform.&lt;/p&gt;

&lt;p&gt;Improvements are more dramatic when the object is already cached: we can get the public image in just 21ms, compared to the 2.19s it took to get the file directly from Nhost Storage. Downloading the private image goes from 2.07s down to 403ms, which makes sense given that latency is ~200ms and we need a round trip to the origin to authenticate the user before we can serve the object.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XCWPjLJR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-06-22-launching-nhost-cdn-nhost-storage-is-now-blazing-fast/performance-metrics.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XCWPjLJR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://nhost.io/blog/2022-06-22-launching-nhost-cdn-nhost-storage-is-now-blazing-fast/performance-metrics.png" alt="CDN Performance Metrics" width="802" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Did you build a CDN network?&lt;/h2&gt;

&lt;p&gt;No, we didn't. We leverage &lt;a href="https://fastly.com"&gt;Fastly&lt;/a&gt;'s expertise, so you benefit from their &lt;a href="https://www.fastly.com/network-map"&gt;large infrastructure&lt;/a&gt; while we enjoy the flexibility to tailor the service to your needs.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Integrating a CDN with a service like Nhost Storage isn't an easy task, but by doing so we have improved performance across the board, allowing you to serve content faster and give your users a better experience no matter where they are.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Hasura Storage in Go: 5x performance increase and 40% less RAM</title>
      <dc:creator>Johan Eliasson</dc:creator>
      <pubDate>Sun, 29 May 2022 06:53:04 +0000</pubDate>
      <link>https://dev.to/nhost/hasura-storage-in-go-5x-performance-increase-and-40-less-ram-5g6a</link>
      <guid>https://dev.to/nhost/hasura-storage-in-go-5x-performance-increase-and-40-less-ram-5g6a</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/nhost/hasura-storage"&gt;Hasura Storage&lt;/a&gt; is an open source service that bridges any S3-compatible cloud storage service with Hasura and it is the service we, at Nhost, use to provide storage capabilities to our users.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C5OD--Wf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4mvwl77717clx9xlpy5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C5OD--Wf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4mvwl77717clx9xlpy5.png" alt="Files browser in the Nhost Console" width="880" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Its objective is to allow users to combine the features they love about Hasura (permissions, events, actions, presets, etc.) with the convenience of being able to serve files online.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tDgga60i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwwx5b1jkdxsd99w0i5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tDgga60i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwwx5b1jkdxsd99w0i5o.png" alt="Setting permissions to allow users to upload files to the bucket  raw `profile-pics` endraw  and presetting the value  raw `uploaded_by_user_id` endraw " width="880" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8uSwA3th--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3lgcpl9xa5abqu61iy7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8uSwA3th--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3lgcpl9xa5abqu61iy7m.png" alt="Allow users to only read files that were uploaded by themselves" width="880" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7tDhqPA1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29oz39fmy5ey9etvy4w7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7tDhqPA1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/29oz39fmy5ey9etvy4w7.png" alt="Calling a webhook every time a new file is uploaded" width="880" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The service, written in Node.js, served us well for quite some time, but as the company grew and the number of users increased, performance at scale became a concern. While Node.js may be great for many reasons, performance and scalability aren't among them.&lt;/p&gt;

&lt;p&gt;For those short on time: the goal of this blog post is to showcase the gains we achieved across all metrics by rewriting a Node.js microservice in Golang, including a &lt;strong&gt;5x increase in the number of requests served while halving memory consumption&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deciding to rewrite the service
&lt;/h2&gt;

&lt;p&gt;As the need to scale became more pressing, we decided to rewrite the service in Go. The reasons for choosing Go were many:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Its dependency management and build systems make it a perfect fit for the cloud.&lt;/li&gt;
&lt;li&gt;The Nhost team had plenty of experience with Golang.&lt;/li&gt;
&lt;li&gt;Even though it is a verbose language, especially compared to Node.js, it is easy to learn and fast to write.&lt;/li&gt;
&lt;li&gt;It is known to be very performant.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are interested in learning more about the language and its promises, ACM has a &lt;a href="https://cacm.acm.org/magazines/2022/5/260357-the-go-programming-language-and-environment/fulltext"&gt;good article about it&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rewriting the service
&lt;/h2&gt;

&lt;p&gt;The actual rewrite was quite uneventful. Writing microservices like this is a well-understood problem and, while the service is very useful and convenient, it doesn't do anything too complex. Hasura Storage's innovation and usefulness come from bridging two great services that our users love, S3 and Hasura, not from doing anything whimsical.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarking the service
&lt;/h2&gt;

&lt;p&gt;When the rewrite was completed, we ran benchmarks against both the Node.js and Golang versions of the service. To do so we used &lt;a href="https://k6.io/"&gt;k6&lt;/a&gt; and designed the following test:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When a test starts, it ramps up its number of workers from 1 to TARGET during the first 10 seconds.&lt;/li&gt;
&lt;li&gt;Then it runs for 60 seconds more before winding down.&lt;/li&gt;
&lt;li&gt;Workers query the service as fast as possible.&lt;/li&gt;
&lt;li&gt;We run the following tests:

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;download_small_file&lt;/code&gt; (100 workers) - Download a 100 KB file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;download_medium_file&lt;/code&gt; (100 workers) - Download a 5 MB file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;download_large_file&lt;/code&gt; (50 workers) - Download a 45 MB file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;download_image&lt;/code&gt; (100 workers) - Download a 5.3 MB image&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;download_image_manipulated&lt;/code&gt; (10 workers) - Download the same image, but resize it and apply some blur on the fly&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;CPU was limited to 10% of the overall system.&lt;/li&gt;
&lt;li&gt;RAM was unlimited.&lt;/li&gt;
&lt;/ol&gt;
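&lt;p&gt;The test design above can be sketched as a k6 script. This is an illustrative sketch, not our actual benchmark code; the target URL, file path, and worker count are placeholders:&lt;/p&gt;

```javascript
// Illustrative k6 script mirroring the test design above (run with `k6 run script.js`).
// The target URL below is a placeholder, not the actual benchmark endpoint.
import http from 'k6/http';

export const options = {
  stages: [
    { duration: '10s', target: 100 }, // ramp up from 1 to TARGET workers
    { duration: '60s', target: 100 }, // hold for 60 more seconds
    { duration: '5s', target: 0 },    // wind down
  ],
};

export default function () {
  // Each worker queries the service as fast as possible (no sleep between requests).
  http.get('http://localhost:8000/v1/storage/files/some-file-id');
}
```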

&lt;p&gt;Before looking at the conclusions, I want to clarify that the numbers we are about to see shouldn't be taken at face value. The system used for the benchmark had its CPU allowance quite limited, as we wanted to stress both services and see how they behaved under pressure. So what we are interested in isn't the raw numbers, but the difference between the two versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Number of requests
&lt;/h2&gt;

&lt;p&gt;We are going to start by looking at the number of requests, as this is the main metric that dictates whether the other metrics make sense or not (i.e., decreasing RAM while serving fewer requests might not be desirable).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--o0rUSiFk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezckaptu0s7rmkj4mrhu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--o0rUSiFk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezckaptu0s7rmkj4mrhu.png" alt="Image description" width="730" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the number of requests we were able to serve under each scenario improved substantially, especially for smaller files (5x).&lt;/p&gt;

&lt;h2&gt;
  
  
  RAM consumption
&lt;/h2&gt;

&lt;p&gt;RAM is a limited resource, and it is not easy to throttle when a system is reaching its limits. Traditional systems have relied on swapping to disk, but this has a dramatic impact on overall performance, so it is not an option in modern systems. Instead, modern systems rely on restarting the service when a threshold is reached. This is why peak memory usage under different scenarios is important: if you reach a certain value, your service is restarted, and if the service is restarted, it can't serve requests. Below you can see peak usage under the different scenarios described above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c44La38k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t3sd5p2dz1oyomx4gqt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c44La38k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t3sd5p2dz1oyomx4gqt3.png" alt="Image description" width="730" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we managed to improve this metric considerably under all scenarios, especially when downloading large files. If you keep in mind that we were also serving up to 5x more requests, this is a very good result.&lt;/p&gt;
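&lt;p&gt;On Kubernetes, for example, the restart threshold mentioned earlier is expressed as a container memory limit. A minimal, hypothetical sketch (the values are placeholders, not our production settings):&lt;/p&gt;

```yaml
# Hypothetical container resource settings; exceeding the memory limit
# gets the container OOM-killed and restarted by Kubernetes.
resources:
  requests:
    memory: "128Mi"
  limits:
    memory: "256Mi"
```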

&lt;h2&gt;
  
  
  Response times
&lt;/h2&gt;

&lt;p&gt;Another important metric is response time. Here we are looking at two figures: the minimum response time, which tells us how the system responds when it is not under pressure, and the P95, which tells us the response time most users stayed at or below (including when the system was under pressure).&lt;/p&gt;
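&lt;p&gt;As a quick illustration of what P95 means, here is a hypothetical helper (not part of Hasura Storage) that computes the 95th percentile of a set of response times using the nearest-rank method:&lt;/p&gt;

```javascript
// Hypothetical helper (not part of Hasura Storage) illustrating the P95 metric:
// the response time that 95% of requests stayed at or below (nearest-rank method).
function p95(responseTimesMs) {
  const sorted = [...responseTimesMs].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length); // nearest-rank percentile
  return sorted[rank - 1];
}

// For 100 samples of 1..100 ms, P95 is the 95th smallest value: 95 ms.
const samples = Array.from({ length: 100 }, (_, i) => i + 1);
console.log(p95(samples)); // 95
```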

&lt;p&gt;Let's start by looking at the minimum response time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fk-e_MiP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73zagomzxwn37uurinuu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fk-e_MiP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/73zagomzxwn37uurinuu.png" alt="Image description" width="720" height="888"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is hard to see in the test case &lt;code&gt;download_small_file&lt;/code&gt;, but we improved the response time in that scenario from 29ms in the Node.js case to 7ms in the Golang case. This is a 4x improvement that we see across the rest of the scenarios, except &lt;code&gt;download_image_manipulated&lt;/code&gt;, where we see around a 2x improvement (we will talk about this scenario later).&lt;/p&gt;

&lt;p&gt;And now let's look at the P95:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qQbjuKik--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yupfqxo3eq5uc1ks4gjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qQbjuKik--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yupfqxo3eq5uc1ks4gjg.png" alt="Image description" width="772" height="890"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here we also see a 4x improvement for most cases, with the exception of &lt;code&gt;download_image_manipulated&lt;/code&gt; and &lt;code&gt;download_large_file&lt;/code&gt;, where we see substantial improvements, but not as dramatic as the rest. This makes sense, as downloading large files is going to be network I/O bound while manipulating images is going to be CPU bound, but even then we are happy to see this substantial improvement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Manipulating images
&lt;/h3&gt;

&lt;p&gt;I wanted to single out the case &lt;code&gt;download_image_manipulated&lt;/code&gt; as it is an interesting one. For performance reasons, both versions of Hasura Storage rely on a C library called &lt;a href="https://www.libvips.org"&gt;libvips&lt;/a&gt;, which is why Node.js performs quite nicely here despite the CPU limitations we introduced. However, it is nice to see that even though both services use the same underlying C library, we managed to improve all metrics significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the service to production
&lt;/h2&gt;

&lt;p&gt;After the rewrite was completed and tested, we deployed the service to production and could see the benefits almost immediately. Below you can see RAM usage in one of the nodes of our cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--frK0CDf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/abe6cumcta4egv8auk94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--frK0CDf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/abe6cumcta4egv8auk94.png" alt="Image description" width="880" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, we reduced our memory footprint by almost 40%, a significant improvement that will let us serve more users and traffic without increasing our overall infrastructure bill.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We decided to rewrite &lt;a href="https://github.com/nhost/hasura-storage"&gt;the service&lt;/a&gt; to improve its performance metrics, and after benchmarking both versions side by side we can unequivocally claim that we improved all metrics significantly. We expect to serve more requests while utilizing fewer resources, while also improving response times for our users, which I am sure they will appreciate.&lt;/p&gt;

</description>
      <category>node</category>
      <category>go</category>
      <category>opensource</category>
      <category>hasura</category>
    </item>
    <item>
      <title>How to Add Authentication to Hasura</title>
      <dc:creator>Johan Eliasson</dc:creator>
      <pubDate>Mon, 21 Feb 2022 09:45:18 +0000</pubDate>
      <link>https://dev.to/nhost/how-to-add-authentication-to-hasura-5529</link>
      <guid>https://dev.to/nhost/how-to-add-authentication-to-hasura-5529</guid>
      <description>&lt;p&gt;Hasura and GraphQL are amazing, but setting up authentication to work with Hasura can be difficult.&lt;/p&gt;

&lt;p&gt;In this article you’ll learn &lt;strong&gt;how to add authentication to Hasura&lt;/strong&gt;, so you can sign in users and start using Hasura permissions in your GraphQL API.&lt;/p&gt;

&lt;p&gt;Before we continue, let’s just set the context.&lt;/p&gt;

&lt;p&gt;Hasura has &lt;strong&gt;Authorization&lt;/strong&gt; (which is different from &lt;em&gt;authentication&lt;/em&gt;) built in to handle permissions and access control for your GraphQL API.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;Authentication&lt;/strong&gt;, which handles users, sign-in flows, tokens, etc., is not handled by Hasura. Instead, you need your own authentication service that handles all that.&lt;/p&gt;

&lt;p&gt;Building such a service is not easy.&lt;/p&gt;

&lt;p&gt;But we have a solution for you: Hasura Auth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hasura Auth is an open-source&lt;/strong&gt; service to handle authentication with Hasura. With Hasura Auth you can sign in users and manage roles. Hasura Auth works specifically for Hasura and its permission system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To get started quickly&lt;/strong&gt; with authentication for Hasura, we recommend creating a &lt;a href="http://nhost.io/" rel="noopener noreferrer"&gt;Nhost account&lt;/a&gt; and a &lt;a href="http://nhost.io/" rel="noopener noreferrer"&gt;Nhost app&lt;/a&gt;. At Nhost, we automatically provision a backend for you with Postgres, Hasura, and Hasura Auth. There is no need to worry about configuration or infrastructure; instead, you can focus on building your app and providing value to your users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hasura Auth is Open Source
&lt;/h2&gt;

&lt;p&gt;Hasura Auth is fully open source and is specifically created to handle authentication for Hasura. It’s built with TypeScript.&lt;/p&gt;

&lt;p&gt;The code is available at &lt;a href="https://github.com/nhost/hasura-auth" rel="noopener noreferrer"&gt;https://github.com/nhost/hasura-auth&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5garcnjl2qmj7tdnzbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5garcnjl2qmj7tdnzbk.png" alt="Hasura Auth on GitHub"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Users in Your Database
&lt;/h2&gt;

&lt;p&gt;Hasura Auth stores your users in your database. Hasura Auth uses its own &lt;code&gt;auth&lt;/code&gt; schema with tables related to authentication, such as &lt;code&gt;auth.users&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Users are stored in the &lt;code&gt;auth.users&lt;/code&gt; table. If the user signs in with an email and password the password is hashed using &lt;a href="https://en.wikipedia.org/wiki/Bcrypt" rel="noopener noreferrer"&gt;bcrypt&lt;/a&gt;. However, Hasura Auth has support for even more sign-in methods, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email and Password&lt;/li&gt;
&lt;li&gt;Magic Link&lt;/li&gt;
&lt;li&gt;SMS&lt;/li&gt;
&lt;li&gt;Social Providers such as Facebook, Google, GitHub, and many more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Features of Hasura Auth
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;🧑‍🤝‍🧑 Users are stored in Postgres and accessed via GraphQL.&lt;/li&gt;
&lt;li&gt;🔑 Multiple sign-in methods.&lt;/li&gt;
&lt;li&gt;✨ Integrates with GraphQL and Hasura Permissions.&lt;/li&gt;
&lt;li&gt;🔐 JWT tokens and Refresh Tokens.&lt;/li&gt;
&lt;li&gt;✉️ Emails sent on various operations.&lt;/li&gt;
&lt;li&gt;✅ Optional checking for Pwned Passwords.&lt;/li&gt;
&lt;li&gt;👨‍💻 Written 100% in TypeScript.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Easy to use JavaScript SDK
&lt;/h2&gt;

&lt;p&gt;Hasura Auth is part of the Nhost backend stack, and Nhost comes with an easy-to-use JavaScript SDK written in TypeScript.&lt;/p&gt;

&lt;p&gt;This is how you use the Nhost JavaScript client.&lt;/p&gt;

&lt;p&gt;First initialize the Nhost client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;NhostClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@nhost/nhost-js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;nhost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;NhostClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;backendUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;nhost-backend-url&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then sign up a new user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;nhost&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;auth&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;signUp&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;elon@musk.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;spacex-to-mars&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Elon Musk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. It’s that simple!&lt;/p&gt;
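&lt;p&gt;Signing in the user later follows the same pattern. This is a sketch based on the client above; the exact method name and options shape may vary between SDK versions, so check the SDK documentation linked below:&lt;/p&gt;

```javascript
// Sketch only: method shape may vary across SDK versions.
import { NhostClient } from '@nhost/nhost-js'

const nhost = new NhostClient({
  backendUrl: '<nhost-backend-url>' // placeholder, as above
})

// Sign in the user created earlier.
const { session, error } = await nhost.auth.signIn({
  email: 'elon@musk.com',
  password: 'spacex-to-mars'
})

if (error) {
  console.error(error)
} else {
  console.log(session.user.displayName)
}
```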

&lt;p&gt;Here’s a demo with authentication with Nhost and our JavaScript client:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/v7xRy3OXyjE"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Read the full &lt;a href="https://docs.nhost.io/reference/sdk/authentication" rel="noopener noreferrer"&gt;JavaScript SDK documentation&lt;/a&gt; if you want to learn more.&lt;/p&gt;

&lt;p&gt;You can also use the JavaScript client together with &lt;a href="https://docs.nhost.io/reference/supporting-libraries/react-auth" rel="noopener noreferrer"&gt;&lt;code&gt;@nhost/react-auth&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://docs.nhost.io/reference/supporting-libraries/react-apollo" rel="noopener noreferrer"&gt;&lt;code&gt;@nhost/react-apollo&lt;/code&gt;&lt;/a&gt; if you plan to use React and the Apollo GraphQL client. Support for more GraphQL clients is coming soon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Get started
&lt;/h2&gt;

&lt;p&gt;You have two options to get started with Hasura Auth to add authentication to Hasura:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Nhost (recommended)&lt;/li&gt;
&lt;li&gt;Self-hosting&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Get started with Nhost (recommended)
&lt;/h3&gt;

&lt;p&gt;Create an account and a new &lt;a href="http://nhost.io/" rel="noopener noreferrer"&gt;Nhost&lt;/a&gt; app. That’s it 🤯 (Yes, we try to make it easy to build apps.)&lt;/p&gt;

&lt;p&gt;When you create a new Nhost app we manage all infrastructure and configuration so you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Postgres&lt;/li&gt;
&lt;li&gt;Hasura&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hasura Auth&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Hasura Storage&lt;/li&gt;
&lt;li&gt;Serverless Functions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a more detailed guide, read our &lt;a href="https://docs.nhost.io/get-started/quick-start" rel="noopener noreferrer"&gt;Quick start guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get started with self-hosting
&lt;/h3&gt;

&lt;p&gt;Hasura Auth is open-source and available as a Docker image (&lt;code&gt;nhost/hasura-auth&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Here’s an example of how you can combine Postgres, Hasura, and Hasura Auth using Docker Compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.6'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./docker/data/db:/var/lib/postgresql/data&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./docker/initdb.d:/docker-entrypoint-initdb.d:ro&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${POSTGRES_PASSWORD:-secretpgpassword}&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;5432:5432'&lt;/span&gt;
  &lt;span class="na"&gt;graphql-engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hasura/graphql-engine:v2.1.1&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_DATABASE_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres://postgres:${POSTGRES_PASSWORD:-secretpgpassword}@postgres:5432/postgres&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_JWT_SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${HASURA_GRAPHQL_JWT_SECRET}&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_ADMIN_SECRET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${HASURA_GRAPHQL_ADMIN_SECRET}&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_UNAUTHORIZED_ROLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;public&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_LOG_LEVEL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;debug&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_ENABLE_CONSOLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8080:8080'&lt;/span&gt;
  &lt;span class="na"&gt;hasura-auth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nhost/hasura-auth:latest&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;graphql-engine&lt;/span&gt;
    &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.env&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_DATABASE_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres://postgres:${POSTGRES_PASSWORD:-secretpgpassword}@postgres:5432/postgres&lt;/span&gt;
      &lt;span class="na"&gt;HASURA_GRAPHQL_GRAPHQL_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://graphql-engine:8080/v1/graphql&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;4000:4000'&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./docker/data/mailhog:/maildir&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this example, please use the &lt;code&gt;.env&lt;/code&gt; file &lt;a href="https://github.com/nhost/hasura-auth/blob/main/.env.example" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And start everything with: &lt;code&gt;docker-compose up -d&lt;/code&gt;.&lt;/p&gt;
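&lt;p&gt;Once the containers are up, you can smoke-test the stack. Both Hasura and Hasura Auth expose a &lt;code&gt;/healthz&lt;/code&gt; endpoint (ports as mapped in the compose file above):&lt;/p&gt;

```shell
# Quick smoke test once the stack is running
curl http://localhost:8080/healthz   # Hasura GraphQL engine
curl http://localhost:4000/healthz   # Hasura Auth
```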

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We love open source at Nhost. That’s why we’ve open sourced &lt;a href="https://github.com/nhost/hasura-auth" rel="noopener noreferrer"&gt;Hasura Auth&lt;/a&gt; to make authentication easy with Hasura and make it available for everyone.&lt;/p&gt;

&lt;p&gt;Hasura Auth can be used with Nhost if you don’t want to manage infrastructure and configuration yourself, or as a self-hosted alternative with Docker.&lt;/p&gt;

&lt;p&gt;If you want to support our open source work, please give us a star on GitHub: &lt;a href="https://github.com/nhost/nhost" rel="noopener noreferrer"&gt;https://github.com/nhost/nhost&lt;/a&gt;. It would mean a lot. Thanks!&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>typescript</category>
      <category>opensource</category>
      <category>database</category>
    </item>
    <item>
      <title>Nhost CLI: From Zero to Production</title>
      <dc:creator>Vadim Smirnov</dc:creator>
      <pubDate>Fri, 18 Feb 2022 13:20:58 +0000</pubDate>
      <link>https://dev.to/nhost/nhost-cli-from-zero-to-production-220h</link>
      <guid>https://dev.to/nhost/nhost-cli-from-zero-to-production-220h</guid>
      <description>&lt;p&gt;In the previous tutorials, that are covered in the &lt;a href="https://docs.nhost.io/get-started" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, we tested various parts of Nhost, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Database&lt;/li&gt;
&lt;li&gt;GraphQL API&lt;/li&gt;
&lt;li&gt;Permissions&lt;/li&gt;
&lt;li&gt;JavaScript SDK&lt;/li&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the changes we made to our database and API happened directly in production for our Nhost app.&lt;/p&gt;

&lt;p&gt;Making changes directly in production is not ideal, because you might break things, which would affect all users of your app.&lt;/p&gt;

&lt;p&gt;Instead, it’s recommended to make changes and test your app locally before deploying those changes to production.&lt;/p&gt;

&lt;p&gt;To make changes locally, we need a complete Nhost app running locally, which is exactly what the Nhost CLI provides.&lt;/p&gt;

&lt;p&gt;The Nhost CLI replicates your production application in a local environment, so you can make changes and test your code before deploying them to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommended workflow with Nhost
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Develop locally using the Nhost CLI.&lt;/li&gt;
&lt;li&gt;Push changes to GitHub.&lt;/li&gt;
&lt;li&gt;Nhost automatically applies changes to production.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What you’ll learn in this guide:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use the Nhost CLI to create a local environment&lt;/li&gt;
&lt;li&gt;Connect a GitHub repository with a Nhost app&lt;/li&gt;
&lt;li&gt;Deploy local changes to production&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup the recommended workflow with Nhost
&lt;/h2&gt;

&lt;p&gt;What follows is a detailed tutorial on how to set up Nhost for this workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Nhost App
&lt;/h3&gt;

&lt;p&gt;Create a &lt;strong&gt;new Nhost app&lt;/strong&gt; for this tutorial.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s important that you create a &lt;strong&gt;new&lt;/strong&gt; Nhost app for this guide instead of reusing an old Nhost app because we want to start with a clean Nhost app.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87lkr21amfmvha0i5cdf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87lkr21amfmvha0i5cdf.png" alt="Create new app"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create new GitHub Repository
&lt;/h3&gt;

&lt;p&gt;Create a new GitHub repository for your new Nhost app. The repo can be either private or public.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28cq9yw951696ky0xzvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28cq9yw951696ky0xzvf.png" alt="Create new repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect GitHub Repository to Nhost App
&lt;/h2&gt;

&lt;p&gt;In the Nhost Console, go to the dashboard of your Nhost app and click &lt;strong&gt;Connect to GitHub&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf4sjy5ul7wixlcsyk98.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flf4sjy5ul7wixlcsyk98.gif" alt="Connect Github Repo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install the Nhost CLI
&lt;/h2&gt;

&lt;p&gt;Install the Nhost CLI using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;

&lt;span class="n"&gt;sudo&lt;/span&gt; &lt;span class="n"&gt;curl&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;L&lt;/span&gt; &lt;span class="n"&gt;https&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;githubusercontent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;com&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;nhost&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;cli&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sh&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="n"&gt;bash&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Initialize a new Nhost App locally:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nhost init -n "nhost-example-app" &amp;amp;&amp;amp; cd nhost-example-app
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;And initialize the GitHub repository in the same folder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"# nhost-example-app"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; README.md
git init
git add README.md
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"first commit"&lt;/span&gt;
git branch &lt;span class="nt"&gt;-M&lt;/span&gt; main
git remote add origin https://github.com/[github-username]/nhost-example-app.git
git push &lt;span class="nt"&gt;-u&lt;/span&gt; origin main


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now go back to the &lt;strong&gt;Nhost Console&lt;/strong&gt; and click &lt;strong&gt;Deployments&lt;/strong&gt;. You just made a new deployment to your Nhost app!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5a1vq5vdcd52vugvq9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5a1vq5vdcd52vugvq9f.png" alt="Deployments tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you click on the deployment you can see that nothing was really deployed. That’s because we just made a change to the README file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx59pqyd1s56m49gn33e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgx59pqyd1s56m49gn33e.png" alt="Deployments details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s do some backend changes!&lt;/p&gt;

&lt;h2&gt;
  
  
  Local changes
&lt;/h2&gt;

&lt;p&gt;Start Nhost locally:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nhost dev
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;💡 Make sure you have &lt;a href="https://www.docker.com/get-started" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; installed on your computer. It’s required for Nhost to work.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;nhost dev&lt;/code&gt; command automatically starts a complete Nhost environment locally on your computer, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Postgres&lt;/li&gt;
&lt;li&gt;Hasura&lt;/li&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;Serverless Functions&lt;/li&gt;
&lt;li&gt;Mailhog&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You use this local environment to make changes and test them before you deploy anything to production.&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;nhost dev&lt;/code&gt; also starts the Hasura Console.&lt;/p&gt;

&lt;p&gt;💡 It’s important that you make your changes through the Hasura Console that is started automatically. This way, changes are automatically tracked for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsw5x251plh9ix55hgs8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsw5x251plh9ix55hgs8r.png" alt="Hasura Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Hasura Console, create a new table &lt;code&gt;customers&lt;/code&gt; with two columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;id&lt;/li&gt;
&lt;li&gt;name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq665cqg7tzunugmmmbn6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq665cqg7tzunugmmmbn6.gif" alt="Hasura Create Customers Table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we created the &lt;code&gt;customers&lt;/code&gt; table, a migration was also created automatically under &lt;code&gt;nhost/migrations/default&lt;/code&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-la&lt;/span&gt; nhost/migrations/default
total 0
drwxr-xr-x  3 eli  staff   96 Feb  7 16:19 &lt;span class="nb"&gt;.&lt;/span&gt;
drwxr-xr-x  3 eli  staff   96 Feb  7 16:19 ..
drwxr-xr-x  4 eli  staff  128 Feb  7 16:19 1644247179684_create_table_public_customers


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This database migration has only been applied locally, meaning you created the &lt;code&gt;customers&lt;/code&gt; table locally but it does not (yet) exist in production.&lt;/p&gt;

&lt;p&gt;To apply the local changes to production, we need to commit them and push them to GitHub. Nhost will then automatically pick up the changes in the repository and apply them.&lt;/p&gt;

&lt;p&gt;💡 You can commit and push files in another terminal while still having &lt;code&gt;nhost dev&lt;/code&gt; running.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add -A
git commit -m "Initialized Nhost and added a customers table"
git push
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Head over to the &lt;strong&gt;Deployments&lt;/strong&gt; tab in the &lt;strong&gt;Nhost console&lt;/strong&gt; to see the deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n6r3j2n34wt2jl874jo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0n6r3j2n34wt2jl874jo.png" alt="Deployments tab after changes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the deployment finishes the &lt;code&gt;customers&lt;/code&gt; table is created in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7kre000hos2nyodpagk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7kre000hos2nyodpagk.png" alt="Customers table in Hasura Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ve now completed the recommended workflow with Nhost:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Develop locally using the Nhost CLI.&lt;/li&gt;
&lt;li&gt;Push changes to GitHub.&lt;/li&gt;
&lt;li&gt;Nhost deploys changes to production.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Apply metadata and Serverless Functions
&lt;/h2&gt;

&lt;p&gt;In the previous section, we only created a new table: &lt;code&gt;customers&lt;/code&gt;. Using the CLI, you can also make changes to other parts of your backend.&lt;/p&gt;

&lt;p&gt;There are three things the CLI and the GitHub integration track and apply to production:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Database migrations&lt;/li&gt;
&lt;li&gt;Hasura Metadata&lt;/li&gt;
&lt;li&gt;Serverless Functions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this section, let’s make one change to the Hasura metadata and create one serverless function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hasura Metadata
&lt;/h3&gt;

&lt;p&gt;We’ll add permissions to the &lt;code&gt;users&lt;/code&gt; table to make sure users can only see their own data. For this, go to the &lt;code&gt;auth&lt;/code&gt; schema and click on the &lt;code&gt;users&lt;/code&gt; table. Then click on &lt;strong&gt;Permissions&lt;/strong&gt;, enter a new role &lt;strong&gt;user&lt;/strong&gt;, and create a new &lt;strong&gt;select&lt;/strong&gt; permission for that role.&lt;/p&gt;

&lt;p&gt;Create the permission &lt;strong&gt;with custom check&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"_eq"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"X-Hasura-User-Id"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Select the following columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;id&lt;/li&gt;
&lt;li&gt;created_at&lt;/li&gt;
&lt;li&gt;display_name&lt;/li&gt;
&lt;li&gt;avatar_url&lt;/li&gt;
&lt;li&gt;email&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then click &lt;strong&gt;Save permissions&lt;/strong&gt;.&lt;/p&gt;
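Conceptually, the custom check compares each row’s id to the X-Hasura-User-Id session variable, so a user only ever sees their own row. A minimal sketch of that filter logic (illustrative only; Hasura evaluates the real check inside Postgres):

```typescript
// Sketch of what { "id": { "_eq": "X-Hasura-User-Id" } } does:
// a row is visible only when its id equals the requesting user's id.
// Illustrative only; Hasura applies the actual check server-side.
type UserRow = { id: string; display_name: string }

function visibleRows(rows: UserRow[], sessionUserId: string): UserRow[] {
  return rows.filter((row) => row.id === sessionUserId)
}

const rows: UserRow[] = [
  { id: 'u1', display_name: 'Alice' },
  { id: 'u2', display_name: 'Bob' },
]

console.log(visibleRows(rows, 'u1')) // only Alice's row
```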

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyvp0jolwp6qsrnc33pe.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyvp0jolwp6qsrnc33pe.gif" alt="Hasura User Permissions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s run &lt;code&gt;git status&lt;/code&gt; again to confirm that the permission changes were tracked locally in your git repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j8tb7xj4m27va7p45he.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j8tb7xj4m27va7p45he.png" alt="Git status"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now commit this change:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git add &lt;span class="nt"&gt;-A&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"added permission for users"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let’s create a serverless function before we push all changes to GitHub so Nhost can deploy our changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless Function
&lt;/h3&gt;

&lt;p&gt;A serverless function is a piece of code written in JavaScript or TypeScript that takes an HTTP request and returns a response.&lt;/p&gt;

&lt;p&gt;Here’s an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { Request, Response } from 'express'

export default (req: Request, res: Response) =&amp;gt; {
  res.status(200).send(`Hello ${req.query.name}!`)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Serverless functions are placed in the &lt;code&gt;functions/&lt;/code&gt; folder of your repository. Every file will become its own endpoint.&lt;/p&gt;
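The file-to-endpoint mapping can be sketched as a tiny helper (illustrative only; the actual routing is done by Nhost, and the second file path below is a hypothetical example):

```typescript
// Illustrative sketch: how a file under functions/ maps to its endpoint path.
// Nhost performs the real routing; this is only for intuition.
function endpointFor(filePath: string): string {
  return (
    '/v1/functions/' +
    filePath.replace(/^functions\//, '').replace(/\.(ts|js)$/, '')
  )
}

console.log(endpointFor('functions/time.ts')) // "/v1/functions/time"
```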

&lt;p&gt;Before we create our serverless function we’ll install &lt;code&gt;express&lt;/code&gt;, which is a requirement for serverless functions to work.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npm &lt;span class="nb"&gt;install &lt;/span&gt;express
&lt;span class="c"&gt;# or with yarn&lt;/span&gt;
yarn add express


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We’ll use TypeScript, so we’ll also install two type definition packages:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-D&lt;/span&gt; @types/node @types/express
&lt;span class="c"&gt;# or with yarn&lt;/span&gt;
yarn add &lt;span class="nt"&gt;-D&lt;/span&gt; @types/node @types/express


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then we’ll create a file &lt;code&gt;functions/time.ts&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the file &lt;code&gt;time.ts&lt;/code&gt; we’ll add the following code to create our serverless function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;import { Request, Response } from 'express';

export default (req: Request, res: Response) =&amp;gt; {
  return res
    .status(200)
    .send(`Hello ${req.query.name}! It's now: ${new Date().toUTCString()}`);
};
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;We can now test the function locally. Locally, the backend URL is &lt;code&gt;http://localhost:1337&lt;/code&gt;. Functions are under &lt;code&gt;/v1/functions&lt;/code&gt;. And every function’s path and filename becomes an API endpoint.&lt;/p&gt;

&lt;p&gt;This means our function &lt;code&gt;functions/time.ts&lt;/code&gt; is at &lt;code&gt;http://localhost:1337/v1/functions/time&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let’s use curl to test our new function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:1337/v1/functions/time
Hello undefined! It's now: Sun, 06 Feb 2022 17:44:45 GMT
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;And with a query parameter with our name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:1337/v1/functions/time\?name\=Johan
Hello Johan! It's now: Sun, 06 Feb 2022 17:44:48 GMT
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Again, let’s use &lt;code&gt;git status&lt;/code&gt; to see the changes we made while creating our serverless function.&lt;/p&gt;

&lt;p&gt;Now let’s commit the changes and push them to GitHub.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

git add &lt;span class="nt"&gt;-A&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"added serverless function"&lt;/span&gt;
git push


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In the Nhost Console, click on the new deployment to see its details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85lhjx2ldu760rchv66w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85lhjx2ldu760rchv66w.png" alt="Deployments details for function"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After Nhost has finished deploying your changes, we can test them in production. First, let’s confirm that the user permissions are applied.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u08ql6hu7tzvrkai1j8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u08ql6hu7tzvrkai1j8.png" alt="Hasura Console permissions table"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, let’s confirm that the serverless function was deployed. Again, we’ll use curl:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl https://your-backend-url.nhost.run/v1/functions/time&lt;span class="se"&gt;\?&lt;/span&gt;name&lt;span class="se"&gt;\=&lt;/span&gt;Johan


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t6i3f4s8o92bowcqbir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5t6i3f4s8o92bowcqbir.png" alt="Serverless Function test"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this tutorial, we have installed the Nhost CLI and created a local Nhost environment to do local development and testing.&lt;/p&gt;

&lt;p&gt;In the local environment, we’ve made changes to our database, to Hasura’s metadata, and created a serverless function.&lt;/p&gt;

&lt;p&gt;We’ve connected a GitHub repository and pushed our changes to GitHub.&lt;/p&gt;

&lt;p&gt;We’ve seen Nhost automatically deploying our changes and we’ve verified that the changes were applied.&lt;/p&gt;

&lt;p&gt;In summary, we’ve set up a productive environment using the recommended Nhost workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Develop locally using the Nhost CLI.&lt;/li&gt;
&lt;li&gt;Push changes to GitHub.&lt;/li&gt;
&lt;li&gt;Nhost deploys changes to production.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In addition to all that, the Nhost team is always happy to support you with any questions you might have on &lt;a href="https://discord.gg/dHzgYs7c97" rel="noopener noreferrer"&gt;Discord&lt;/a&gt; and &lt;a href="https://github.com/nhost/nhost" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>serverless</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Nhost v2 - The beginning of something big</title>
      <dc:creator>Johan Eliasson</dc:creator>
      <pubDate>Tue, 25 Jan 2022 17:16:07 +0000</pubDate>
      <link>https://dev.to/nhost/nhost-v2-the-beginning-of-something-big-1mc7</link>
      <guid>https://dev.to/nhost/nhost-v2-the-beginning-of-something-big-1mc7</guid>
      <description>&lt;p&gt;Today is a big day in the history of Nhost. After months of hard work, we're now launching Nhost v2 into public beta.&lt;/p&gt;

&lt;p&gt;For the last two years, we've been on a mission to build the backend for developers. It has taken us a little longer than we originally planned, which is apparently not unheard of in IT projects. We started out thinking we would only update a few things, but the deeper we dug, the more we realized that to stay competitive we would eventually have to restructure both our offered services and our infrastructure.&lt;/p&gt;

&lt;p&gt;So, last summer, we decided to go all-in on a new version of Nhost, with an architecture and infrastructure that would enable us to scale our customers' apps, both in terms of actual performance and in terms of developer productivity.&lt;/p&gt;

&lt;p&gt;Yes, we had to do some groundwork, and yes it would take some time. But it's done!&lt;/p&gt;

&lt;p&gt;We're now releasing Nhost v2 and we’ve never been in a better position to help developers build apps their users love.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Nhost?
&lt;/h2&gt;

&lt;p&gt;If this is the first time you’re reading about Nhost, here are three descriptions of what Nhost is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nhost is for the backend what Netlify and Vercel are for the frontend.&lt;/li&gt;
&lt;li&gt;Nhost is an open-source backend to build apps users love.&lt;/li&gt;
&lt;li&gt;Nhost is a serverless backend for web and mobile apps.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's get back to what's new with Nhost v2. But first...&lt;/p&gt;

&lt;h2&gt;
  
  
  What's the same with Nhost v2?
&lt;/h2&gt;

&lt;p&gt;All Nhost apps (previously called projects) still consist of Postgres and Hasura, two amazing open-source projects. We remain committed to offering an open data model with no vendor lock-in for developers.&lt;/p&gt;

&lt;p&gt;Generally, we’re still providing a pre-configured backend to build web and mobile apps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🐘 SQL Database&lt;/li&gt;
&lt;li&gt;🌐 GraphQL API&lt;/li&gt;
&lt;li&gt;🔒 Authentication&lt;/li&gt;
&lt;li&gt;🗄️ File Storage&lt;/li&gt;
&lt;li&gt;⚡ Serverless Functions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But we’ve made some updates on pricing, design, services, and infrastructure.&lt;/p&gt;

&lt;p&gt;Let’s start with something many have waited for: a free tier!&lt;/p&gt;

&lt;h2&gt;
  
  
  What's new with Nhost v2?
&lt;/h2&gt;

&lt;p&gt;Rather watch me show you what's new? Watch this video:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/5WTetOgDGLk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  Free tier
&lt;/h3&gt;

&lt;p&gt;Nhost now comes with a free tier which is perfect for testing, building a side project, or for your next hackathon (something we'll do more of in the future).&lt;/p&gt;

&lt;p&gt;Once you're ready to go to production, we have a new Pro plan. And once your app and business scale up, we’ve got you covered with our new infrastructure (more about that further down in the post).&lt;/p&gt;

&lt;h3&gt;
  
  
  Design
&lt;/h3&gt;

&lt;p&gt;Everything at Nhost has been redesigned. Even our logo had some color adjustments. This means new designs for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Landing page (&lt;a href="https://nhost.io"&gt;https://nhost.io&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Documentation (&lt;a href="https://docs.nhost.io"&gt;https://docs.nhost.io&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Dashboard (&lt;a href="https://app.nhost.io"&gt;https://app.nhost.io&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the new designs, we’re better able to communicate our service, and we’re set up to deliver some awesome updates in the dashboard for 2022.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hasura Backend Plus is replaced by Hasura Auth and Hasura Storage
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5ZO3eus---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71ubc6er5qw67ycj3zsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5ZO3eus---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71ubc6er5qw67ycj3zsw.png" alt="Image description" width="880" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hasura Backend Plus is replaced by Hasura Auth and Hasura Storage. The new authentication service is very similar to Hasura Backend Plus, with one big difference: the users table has been moved to the auth schema and merged with the accounts table. This change was necessary so we can more easily release new features to Hasura Auth, and so customers can track their own migrations separately from Hasura Auth's.&lt;/p&gt;

&lt;p&gt;Storage has been fundamentally redesigned and now uses Hasura Permissions instead of the previous rules engine based on rules.yaml. This makes for a unified permission system across all services of the Nhost stack. We've also added two tables, files and buckets, in a new storage schema where file metadata is stored automatically. This means files and buckets can be treated just like any other data type, and all file metadata is accessible via your GraphQL API.&lt;/p&gt;
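For illustration, file metadata could then be fetched with a plain GraphQL query like any other table. A sketch (the field names below are assumptions for illustration, not taken from the actual Nhost storage schema):

```typescript
// Hypothetical query against the new storage schema's files table.
// Field names are illustrative assumptions, not the documented schema.
const FILES_QUERY = `
  query {
    files {
      id
      name
      size
      bucketId
    }
  }
`

console.log(FILES_QUERY.trim().startsWith('query')) // true
```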

&lt;p&gt;It’s not only our backend services that were updated. Our client-side SDKs are also fresh!&lt;/p&gt;

&lt;h3&gt;
  
  
  SDKs
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SLR3VBjd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmmv06h0eghvjp2ihy8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SLR3VBjd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmmv06h0eghvjp2ihy8o.png" alt="Image description" width="880" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nhost-js-sdk&lt;/code&gt; is replaced by &lt;code&gt;@nhost/nhost-js&lt;/code&gt;. The previous &lt;code&gt;nhost-js-sdk&lt;/code&gt; only had support for authentication and storage, whereas the new &lt;code&gt;@nhost/nhost-js&lt;/code&gt; has support for authentication, storage, GraphQL, and functions.&lt;/p&gt;

&lt;p&gt;We're also working on our Flutter/Dart SDKs, which will be updated soon.&lt;/p&gt;

&lt;p&gt;The SDKs use the app’s domain from Nhost. Previously, we had multiple subdomains per app. Not anymore!&lt;/p&gt;

&lt;h3&gt;
  
  
  Domains
&lt;/h3&gt;

&lt;p&gt;Previously, we had 3 different subdomains per Nhost app. &lt;code&gt;https://xxx-[hasura | backend | api].nhost.app&lt;/code&gt;. Now, there is only a single subdomain per app: &lt;code&gt;https://xxx.nhost.run&lt;/code&gt;. Each service is then scoped under &lt;code&gt;/v1/graphql&lt;/code&gt;, &lt;code&gt;/v1/auth&lt;/code&gt;, &lt;code&gt;/v1/storage&lt;/code&gt;, &lt;code&gt;/v1/functions&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Much simpler!&lt;/p&gt;
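The single-subdomain scheme can be sketched as a tiny helper that derives each service URL from the app's subdomain ("myapp" below is a hypothetical subdomain, not a real app):

```typescript
// Sketch: deriving the per-service URLs from one Nhost v2 subdomain.
function serviceUrls(subdomain: string) {
  const base = `https://${subdomain}.nhost.run/v1`
  return {
    graphql: `${base}/graphql`,
    auth: `${base}/auth`,
    storage: `${base}/storage`,
    functions: `${base}/functions`,
  }
}

console.log(serviceUrls('myapp').graphql) // "https://myapp.nhost.run/v1/graphql"
```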

&lt;p&gt;Another thing that should be simple is having a proper workflow, from local development to production. We’re big fans of Netlify and Vercel and their way of deploying websites from GitHub. To do the same for the backend, we needed an improved version of the CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  CLI
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oN7eKOKV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kh61rg3jl3y8hgitxgso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oN7eKOKV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kh61rg3jl3y8hgitxgso.png" alt="Image description" width="880" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have rebuilt the Nhost CLI in Go for increased stability and performance. The CLI mimics the Nhost experience locally, and we included some productivity features such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Separated environment per branch&lt;/strong&gt; - when you switch git branch, the CLI automatically creates a separate local environment for that branch only.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Serverless functions are automatically rebuilt&lt;/strong&gt; on every incoming request using ESbuild. Rebuilding usually takes less than 200 ms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System environment variables&lt;/strong&gt; are automatically populated with the correct values based on your environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The CLI is perfect for serious apps with a proper workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the CLI to develop locally.&lt;/li&gt;
&lt;li&gt;The CLI automatically tracks database migrations and Hasura metadata.&lt;/li&gt;
&lt;li&gt;Push your code to GitHub.&lt;/li&gt;
&lt;li&gt;Nhost automatically deploys database migrations, Hasura metadata, and serverless functions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you understand why Nhost is for the backend, what Netlify and Vercel are for the frontend.&lt;/p&gt;

&lt;p&gt;Speaking of serverless functions...&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless functions
&lt;/h3&gt;

&lt;p&gt;Previously, we had a “custom API”, which was a type of serverless function hosted on Google Cloud Run using a mix of Docker and Express. This worked OK but had some drawbacks.&lt;/p&gt;

&lt;p&gt;Our new serverless functions are built for, and deployed on, AWS Lambda (just like Netlify's and Vercel's), physically close to your backend in the same AWS region. The new serverless functions also enable simple sharing of code, types, and packages between the frontend and backend. Perfect if you want to use JavaScript or TypeScript for both your frontend and backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;p&gt;Previously, we used Digital Ocean to host Nhost projects. We did this with a combination of virtual machines (Droplets), RabbitMQ, and custom scripts to manage docker-compose.yaml files. This was error-prone and we were not able to easily scale a customer's backend.&lt;/p&gt;

&lt;p&gt;It was time to level up!&lt;/p&gt;

&lt;h3&gt;
  
  
  From DigitalOcean to AWS
&lt;/h3&gt;

&lt;p&gt;From now on we’re using AWS, and we rebuilt our infrastructure from the ground up with a single focus in mind: Scale!&lt;/p&gt;

&lt;p&gt;We’re now able to horizontally scale any Nhost app's GraphQL, Authentication, Storage, and serverless functions.&lt;/p&gt;

&lt;p&gt;The database is still a bottleneck when it comes to scaling. However, we use AWS RDS for all our customers' databases, which enables us to deliver:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic software patching&lt;/li&gt;
&lt;li&gt;Automatic fail-over&lt;/li&gt;
&lt;li&gt;High-availability&lt;/li&gt;
&lt;li&gt;Vertical scaling up to 32 vCPUs and 244 GiB of RAM&lt;/li&gt;
&lt;li&gt;Read replicas&lt;/li&gt;
&lt;li&gt;... and much more!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS also has 26 regions around the world, which helps us deliver backends close to our customers, something that is particularly important given new national data protection laws.&lt;/p&gt;

&lt;p&gt;We’ll continue building out our infrastructure in 2022 to support all types of customers: from students, indie hackers, agencies, and startups to SMBs and enterprise customers.&lt;/p&gt;

&lt;p&gt;In summary, Nhost is ready for scale and our new infrastructure will help us ship new features faster.&lt;/p&gt;

&lt;p&gt;This is the beginning of something big!&lt;/p&gt;




&lt;h2&gt;
  
  
  FAQ for customers using Nhost v1
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How should I get started with Nhost v2?
&lt;/h3&gt;

&lt;p&gt;If you are brand new to Nhost, we recommend reading our Get Started guide in our documentation.&lt;/p&gt;

&lt;p&gt;If you are already hosting your apps on Nhost please read our Migration Guide on how to migrate your old app to Nhost v2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do I need to sign up for a new account for Nhost v2?
&lt;/h3&gt;

&lt;p&gt;If you are new to Nhost just sign up for a new account.&lt;/p&gt;

&lt;p&gt;If you are an existing customer, just log in with your existing account details. We have ported your login details over.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I access my old apps that I created in the previous version of Nhost?
&lt;/h3&gt;

&lt;p&gt;You can still access your old apps at &lt;a href="https://console.nhost.io"&gt;https://console.nhost.io&lt;/a&gt;. However, we urge you to migrate your apps to Nhost v2, as we're planning on shutting down the old platform later this year. More information will follow.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do I get support with Nhost?
&lt;/h3&gt;

&lt;p&gt;For in-depth technical questions, we recommend opening a GitHub Discussion; this keeps the conversation public and searchable for the community.&lt;/p&gt;

&lt;p&gt;For quick questions, please join us on &lt;a href="https://nhost.io/discord"&gt;Discord&lt;/a&gt;. We have an #ask-anything channel where you can get help from us and the community.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
