<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aidan Gee</title>
    <description>The latest articles on DEV Community by Aidan Gee (@aidangee).</description>
    <link>https://dev.to/aidangee</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F656718%2F5f7606f7-05ef-46eb-bfd6-231d8e18aa65.jpeg</url>
      <title>DEV Community: Aidan Gee</title>
      <link>https://dev.to/aidangee</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aidangee"/>
    <language>en</language>
    <item>
      <title>5 Tips to Scale your App</title>
      <dc:creator>Aidan Gee</dc:creator>
      <pubDate>Mon, 18 Oct 2021 08:34:46 +0000</pubDate>
      <link>https://dev.to/aidangee/5-tips-to-scale-your-app-11ck</link>
      <guid>https://dev.to/aidangee/5-tips-to-scale-your-app-11ck</guid>
      <description>&lt;p&gt;Original Post: &lt;a href="https://aidangee.dev/blog/5-tips-to-help-scale-your-app-scale"&gt;https://aidangee.dev/blog/5-tips-to-help-scale-your-app-scale&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;You want your multichannel application to grow and gain more users. But as it grows you will need to handle the increased traffic whilst also balancing the cost of infrastructure. If you land that big brand customer, you don't want them to take down your service for themselves and your other customers (I have seen this first hand!).&lt;/p&gt;

&lt;p&gt;The quick solution is often to vertically or horizontally scale the compute, but costs can quickly get out of control. &lt;/p&gt;

&lt;p&gt;I have worked on both small and extremely high traffic applications and wanted to share some techniques I have seen successfully used.   &lt;/p&gt;


&lt;h2&gt;
  
  
  1. Serverless
&lt;/h2&gt;

&lt;p&gt;Serverless architecture is ideal for allowing an initially small web application to scale over time. Serverless infrastructure is both auto-scaling and pay-per-use, so whilst your app is small you do not pay for idle compute, and when the time comes you have the capacity for significant growth.&lt;/p&gt;

&lt;p&gt;This fits well with applications that have a 'spiky' traffic pattern, for example ticket sales or product launches. In these use cases the user traffic is generally condensed into a short window of time, and serverless compute like Lambda or Cloud Functions allows for the instant scale needed to support this whilst keeping costs low. &lt;/p&gt;

&lt;p&gt;This doesn't just end with compute; there is a growing number of serverless database solutions as well. DynamoDB, MongoDB, FaunaDB and Aurora all offer a serverless way to store your data, meaning there is little to no infrastructure for you to worry about managing across your stack.&lt;/p&gt;

&lt;p&gt;Example of a Serverless Application built on top of AWS:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cuSYsdhP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d1.awsstatic.com/diagrams/Serverless_Architecture.5434f715486a0bdd5786cd1c084cd96efa82438f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cuSYsdhP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://d1.awsstatic.com/diagrams/Serverless_Architecture.5434f715486a0bdd5786cd1c084cd96efa82438f.png" alt="AWS Serverless stack diagram" title="AWS Serverless Stack from https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What to check out: &lt;a href="https://www.serverless.com/"&gt;Serverless Framework&lt;/a&gt;, &lt;a href="https://aws.amazon.com/lambda/"&gt;Lambda&lt;/a&gt;, &lt;a href="https://cloud.google.com/functions"&gt;Cloud Functions&lt;/a&gt;, &lt;a href="https://cloud.google.com/run"&gt;Cloud Run&lt;/a&gt;, &lt;a href="https://workers.cloudflare.com/"&gt;Cloudflare Workers&lt;/a&gt;, &lt;a href="https://www.netlify.com/"&gt;Netlify&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  2. Client side rendering / Pre-rendering / Hybrid
&lt;/h2&gt;

&lt;p&gt;Depending on the user experience you need to provide and your SEO requirements, the way you render your UI can make a big difference to how your application scales. &lt;/p&gt;

&lt;p&gt;If your application is purely server side rendered, it might be worth asking yourself "does it need to be?". Moving some server-side work to the user's browser, or pre-rendering it at build time, could save resources. For example, in the past I moved an e-commerce website's checkout process to an SPA, which cut the traffic hitting their web servers by 30% on average. This made a big difference to infrastructure costs and scale at high-traffic events, with little change to the user's experience. &lt;/p&gt;

&lt;p&gt;This is a varied subject based on your use case; I have written a &lt;a href="https://aidangee.dev/blog/quick-tip-javascript-rendering"&gt;whole post on rendering options with JavaScript&lt;/a&gt; if you want to learn more.&lt;/p&gt;

&lt;p&gt;There's also a &lt;a href="https://www.youtube.com/watch?v=860d8usGC0o"&gt;fantastic recent talk&lt;/a&gt; from Rich Harris (Svelte maintainer) on the trade-offs between server rendering and SPAs. &lt;/p&gt;


&lt;h2&gt;
  
  
  3. Queues &amp;amp; Asynchronous Workers
&lt;/h2&gt;

&lt;p&gt;Queues and background workers can improve the scalability of your application by offloading slow or intensive tasks. This is commonly done by decoupling a user action from some processing that could cause a backlog and affect performance.&lt;/p&gt;

&lt;p&gt;Some common examples are uploading an image, generating a report or processing a video. If we take the example of generating a report for a large dataset, this might take a few minutes per user. You do &lt;em&gt;not&lt;/em&gt; want this process using up a large amount of resources and holding connections open on the same infrastructure that is serving your other users' requests. An extra-large report, or a few reports running simultaneously, risks affecting performance or cost. &lt;/p&gt;

&lt;p&gt;Shifting this work to the background allows it to be dealt with predictably. Sending these requests to a queue that can push (or pull) the work to a set of worker processes lets you control the scaling of that infrastructure without risking the other requests coming in. &lt;/p&gt;
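The decoupling described above can be sketched as a toy in-process queue: the request path only enqueues the job, and a separate worker loop drains the queue at its own pace. In production the queue would be SQS or Cloud Tasks and the worker a separate fleet; the names here are illustrative:

```javascript
// Toy queue + worker pattern: producers and the worker never block each other.
function createJobQueue(processJob) {
  const queue = [];
  return {
    // Called from the request path: cheap and fast, no heavy work here.
    enqueue(job) {
      queue.push(job);
    },
    // Called by the worker process: drains pending jobs one at a time,
    // so the worker fleet can be scaled independently of the web tier.
    drain() {
      const results = [];
      while (queue.length > 0) {
        results.push(processJob(queue.shift()));
      }
      return results;
    },
  };
}
```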

&lt;p&gt;What to check out: &lt;a href="https://aws.amazon.com/sqs/"&gt;AWS SQS&lt;/a&gt;, &lt;a href="https://cloud.google.com/tasks"&gt;Cloud Tasks&lt;/a&gt;, &lt;a href="https://render.com/docs/background-workers"&gt;render.com background workers&lt;/a&gt;, &lt;a href="https://devcenter.heroku.com/articles/background-jobs-queueing"&gt;Heroku workers&lt;/a&gt;, &lt;a href="https://github.com/gocraft/work"&gt;Go Worker&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  4. Spot / Preemptible Instances
&lt;/h2&gt;

&lt;p&gt;If you have auto-scaling servers, costs can quickly rack up as you grow. Spot instances (&lt;a href="https://cloud.google.com/compute/docs/instances/preemptible"&gt;preemptible VMs on GCP&lt;/a&gt;) make use of unused compute for up to 90% savings on the usual on-demand price. &lt;/p&gt;

&lt;p&gt;Integrating with spot instances has been made much easier over the past year or so with integrations such as EC2 auto-scaling and &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-fleet.html"&gt;Spot Fleet&lt;/a&gt;. But there can still be some responsibility to handle interruptions and rebalancing if your application is not fault-tolerant. To take away these concerns, services like &lt;a href="https://spot.io/solutions/amazon-web-services/"&gt;Spot&lt;/a&gt; can guarantee high availability and abstract away some of the complexity. &lt;/p&gt;

&lt;p&gt;What to check out: &lt;a href="https://aws.amazon.com/ec2/spot/"&gt;Spot intances&lt;/a&gt;, &lt;a href="https://cloud.google.com/compute/docs/instances/preemptible"&gt;Preemptible VM instances&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  5. Event Driven Architecture
&lt;/h2&gt;

&lt;p&gt;Applications usually start with a monolithic design: a single running process doing everything from logging &amp;amp; metrics to sending emails. If part of that fails, or you get a long-running task, it can clog up the whole system.&lt;/p&gt;

&lt;p&gt;An event driven approach allows you to decouple these different pieces into separate services that can scale &amp;amp; fail independently of one another. This system uses events to trigger and communicate between decoupled services. &lt;br&gt;
 &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y96f34Ir--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/wubo/image/upload/f_auto/v1634050228/blog/simple-event-driven-architecture-diagram_zsgrmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y96f34Ir--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/wubo/image/upload/f_auto/v1634050228/blog/simple-event-driven-architecture-diagram_zsgrmg.png" alt="A simple example of event driven architecture"&gt;&lt;/a&gt;&lt;br&gt;
 &lt;/p&gt;

&lt;p&gt;What to check out: &lt;a href="https://pages.awscloud.com/AWS-Learning-Path-How-to-Use-Amazon-EventBridge-to-Build-Decoupled-Event-Driven-Architectures_2020_LP_0001-SRV.html"&gt;AWS video series&lt;/a&gt;, &lt;a href=""&gt;Event Bridge&lt;/a&gt;, &lt;a href="https://cloud.google.com/eventarc/docs/overview%5D"&gt;Eventarc&lt;/a&gt;&lt;br&gt;
 &lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: Monitoring!
&lt;/h2&gt;

&lt;p&gt;Something that I think is often overlooked is the power of performance monitoring. How do you find out that a new release is using significantly more resources: from your metrics, or when you get the big infrastructure bill at the end of the month?&lt;/p&gt;

&lt;p&gt;There's a ton of application monitoring solutions out there, with integrations for all types of languages. &lt;a href="https://newrelic.com/"&gt;New Relic&lt;/a&gt; &amp;amp; &lt;a href="https://www.datadoghq.com/"&gt;DataDog&lt;/a&gt; are two popular examples. They let you get much more fine-grained than just CPU, memory usage &amp;amp; scaling. You can see information such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;External API performance&lt;/li&gt;
&lt;li&gt;Code-level visibility, e.g. inspecting a particular method's performance&lt;/li&gt;
&lt;li&gt;Traces for slow-performing requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And much more!&lt;/p&gt;

&lt;p&gt;Performance monitoring can help you stay scalable whilst you continue to make changes. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>architecture</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Quick Tip - JavaScript Rendering</title>
      <dc:creator>Aidan Gee</dc:creator>
      <pubDate>Tue, 06 Jul 2021 18:20:37 +0000</pubDate>
      <link>https://dev.to/aidangee/quick-tip-javascript-rendering-37af</link>
      <guid>https://dev.to/aidangee/quick-tip-javascript-rendering-37af</guid>
      <description>&lt;p&gt;Originally Posted - &lt;a href="https://aidangee.dev/blog/quick-tip-javascript-rendering"&gt;https://aidangee.dev/blog/quick-tip-javascript-rendering&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Popular JavaScript frameworks like &lt;a href="https://nextjs.org/"&gt;Next.js&lt;/a&gt;, &lt;a href="https://nuxtjs.org/"&gt;Nuxt.js&lt;/a&gt; and &lt;a href="https://kit.svelte.dev/"&gt;SvelteKit&lt;/a&gt; come with a number of rendering options included. But what do SSR, ISR, SSG and all the other fancy acronyms mean? &lt;/p&gt;

&lt;h3&gt;
  
  
  Client Side Rendering
&lt;/h3&gt;

&lt;p&gt;Minimal static HTML is served back to the user, most likely containing only links to scripts and CSS files. The JavaScript is in charge of generating the HTML in the browser. &lt;/p&gt;

&lt;p&gt;Because no servers are needed, you will often see platforms that host static websites for free with a generous amount of network bandwidth, e.g. &lt;a href="https://render.com/"&gt;Render&lt;/a&gt;, &lt;a href="https://firebase.google.com/docs/hosting"&gt;Firebase Hosting&lt;/a&gt;, &lt;a href="https://vercel.com/"&gt;Vercel&lt;/a&gt;, &lt;a href="https://www.netlify.com/"&gt;Netlify&lt;/a&gt;. Or you could run this yourself in &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html"&gt;AWS, storing the files in S3 and backing them with a CloudFront CDN&lt;/a&gt;, for a very low cost (often a few cents a month). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple deployments, just an index.html file and built JavaScript&lt;/li&gt;
&lt;li&gt;Easy to scale with static files that require no server side compute to serve to the user&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SEO requirements can be more complicated (&lt;a href="https://www.youtube.com/playlist?list=PLKoqnv2vTMUPOalM1zuWDP9OQl851WMM9"&gt;good video series about that on Google search YouTube channel&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Some performance metrics can be affected, for example &lt;a href="https://web.dev/cls/"&gt;CLS&lt;/a&gt; &amp;amp; &lt;a href="https://web.dev/fcp/"&gt;FCP&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;All JavaScript is shipped to the client, so it must not contain any secrets / private data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ideal for&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications that require authentication to use&lt;/li&gt;
&lt;li&gt;Applications without SEO requirements&lt;/li&gt;
&lt;li&gt;Applications that receive spikes in traffic (static HTML does not need compute that has to scale)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Static Generation (SSG)
&lt;/h3&gt;

&lt;p&gt;HTML is generated at &lt;strong&gt;build time&lt;/strong&gt; and the full static HTML will be served over the network to the user.&lt;/p&gt;

&lt;p&gt;This generates static HTML files, which means much of the same low cost hosting from the client side rendering example can be used. The difference is that with static generation you will have an HTML file generated per page, rather than just an index.html.&lt;/p&gt;
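As a minimal sketch of "an HTML file per page at build time": a build script walks the page data and emits one full HTML document for each page. The page data and template here are illustrative, not from any real framework:

```javascript
// Hypothetical build step: one HTML document per page, generated once.
const pages = [
  { path: 'index.html', title: 'Home', body: 'Welcome!' },
  { path: 'about.html', title: 'About', body: 'About us.' },
];

function renderPage({ title, body }) {
  return `<!DOCTYPE html><html><head><title>${title}</title></head><body><p>${body}</p></body></html>`;
}

// A real build would write these to disk (e.g. fs.writeFileSync) and
// upload them to static hosting; here we just collect them.
const output = new Map(pages.map((page) => [page.path, renderPage(page)]));
```

Serving these files later costs no compute at all, which is why SSG scales so cheaply.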

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to scale with static files, no servers needed&lt;/li&gt;
&lt;li&gt;Faster response times than if the file was generated on the fly&lt;/li&gt;
&lt;li&gt;Full HTML content served to the user which benefits SEO, FCP, CLS over client side rendering&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Longer build times, which increase as the content in an app increases&lt;/li&gt;
&lt;li&gt;Will often have to be rebuilt to update page content&lt;/li&gt;
&lt;li&gt;Cannot contain personalised content, the same generated page is served to all users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ideal for&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications where content / data is not required to be updated frequently&lt;/li&gt;
&lt;li&gt;Applications with high performance requirements&lt;/li&gt;
&lt;li&gt;Applications that receive spikes in traffic (static HTML does not need compute that has to scale)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Server Side Rendering
&lt;/h3&gt;

&lt;p&gt;HTML is generated on request and the full HTML is served over the network to the user.&lt;/p&gt;

&lt;p&gt;As the name implies, this requires a server side component. Each request will need to use some compute to generate the HTML (unless you are using a cache). You could use a serverless platform here, like &lt;a href="https://begin.com/"&gt;Begin&lt;/a&gt;, &lt;a href="https://vercel.com/"&gt;Vercel&lt;/a&gt; or &lt;a href="https://www.netlify.com/"&gt;Netlify&lt;/a&gt;, to avoid having to manage any servers.&lt;/p&gt;
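The render-on-request cost can be sketched as a handler that runs the template against fresh data on every hit, which is exactly what SSG avoids. The data fetcher and template are illustrative stand-ins for a database or API call:

```javascript
// Hypothetical SSR handler: fetchData runs on every single request,
// which is where the per-request compute cost comes from.
function createSsrHandler(fetchData) {
  return function handle(request) {
    const data = fetchData(request.url); // fresh data each time
    return `<!DOCTYPE html><html><body><h1>${data.heading}</h1></body></html>`;
  };
}
```

Two identical requests produce the same HTML but still do the work twice, unless you put a cache in front.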

&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full HTML content served to the user which benefits SEO, FCP, CLS over client side rendering&lt;/li&gt;
&lt;li&gt;Data can be dynamic on each request&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each request to the origin requires some server side compute resource&lt;/li&gt;
&lt;li&gt;Slower response time than static generation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ideal for&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications where content / data is updated often&lt;/li&gt;
&lt;li&gt;Applications with personalised content&lt;/li&gt;
&lt;li&gt;Applications with strict SEO requirements &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hybrid
&lt;/h2&gt;

&lt;p&gt;This can be considered a mixture of the above approaches. Frameworks like Next.js, Nuxt.js &amp;amp; SvelteKit (to name a few) have excellent APIs to achieve this. &lt;/p&gt;

&lt;p&gt;To demonstrate this, let us look at a simple example scenario with SvelteKit. Imagine we are building a blog with the following specification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A static welcome homepage&lt;/li&gt;
&lt;li&gt;A blog page that lists posts with content from a CMS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We could split these pages into different categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The homepage is static and won't change, so we can generate this at build time&lt;/li&gt;
&lt;li&gt;The blog listing page, well that depends. We could generate it at build time with static generation, but if the data source for blogs is updated often then it might make sense to use SSR, which allows the page to update as the content updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This sounds like it might be complicated to mix and match, but the frameworks make this easy. &lt;/p&gt;

&lt;p&gt;Homepage (pages/index.svelte)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;context=&lt;/span&gt;&lt;span class="s"&gt;"module"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="c1"&gt;// exporting this variable is all you need to do&lt;/span&gt;
    &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prerender&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt; 

&lt;span class="nt"&gt;&amp;lt;svelte:head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;Homepage&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"description"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"My homepage"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/svelte:head&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;main&amp;gt;&lt;/span&gt;
    &lt;span class="c"&gt;&amp;lt;!--  content goes here --&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/main&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Blog List (pages/blog/index.svelte)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;context=&lt;/span&gt;&lt;span class="s"&gt;"module"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="c1"&gt;// export a load function to grab data server side&lt;/span&gt;
    &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;load&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;blogs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://mycms.io&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;json&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;blogs&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="c1"&gt;// we have static generation disabled&lt;/span&gt;
    &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prerender&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt; 

&lt;span class="nt"&gt;&amp;lt;script&amp;gt;&lt;/span&gt;
    &lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;blogs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;   
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;main&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;BlogPosts&lt;/span&gt; &lt;span class="na"&gt;blogs=&lt;/span&gt;&lt;span class="s"&gt;{blogs}/&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/main&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;If you wanted to switch this to being statically generated as discussed above, you could just set prerender to true. (Be aware that in SvelteKit the load function runs on both the client and the server.)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Incremental Static Regeneration (ISR)
&lt;/h3&gt;

&lt;p&gt;One more I wanted to include under the hybrid list is a feature of Next.js and Nuxt.js called &lt;a href="https://vercel.com/docs/next.js/incremental-static-regeneration#"&gt;Incremental Static Regeneration (ISR)&lt;/a&gt;. This can be viewed as a middle ground between SSR and SSG: with ISR the page is generated at build time, as it would be with static generation, but you also specify a duration, and after that duration has passed the page will be regenerated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3e2JQlqH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://vercel.com/_next/image%3Furl%3D%252Fdocs-proxy%252Fstatic%252Fdocs%252Fisr%252Fregeneration.png%26q%3D75%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3e2JQlqH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://vercel.com/_next/image%3Furl%3D%252Fdocs-proxy%252Fstatic%252Fdocs%252Fisr%252Fregeneration.png%26q%3D75%26w%3D1080" alt="ISR Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, you get the benefits of static generation plus the increased frequency of updates that you get from SSR. This would actually be a good solution for our blog list page from above: ISR allows us to have a pre-rendered page that still updates frequently enough to pick up any new blogs added to the CMS. &lt;/p&gt;
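The idea behind ISR can be sketched as a cache that serves a pre-rendered page until a revalidation window elapses, then regenerates it. This is a simplified model, not the framework's implementation: real ISR serves the stale page immediately and regenerates in the background, whereas this sketch regenerates inline. The renderer and window are illustrative:

```javascript
// Toy ISR-style cache: pages are pre-rendered and only rebuilt after
// revalidateMs has passed. `now` is injectable so the clock can be faked.
function createIsrCache(renderPage, revalidateMs, now = Date.now) {
  const cache = new Map(); // path -> { html, generatedAt }

  return function get(path) {
    const entry = cache.get(path);
    const fresh = entry && now() - entry.generatedAt < revalidateMs;
    if (fresh) return entry.html; // serve the pre-rendered page

    // First request, or the window has elapsed: regenerate, then serve.
    const html = renderPage(path);
    cache.set(path, { html, generatedAt: now() });
    return html;
  };
}
```

Between regenerations every request is as cheap as SSG; the render cost is paid at most once per window per page.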

&lt;h2&gt;
  
  
  Tip
&lt;/h2&gt;

&lt;p&gt;Unfortunately, there is no single answer to how you should render your application; it is highly specific to what you are building. The good news is that hybrid rendering lets you use the best mix within one application. &lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;best performance and low cost, static generation is recommended&lt;/strong&gt;. I find myself asking 'can I pre-render this?' more and more. When I have something on the page that is dynamic, like comments on a blog post, I'll mix in a component that fetches and renders that data client side before reaching for SSR. Why? This allows the initial content to be pre-rendered for the user, with the dynamic part sprinkled on top in the client.  &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>svelte</category>
      <category>javascript</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Getting Started with Machine Learning Models in the Browser with TensorFlow.js</title>
      <dc:creator>Aidan Gee</dc:creator>
      <pubDate>Wed, 30 Jun 2021 21:06:29 +0000</pubDate>
      <link>https://dev.to/aidangee/getting-started-with-machine-learning-models-in-the-browser-with-tensorflow-js-14ah</link>
      <guid>https://dev.to/aidangee/getting-started-with-machine-learning-models-in-the-browser-with-tensorflow-js-14ah</guid>
      <description>&lt;p&gt;Originally posted here: &lt;a href="https://aidangee.dev/blog/getting-started-with-tensorflow-in-the-browser" rel="noopener noreferrer"&gt;https://aidangee.dev/blog/getting-started-with-tensorflow-in-the-browser&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;There were a great set of talks this year at &lt;a href="https://www.google.com/io" rel="noopener noreferrer"&gt;Google IO 2021&lt;/a&gt;; one that piqued my interest was &lt;a href="https://www.youtube.com/watch?v=qKkjCQlS1g4" rel="noopener noreferrer"&gt;this talk on machine learning &amp;amp; TensorFlow&lt;/a&gt;. There is a lot of great new stuff here, but I'll summarize some key points from a web perspective. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TensorFlow Lite models can now be run directly in the browser 🎉&lt;/li&gt;
&lt;li&gt;Supports running all TFLite Task Library models, for example image classification, object detection, image segmentation and NLP&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I wanted to see how viable it is to use ML models on-device in the browser.&lt;/p&gt;

&lt;h2&gt;
  
  
  TensorFlow.js &amp;amp; Pre-trained Models
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.tensorflow.org/js/" rel="noopener noreferrer"&gt;TensorFlow.js&lt;/a&gt; is a library for machine learning in JavaScript and can be used both in the browser and Node.js. We can use this library to build, run and train supported models. &lt;/p&gt;

&lt;p&gt;What is great for starters in the ML world (like me), is that this library comes with a &lt;a href="https://www.tensorflow.org/js/models" rel="noopener noreferrer"&gt;number of pre-trained TensorFlow.js models&lt;/a&gt;. So anyone can jump in and start using things like image object detection or text toxicity detection without the huge barrier to entry that is model training. &lt;/p&gt;

&lt;p&gt;Let's take a look at how the code looks for running object detection on an image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Note: Require the cpu and webgl backend and add them to package.json as peer dependencies.&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tensorflow/tfjs-backend-cpu&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tensorflow/tfjs-backend-webgl&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;load&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tensorflow-models/coco-ssd&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;img&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Load the model.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Classify the image.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;img&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Predictions: &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;predictions&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So in just a few lines of JavaScript we have managed to load and run an ML model in the browser on an image 🎉. This is not restricted to images either: the detect method will accept a canvas element, a video element and a &lt;a href="https://js.tensorflow.org/api/latest/#tensor" rel="noopener noreferrer"&gt;3D tensor shape&lt;/a&gt;. So quite quickly we could do something like track objects as a video plays:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Note: Require the cpu and webgl backend and add them to package.json as peer dependencies.&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tensorflow/tfjs-backend-cpu&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tensorflow/tfjs-backend-webgl&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;load&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tensorflow-models/coco-ssd&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;videoEl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;video&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Load the model.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// Classify the frame of the video.&lt;/span&gt;
  &lt;span class="c1"&gt;// timeupdate is a quick way to run this as the video plays&lt;/span&gt;
  &lt;span class="nx"&gt;videoEl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addEventListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;timeupdate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;predictions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;detect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;videoEl&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Predictions: &lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;predictions&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The predictions you get back from the detect function look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;bbox&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;person&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;score&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.8380282521247864&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;bbox&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;x&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;y&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;width&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;height&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sports ball&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;score&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;0.74644153267145157&lt;/span&gt;
  &lt;span class="p"&gt;}]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: the position (bbox) values you get back are based on the original video resolution, not the size the video is displayed at.&lt;/p&gt;
&lt;/blockquote&gt;
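&lt;p&gt;That means if you overlay these boxes on a video displayed at a different size, the coordinates need scaling first. A minimal sketch of that conversion (the function name is mine, not part of TensorFlow.js):&lt;/p&gt;

```javascript
// Sketch: scale a bbox from the video's intrinsic resolution to the size
// it is actually displayed at. scaleBbox is a hypothetical helper name.
function scaleBbox(bbox, videoWidth, videoHeight, displayWidth, displayHeight) {
  const [x, y, width, height] = bbox;
  const scaleX = displayWidth / videoWidth;
  const scaleY = displayHeight / videoHeight;
  return [x * scaleX, y * scaleY, width * scaleX, height * scaleY];
}
```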

&lt;p&gt;You could use this data to infer the context of what is in a particular video, or to track certain objects as it plays ... all in the browser. &lt;/p&gt;
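&lt;p&gt;For example, to act on that data you might pull out only the confident detections of one class. A small sketch, assuming the prediction shape above (the helper name and the threshold are my own choices):&lt;/p&gt;

```javascript
// Sketch: keep only confident detections of a given class from the
// predictions array returned by detect. filterDetections is hypothetical.
function filterDetections(predictions, className, minScore) {
  return predictions
    .filter((p) => p.class === className)
    .filter((p) => p.score >= minScore);
}
```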

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fwubo%2Fimage%2Fupload%2Fc_scale%2Cf_auto%2Cq_72%2Cw_1435%2Fv1625085017%2Fblog%2Fml-obbect-recognition-video.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fwubo%2Fimage%2Fupload%2Fc_scale%2Cf_auto%2Cq_72%2Cw_1435%2Fv1625085017%2Fblog%2Fml-obbect-recognition-video.png" alt="object recognition highlighted football in a video"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Thoughts
&lt;/h2&gt;

&lt;p&gt;I could not believe how easy this was to get going with. The pre-trained models are a breeze to use and I would definitely recommend checking out the &lt;a href="https://www.tensorflow.org/js/models" rel="noopener noreferrer"&gt;full list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Depending on how you plan to use this functionality, something to keep in mind is the download time of the models and how it affects the UX. For example, I found the Coco SSD model took about 10 seconds to download on a solid Wi-Fi connection. So if your application relies on this, you are going to have extremely long start-up times and probably frustrated users. Loading the models in the background before the user needs them would be a nicer solution. &lt;/p&gt;
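&lt;p&gt;One way to do that background loading is to start the download early and cache the promise, so later code awaits the same in-flight load. A rough sketch (&lt;code&gt;preloadModel&lt;/code&gt; is a hypothetical name; pass in the coco-ssd &lt;code&gt;load&lt;/code&gt; function):&lt;/p&gt;

```javascript
// Sketch: kick off the model download once and cache the promise so the
// UI can await the same in-flight load later. preloadModel is my name;
// `load` stands in for the coco-ssd load function.
let modelPromise = null;
function preloadModel(load) {
  if (modelPromise === null) {
    modelPromise = load(); // only the first call triggers a download
  }
  return modelPromise;
}
```

&lt;p&gt;For example, call it once when the page loads, then await it again inside the handler that actually needs predictions.&lt;/p&gt;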

&lt;p&gt;I am excited to see this space develop over the next few years. I think we all know about the growth of AI / ML, but having it available to run so easily with JavaScript in the browser can only help accelerate its usage. &lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Deno Deploy Beta - First look &amp; start up times</title>
      <dc:creator>Aidan Gee</dc:creator>
      <pubDate>Tue, 29 Jun 2021 14:38:00 +0000</pubDate>
      <link>https://dev.to/aidangee/deno-deploy-beta-first-look-start-up-times-4gj3</link>
      <guid>https://dev.to/aidangee/deno-deploy-beta-first-look-start-up-times-4gj3</guid>
      <description>&lt;p&gt;Originally posted: &lt;a href="https://aidangee.dev/blog/deno-deploy-beta-first-look" rel="noopener noreferrer"&gt;https://aidangee.dev/blog/deno-deploy-beta-first-look&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Deno
&lt;/h3&gt;

&lt;p&gt;If you haven't heard of &lt;a href="https://github.com/denoland/deno" rel="noopener noreferrer"&gt;Deno&lt;/a&gt; (pronounced 'dee-no'), it is a &lt;strong&gt;JavaScript&lt;/strong&gt; and &lt;strong&gt;TypeScript&lt;/strong&gt; runtime by Ryan Dahl, the creator of Node.js. &lt;/p&gt;

&lt;p&gt;In a nutshell, Deno allows you to run JavaScript on the V8 engine much like Node.js does, but there are a few key differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supports TypeScript out of the box&lt;/li&gt;
&lt;li&gt;No centralised package manager like NPM&lt;/li&gt;
&lt;li&gt;Aims to have a browser compatible API (e.g. fetch and web workers)&lt;/li&gt;
&lt;li&gt;Is 'secure' by default: you must explicitly enable network access, file access, etc.&lt;/li&gt;
&lt;li&gt;Built in tools for code formatting, linting, test running and &lt;a href="https://deno.land/manual/tools" rel="noopener noreferrer"&gt;more...&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Has a set of &lt;a href="https://deno.land/std/" rel="noopener noreferrer"&gt;standardised modules&lt;/a&gt; reviewed by the Deno team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ryan Dahl himself has spoken about these decisions in a number of talks. I would recommend taking a look at this talk he gave &lt;a href="https://www.youtube.com/watch?v=M3BM9TB-8yA" rel="noopener noreferrer"&gt;'10 Things I Regret About Node.js'&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to dive deeper into Deno there are a great set of resources on the &lt;a href="https://github.com/denolib/awesome-deno" rel="noopener noreferrer"&gt;“awesome deno” GitHub&lt;/a&gt; that you can use. &lt;/p&gt;

&lt;h3&gt;
  
  
  So what is Deno Deploy then?
&lt;/h3&gt;

&lt;p&gt;From Ryan Dahl himself :&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Deno Deploy is a multi-tenant JavaScript engine running in 25 data centers across the world. The service deeply integrates cloud infrastructure with the V8 virtual machine, allowing users to quickly script distributed HTTPS servers. This novel “serverless” system is designed from the ground up for modern JavaScript programming.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Ok, but what does this mean? Deno Deploy wants to be &lt;em&gt;the&lt;/em&gt; way you deploy your server-side Deno code. By using the service you get fast CI/CD and serverless deployments optimised for Deno.&lt;/p&gt;

&lt;p&gt;For an in depth look at the features of Deno Deploy, checkout their latest &lt;a href="https://deno.com/blog/deploy-beta1" rel="noopener noreferrer"&gt;blog post&lt;/a&gt; or &lt;a href="https://deno.com/deploy/docs" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  First look
&lt;/h3&gt;

&lt;p&gt;Reminder: this is the first version of Deno Deploy, and it is a beta. So I wouldn't expect this to be the final product, but it is still fun to see what is already available. &lt;/p&gt;

&lt;p&gt;On that note, the initial public beta for Deno Deploy is free to use. So it is a great time to jump in and try it, they have a list of limits that apply during the beta &lt;a href="https://deno.com/deploy/docs/pricing-and-limits" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;When you sign up and login to Deno Deploy you will be asked to create a project to house the Deno services you intend to deploy. You will also be met with a couple of examples ready to deploy at the click of a button.&lt;/p&gt;

&lt;p&gt;Project Dashboard: &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fwubo%2Fimage%2Fupload%2Fc_scale%2Cf_auto%2Cq_auto%3Abest%2Cw_1080%2Fv1624639605%2Fblog%2Fdeno-deploy-project-dashboard_sf1zyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fres.cloudinary.com%2Fwubo%2Fimage%2Fupload%2Fc_scale%2Cf_auto%2Cq_auto%3Abest%2Cw_1080%2Fv1624639605%2Fblog%2Fdeno-deploy-project-dashboard_sf1zyp.png" title="Deno Deploy project dashboard" alt="Deno Deploy project dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's look at the code for the Hello World example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;addEventListener("fetch", (event) =&amp;gt; {
  event.respondWith(new Response("Hello world"));
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if you have been using Node.js with Express or running on AWS Lambda, this might look a bit alien. What I find interesting about this example is that this is not code you can just pop into Deno (&lt;a href="https://github.com/denoland/deno/issues/5957#issuecomment-722568905" rel="noopener noreferrer"&gt;yet&lt;/a&gt;) to run a server locally. It's Deno Deploy sprinkling some of that platform &lt;em&gt;magic&lt;/em&gt; on top, allowing you to use the FetchEvent API that you would use in a Service Worker in the browser. So I'm already thinking this is going to be more of an all-encompassing Deno platform rather than just a hosting service.&lt;/p&gt;
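&lt;p&gt;To get a feel for that API, the same handler style extends naturally to simple routing. A sketch (the paths are illustrative, and the handler is pulled out as a named function only so it can be exercised outside the platform):&lt;/p&gt;

```javascript
// Sketch: routing inside a fetch-event handler. handleFetch inspects the
// request URL and responds with different Response objects per path.
function handleFetch(event) {
  const url = new URL(event.request.url);
  if (url.pathname === "/hello") {
    event.respondWith(new Response("Hello world"));
  } else {
    event.respondWith(new Response("Not found", { status: 404 }));
  }
}

// Register the handler where a fetch-event platform is available.
if (typeof addEventListener === "function") {
  addEventListener("fetch", handleFetch);
}
```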

&lt;h3&gt;
  
  
  Deploying
&lt;/h3&gt;

&lt;p&gt;There are a couple of ways you can deploy your code. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect your GitHub repo&lt;/li&gt;
&lt;li&gt;Provide a URL to a repository&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first is almost a requirement of a hosting platform nowadays: you connect a repository from GitHub and have it build and deploy. What is nice to see is the inclusion of preview deployments, which create a deployment whenever you push to a branch. I love this feature; it makes testing and pull request reviews just that bit faster. &lt;/p&gt;

&lt;p&gt;The second option, however, is very ... Deno. It fits in with the theme of decentralised packages and importing via a URL. I can see this making it really easy to share your open-source service with others and let them host it themselves, a nice touch.&lt;/p&gt;

&lt;h3&gt;
  
  
  Impressive Start Times
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;We believe Deploy is the fastest serverless system available. We hope to nail down this bold claim with performance benchmarks in future releases.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is a quote from their blog that made me want to test out this beta. It is a bold claim to say the least, but I think performance needs to be something every developer has in mind when building a modern web app, especially with the push from Google on &lt;a href="https://web.dev/vitals/" rel="noopener noreferrer"&gt;core web vitals&lt;/a&gt; and its effect on your website's SEO. A fast, easy-to-use serverless platform is right up there on my Christmas list. &lt;/p&gt;

&lt;p&gt;So, what I wanted to look at was the speed of the platform itself. The simple hello world app is perfect for a basic test: how fast does it respond? For these tests, I compared the &lt;a href="https://web.dev/time-to-first-byte/" rel="noopener noreferrer"&gt;TTFB&lt;/a&gt; over a number of requests. &lt;/p&gt;
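&lt;p&gt;As a rough illustration of how such a measurement could be scripted (not the exact method I used), the global &lt;code&gt;fetch&lt;/code&gt; API resolves once response headers arrive, which approximates the first byte; the function names and sample values below are mine:&lt;/p&gt;

```javascript
// Sketch: approximate TTFB for a URL with the global fetch API.
// fetch resolves when response headers arrive, so the elapsed time
// is only a rough first-byte figure, not a precise TTFB.
async function headerLatencyMs(url) {
  const start = performance.now();
  const res = await fetch(url);
  const elapsed = performance.now() - start;
  await res.arrayBuffer(); // drain the body so the connection can be reused
  return elapsed;
}

// Average a set of latency samples in milliseconds.
function averageMs(samples) {
  const total = samples.reduce((sum, ms) => sum + ms, 0);
  return total / samples.length;
}
```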

&lt;h4&gt;
  
  
  Deno Deploy
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;TTFB of Cold Start&lt;/strong&gt;: 575 ms (avg of 5 requests)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TTFB once warmed&lt;/strong&gt; : 44ms (avg of 50 requests)&lt;/p&gt;

&lt;p&gt;For a beta, I think these are impressive numbers. To give some perspective, I also tested &lt;a href="https://www.netlify.com/" rel="noopener noreferrer"&gt;Netlify&lt;/a&gt;, whose platform provides Netlify Functions: a similarly easy-to-use serverless deployment experience, but for Node.js. Deploying the same 'Hello World' example on Netlify Functions (in Europe), the same tests looked like this:&lt;/p&gt;

&lt;h4&gt;
  
  
  Netlify
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;TTFB of Cold Start&lt;/strong&gt;: 812 ms (avg of 5 requests)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TTFB once warmed&lt;/strong&gt; : 138 ms (avg of 50 requests)&lt;/p&gt;

&lt;p&gt;Now I don't claim these tests are an exact science, and I definitely don't think this means you should be choosing Deno Deploy over Netlify just yet. But I believe what we are really seeing here is the difference between &lt;a href="https://en.wikipedia.org/wiki/Edge_computing" rel="noopener noreferrer"&gt;compute on the edge&lt;/a&gt; and a data centre, and the speed boost Deno Deploy gets here is probably mostly down to that. Compute on the edge is becoming more common: popular options like &lt;a href="https://aws.amazon.com/lambda/edge/" rel="noopener noreferrer"&gt;Lambda@Edge&lt;/a&gt; and &lt;a href="https://workers.cloudflare.com/" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt; have steadily been getting better over the past few years and much more accessible to developers. Deno Deploy having this tech from the get-go gives it a speed advantage over some existing platforms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thoughts
&lt;/h3&gt;

&lt;p&gt;I think there are some good early signs here. Firstly, there is a free open beta that anyone can try out and give feedback on, so anyone in the community can check it out and potentially help shape it. &lt;/p&gt;

&lt;p&gt;The tech is there to provide a great experience for developers and end users already. Even at this early stage it is easy to use and get up and running, and the edge compute is a nice performance inclusion. &lt;/p&gt;

&lt;p&gt;I have my reservations about some of the platform magic that seems to be in there at the moment. The Deno Deploy homepage specifically lists 'No Vendor Lock In' as one of its aims, but it looks like there are already some features that would make it hard to move away from the platform; just look at &lt;a href="https://deno.com/blog/deploy-beta1#broadcastchannel" rel="noopener noreferrer"&gt;Broadcast Channels&lt;/a&gt;. So this will be something I keep an eye on. &lt;/p&gt;

&lt;p&gt;The Deno team is aiming to enter general availability by the end of this year, so keep an eye out for updates and that all-important pricing model.&lt;/p&gt;

</description>
      <category>deno</category>
      <category>javascript</category>
      <category>typescript</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
