<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matuzalém Teles</title>
    <description>The latest articles on DEV Community by Matuzalém Teles (@matuzalemsteles).</description>
    <link>https://dev.to/matuzalemsteles</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F107593%2F13ac1ed4-5f29-4326-a409-dd528946c468.jpeg</url>
      <title>DEV Community: Matuzalém Teles</title>
      <link>https://dev.to/matuzalemsteles</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/matuzalemsteles"/>
    <language>en</language>
    <item>
      <title>LOLTV.gg - Building VLR.gg/HLTV.org for League of Legends — but way more modern and unique</title>
      <dc:creator>Matuzalém Teles</dc:creator>
      <pubDate>Wed, 21 May 2025 02:11:14 +0000</pubDate>
      <link>https://dev.to/matuzalemsteles/loltvgg-building-vlrgghltvorg-for-league-of-legends-but-way-more-modern-and-unique-52eg</link>
      <guid>https://dev.to/matuzalemsteles/loltvgg-building-vlrgghltvorg-for-league-of-legends-but-way-more-modern-and-unique-52eg</guid>
      <description>&lt;p&gt;Over the past few months, I've been working solo on building &lt;a href="https://loltv.gg" rel="noopener noreferrer"&gt;&lt;strong&gt;LOLTV.gg&lt;/strong&gt;&lt;/a&gt; — a hub for League of Legends eSports, inspired by platforms like HLTV.org (for CS) and VLR.gg (for Valorant), but with a much more modern and unique approach.&lt;/p&gt;

&lt;p&gt;My goal is to create a space where LoL eSports fans can come together to explore detailed match data, engage in meaningful discussions, and dive deep into everything from player performance to team histories — all in a way that’s clean, fast, and purpose-built for the community.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ccyvcbjp1k7rubk250r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ccyvcbjp1k7rubk250r.png" alt="Tournament Page" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s already live:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detailed match pages&lt;/strong&gt; with timelines, scoreboards, and team stats
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Player profiles&lt;/strong&gt; with recent performance, historical stats, trophies, and seasonal insights
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team pages&lt;/strong&gt; with roster info, past results, and tournament stats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recent transfers page&lt;/strong&gt; tracking teams' roster changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tournament pages&lt;/strong&gt; with bracket, standings, stages, schedule, results, and stats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Forums&lt;/strong&gt; for discussion with a custom editor tailored for LoL talk (builds, meta, etc.)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community-driven news&lt;/strong&gt;, where users can submit news that gets featured like X (Twitter) posts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro builds page&lt;/strong&gt; with champion statistics built from official League of Legends competition data. Discover trends, champion performance, the most common pro builds, champion presence, and more.&lt;/li&gt;
&lt;li&gt;A clean UI that keeps things intuitive and fast&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;p&gt;The platform is built on top of Next.js, using Incremental Static Regeneration (ISR) to cache pages effectively. Since most match and tournament pages are historical and rarely change, they’re only revalidated when a new comment is added — which triggers a cache update for that specific page.&lt;/p&gt;
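&lt;p&gt;The revalidate-on-comment idea can be sketched framework-free like this (hypothetical helper names for illustration, not the actual Next.js API):&lt;/p&gt;

```javascript
// Framework-free sketch of revalidate-on-comment: pages render once into a
// cache and are only re-rendered after an explicit invalidation.
// getPage/revalidate are illustrative names, not Next.js APIs.
const pageCache = new Map();
let renders = 0; // counts how many expensive renders actually happened

function getPage(path, render) {
  if (!pageCache.has(path)) {
    renders++; // cache miss: do the expensive render once
    pageCache.set(path, render(path));
  }
  return pageCache.get(path); // cache hit: serve the stored result
}

function revalidate(path) {
  // called when a new comment is added to that page
  pageCache.delete(path);
}
```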

&lt;p&gt;Some UI components — like the recent activity sidebar or today’s matches — are updated on the client side, and their context changes depending on the page you’re on. This can occasionally cause a visual “blink” with SSR, something I’m still experimenting with. One idea is to move fully to client-side rendering for those pieces or find a hybrid approach that avoids unnecessary flashes while keeping the page fast.&lt;/p&gt;

&lt;p&gt;In general, Next.js with ISR works really well, but there are still challenges when it comes to rendering uncached pages with server-side rendering. These pages can sometimes take too long to load, so optimizing SSR performance for cache misses is a priority in the next iteration.&lt;/p&gt;




&lt;p&gt;There’s still a lot in progress. One of the biggest upcoming features is &lt;strong&gt;Global Stats&lt;/strong&gt; — an entirely new layer that doesn't exist on any current platform. You'll be able to segment data by patch, region, champion, and more to uncover trends and insights.&lt;/p&gt;

&lt;p&gt;We also have a &lt;strong&gt;Live Score&lt;/strong&gt; feature — real-time stats during games, along with an interactive match timeline, similar to what HLTV.org offers, but adapted for LoL.&lt;/p&gt;




&lt;p&gt;The image below shows the &lt;a href="https://loltv.gg/stats/champion/skarner" rel="noopener noreferrer"&gt;&lt;strong&gt;champion build page for Skarner&lt;/strong&gt;&lt;/a&gt; during the current season. It includes all relevant stats, preferred builds, and Skarner’s performance against other champions in official matches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl31s05ye6jhhg57ysynr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl31s05ye6jhhg57ysynr.png" alt="Skarner Build Page" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh19lhwk38zo5pu8jo3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foh19lhwk38zo5pu8jo3p.png" alt="Skarner Build Page" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;A huge focus is on the &lt;a href="https://loltv.gg/forums" rel="noopener noreferrer"&gt;&lt;strong&gt;forum experience&lt;/strong&gt;&lt;/a&gt; — not just a message board, but a modern place to break down builds, react to patch notes, analyze pro plays, and share ideas. The advanced editor makes it easier to express complex opinions and link them with actual data from the platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuctq0xwxbxdz57z24fqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuctq0xwxbxdz57z24fqt.png" alt="Forum page" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Here are a few more screenshots to give you an idea of where things are going:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31txsoa522id96ovxyii.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31txsoa522id96ovxyii.png" alt="Match Page" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozfpbnguhxnlls6w9d3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fozfpbnguhxnlls6w9d3n.png" alt="Match overview stats" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3ycie9njmi56t4e0zdz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3ycie9njmi56t4e0zdz.png" alt="Match game scoreboard" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyokugsn48qiltayw58o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqyokugsn48qiltayw58o.png" alt="Match game timeline" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1uwphzn2kbhhec35670.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1uwphzn2kbhhec35670.png" alt="Match page" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;If you're a fan of LoL eSports or just love data-driven platforms, I’d love your feedback. The platform is evolving fast, and a lot of features are being built based on how the community wants to engage with the game.&lt;/p&gt;

&lt;p&gt;Thanks for reading — more updates soon!&lt;/p&gt;




</description>
      <category>leagueoflegends</category>
      <category>esports</category>
      <category>react</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Fleet Serverless Function Introduction</title>
      <dc:creator>Matuzalém Teles</dc:creator>
      <pubDate>Tue, 16 Jun 2020 15:40:25 +0000</pubDate>
      <link>https://dev.to/fleet/fleet-serverless-function-introduction-5cle</link>
      <guid>https://dev.to/fleet/fleet-serverless-function-introduction-5cle</guid>
      <description>&lt;p&gt;In February this year, we announced &lt;a href="https://fleetfn.com" rel="noopener noreferrer"&gt;Fleet&lt;/a&gt; (Formerly Hole), a FaaS platform built on Node.js to be faster than other platforms and to create a more faithful integration with the ecosystem. In this post, I will clarify how all of this works and what we are bringing differently to the Serverless ecosystem and in the next article I will comment on the platform.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
What are Fleet Functions?

&lt;ul&gt;
&lt;li&gt;Common issues&lt;/li&gt;
&lt;li&gt;Fleet Solution&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Scaling&lt;/li&gt;

&lt;li&gt;HTTP REST&lt;/li&gt;

&lt;li&gt;Use cases&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
What are Fleet Functions?
&lt;/h1&gt;

&lt;p&gt;It is a technology that executes Node.js functions invoked by HTTP requests, auto-scaling from zero to N instances with near-zero cold starts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ƒ Fleet Simple HTTP Endpoint!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Live Example: &lt;a href="https://examples.runfleet.io/simple-http-endpoint/" rel="noopener noreferrer"&gt;https://examples.runfleet.io/simple-http-endpoint/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Common issues
&lt;/h2&gt;

&lt;p&gt;Briefly, a cold start happens when your service receives a request and the platform has to provision your function before it can handle that request, usually following this flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Event invoke&lt;/li&gt;
&lt;li&gt;Start new VM&lt;/li&gt;
&lt;li&gt;Download code (normally from S3)&lt;/li&gt;
&lt;li&gt;Setup runtime&lt;/li&gt;
&lt;li&gt;Init function&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffleetfn.com%2Fimages%2Fblog%2Fother-platforms%25402x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffleetfn.com%2Fimages%2Fblog%2Fother-platforms%25402x.png" alt="Example of functions provisioning on other platforms" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Steps 2 to 4 are what we call the cold start; on subsequent invocations, if the instance is available and cached, the provider can skip these steps and execute the function with a warm start. A common misunderstanding is what happens under load: when a function is already running and receives a new invocation, the provider spins up a new instance with a cold start, and the same happens when your application receives many invocations simultaneously; every extra instance starts cold.&lt;/p&gt;
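&lt;p&gt;The difference between cold and warm starts can be modeled with a toy instance cache (illustrative only, not how any real provider is implemented):&lt;/p&gt;

```javascript
// Toy model of cold vs warm starts: the first invocation of a function id
// pays the provisioning/init cost; later invocations reuse the cached instance.
const instances = new Map();

function invoke(fnId, init, event) {
  let instance = instances.get(fnId);
  const coldStart = instance === undefined;
  if (coldStart) {
    instance = init(); // stands in for "start VM + setup runtime + init"
    instances.set(fnId, instance);
  }
  return { body: instance.handler(event), coldStart };
}
```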

&lt;p&gt;One workaround some teams adopt is to ping the function from time to time to keep the instance alive, or to use a provisioned-concurrency service. Both increase your expenses, require you to know exactly when your application's traffic spikes occur, and demand monitoring to prevent unnecessary costs, which defeats the idea of not worrying about the infrastructure in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fleet Solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffleetfn.com%2Fimages%2Ffleet-isolate-sandbox-function%403x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffleetfn.com%2Fimages%2Ffleet-isolate-sandbox-function%403x.png" alt="Fleet Solution" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fleet Functions solve this by executing your functions safely and quickly: we focus on running several functions inside a single Node.js process, which can handle thousands of functions at the same time, each executed in an environment that is isolated, safe, and fast.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://fleetfn.com/docs/how-it-works.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Isolated&lt;/strong&gt;&lt;/a&gt; Able to perform a function with isolated memory and allow them to use CPU according to the provisioned limits.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://fleetfn.com/docs/security.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Safe&lt;/strong&gt;&lt;/a&gt; In the same instance, one function is not able to observe the other or obtain resources from other functions (such as information from process.env, context, requests...), this also includes access to the File System.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://fleetfn.com/docs/how-it-works.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Fast&lt;/strong&gt;&lt;/a&gt; We eliminated the steps "Start new VM" and "Setup runtime", the source code, is available in each region where the function is available, close to the execution time. We were able to execute the functions faster within the same process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means that we can run Node.js functions much faster than other platforms and the functions consume an order of magnitude less memory while maintaining &lt;a href="https://fleetfn.com/docs/security.html" rel="noopener noreferrer"&gt;security&lt;/a&gt; and an isolated environment.&lt;/p&gt;

&lt;p&gt;To impose a &lt;a href="https://fleetfn.com/docs/security.html" rel="noopener noreferrer"&gt;safe environment&lt;/a&gt;, Fleet had to &lt;a href="https://fleetfn.com/docs/limits.html" rel="noopener noreferrer"&gt;limit some Node.js APIs&lt;/a&gt; to increase security and prevent suspicious functions from having access to resources, each running function only has access to resources that have been granted to it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Scaling
&lt;/h1&gt;

&lt;p&gt;One of Fleet's main differentiators is how it scales your Node.js functions. Other platforms scale only via concurrency: each VM instance handles one invocation at a time, and if it is busy a new instance is provisioned, up to a limit (normally 1,000 concurrent instances).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffleetfn.com%2Fimages%2Fblog%2Fhole-technology%25402x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffleetfn.com%2Fimages%2Fblog%2Fhole-technology%25402x.png" alt="Fleet scales functions according to the asynchronous limit" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://fleetfn.com/docs/scaling.html" rel="noopener noreferrer"&gt;Differently in Fleet&lt;/a&gt;, we have managed that its function can handle many asynchronous requests at a time within a configured limit, if this limit is reached for some time a new instance is provisioned for its function in just a few ms. This means that during the time that your function is running it can handle many requests and take advantage of the connection established with your database during several requests.&lt;/p&gt;

&lt;p&gt;In Fleet there is no fixed concurrency limit; it is dynamic per region. We do everything to handle the maximum number of requests, and you control the asynchronous limit, so you can multiply the number of requests your application can handle.&lt;/p&gt;
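&lt;p&gt;The asynchronous-limit idea can be sketched in a few lines of plain JavaScript (illustrative only, not Fleet's internals):&lt;/p&gt;

```javascript
// Minimal async concurrency limiter: up to maxConcurrent tasks run at once,
// the rest wait in a queue and start as soon as a slot frees up.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];

  function runNext() {
    if (active >= maxConcurrent) return; // all slots busy
    const next = queue.shift();
    if (next === undefined) return; // nothing waiting
    active++;
    Promise.resolve()
      .then(next.task)
      .then(next.resolve, next.reject)
      .finally(() => {
        active--;
        runNext(); // a slot freed up: pull the next task
      });
  }

  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      runNext();
    });
}
```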

&lt;h1&gt;
  
  
HTTP REST
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9bizerx6prehpau1qkc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F9bizerx6prehpau1qkc5.png" alt="Deployment overview" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fleet Functions are invoked via plain HTTP, with no need for an extra API Gateway service. Each new deployment generates a preview URL (at &lt;code&gt;&amp;lt;uid&amp;gt;-&amp;lt;project-name&amp;gt;.runfleet.io&lt;/code&gt;), and with one option you can promote a deployment to production on an exclusive subdomain at &lt;code&gt;&amp;lt;project-name&amp;gt;.runfleet.io&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;All deployments are made within a project created on &lt;a href="https://console.fleetfn.com" rel="noopener noreferrer"&gt;console.fleetfn.com&lt;/a&gt;, where you can invite members to teams with specific privileges... but that's a subject for another article.&lt;/p&gt;

&lt;p&gt;You may want to &lt;a href="https://fleetfn.com/docs/deployments.html" rel="noopener noreferrer"&gt;read more about it here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Use cases
&lt;/h1&gt;

&lt;p&gt;Fleet is built to run Node.js functions much faster, and will soon run functions in other languages using WebAssembly. With that in mind, Fleet does not deal with container provisioning like Cloud Run, nor does it allow you to create your own custom runtime environment.&lt;/p&gt;

&lt;p&gt;It can serve your application's APIs well, scaling to meet high demand and saving money during low demand.&lt;/p&gt;

&lt;p&gt;Fleet can handle microservices, calls between functions, and traffic changes with great confidence. We are working on what we call the &lt;a href="https://fleetfn.com/vpf" rel="noopener noreferrer"&gt;Virtual Private Function, or VPF&lt;/a&gt;: a network of private functions. It isolates the functions inside the VPF from the outside world, allows only some of them to be invoked externally, and enables better monitoring and sharing between VPFs; in the future, we also want to let you securely connect your current network to the VPF network. In addition, we are working on &lt;a href="https://fleetfn.com/traffic-shifting" rel="noopener noreferrer"&gt;Traffic Shifting&lt;/a&gt;, our service for canary deployments driven by data-based rules. You define an autonomous set of rules that increase the reliability of the traffic change and control how the split is performed: for example, a certain number of successful or failed requests can increase the traffic percentage for a specific deployment. This is aimed at services that are sensitive to code problems or at testing new features.&lt;/p&gt;
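&lt;p&gt;A rule of that kind boils down to a small decision function. The shape below is hypothetical, for illustration only, and is not the actual Traffic Shifting API:&lt;/p&gt;

```javascript
// Sketch of a data-driven canary rule: given the observed success/failure
// counts for the canary deployment, decide its next traffic percentage.
// The rule names (minRequests, maxErrorRate, step) are illustrative.
function nextTrafficPercent(current, stats, rules) {
  const total = stats.success + stats.failure;
  if (total >= rules.minRequests) {
    const errorRate = stats.failure / total;
    if (errorRate > rules.maxErrorRate) return 0; // unhealthy: roll back
    return Math.min(100, current + rules.step); // healthy: promote gradually
  }
  return current; // not enough data yet, keep the current split
}
```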

&lt;p&gt;Although the focus of Fleet is not website hosting, you can also handle &lt;a href="https://github.com/fleetfn/examples/tree/master/react-ssr" rel="noopener noreferrer"&gt;server-side rendering with React&lt;/a&gt;: deploy the static files to an S3 bucket and use the functions for routing.&lt;/p&gt;




&lt;p&gt;I invite you to visit our &lt;a href="https://fleetfn.com/" rel="noopener noreferrer"&gt;website&lt;/a&gt;, our &lt;a href="https://fleetfn.com/docs/get-started.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, and the &lt;a href="https://github.com/fleetfn/examples" rel="noopener noreferrer"&gt;examples repository&lt;/a&gt;. Feel free to explore. If this interests you and you are curious to test it, we are in the private beta phase, with some people already testing, and we send invitations every week. Registering is very easy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to &lt;a href="https://console.fleetfn.com" rel="noopener noreferrer"&gt;console.fleetfn.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Continue with GitHub, and you should soon receive an invitation email&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to prioritize your email in the early access list, you can fill out our &lt;a href="https://fleethq.typeform.com/to/I3p8md" rel="noopener noreferrer"&gt;quick questionnaire&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We publish our changelog every week. You can follow along on our &lt;a href="https://twitter.com/fleetfn" rel="noopener noreferrer"&gt;twitter @fleetfn&lt;/a&gt;, which includes short videos of the main features, and we always post a more detailed description on our changelog page at &lt;a href="https://fleetfn.com/changelog" rel="noopener noreferrer"&gt;fleetfn.com/changelog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>node</category>
      <category>javascript</category>
      <category>faas</category>
    </item>
    <item>
      <title>Design System: Compositional philosophy of components</title>
      <dc:creator>Matuzalém Teles</dc:creator>
      <pubDate>Sat, 22 Feb 2020 20:14:02 +0000</pubDate>
      <link>https://dev.to/matuzalemsteles/design-system-compositional-philosophy-of-components-1cc4</link>
      <guid>https://dev.to/matuzalemsteles/design-system-compositional-philosophy-of-components-1cc4</guid>
      <description>&lt;p&gt;Products evolve quickly within a large organization, companies need to move fast, build consistently, deliver new products and maintain existing ones. As part of all this, the solution adopted is to build a Design System, rooted in the principles of common patterns, colors, typography and grid.&lt;/p&gt;

&lt;p&gt;The great challenge for a team that materializes the design system into components is keeping up with the fast pace of the company while continuing to deliver value through the components to the product teams. Some of an organization's developers want to go beyond the reference implementation because products evolve, while others just want to follow it.&lt;/p&gt;

&lt;p&gt;There is a big challenge in this environment. On the design side, the Design System team can take different approaches: tie the design to specific component cases, create just the foundation (e.g. colors, typography, spacing, grid, layouts...), or cover both. There are disadvantages and advantages in each case, and it is up to you to understand which works best in the context of your organization.&lt;/p&gt;

&lt;p&gt;On the other hand, developers of the component library can take different approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create components providing only the cases of the Design System, restricting the use of the component to cases other than the one defined.&lt;/li&gt;
&lt;li&gt;Create components with high flexibility, allowing developers to deviate from the defined cases when the product design thinks beyond what is defined.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result can be bad on both sides: we can frustrate developers, who may have to create their own component or do a lot of work on top of the flexible components to reach the specific case designed by their team's designer; and the Design System can block the designer's creative mind because the component definitions are fixed.&lt;/p&gt;

&lt;p&gt;Correcting and dealing with this is complex, but what should we do? In our company (&lt;a href="http://liferay.com" rel="noopener noreferrer"&gt;Liferay&lt;/a&gt;), we previously followed the approach of components fixed to the Design System, not allowing developers to go far beyond what was expected. In the context of a company with more than 300 engineers and several product teams, this was a bad decision and resulted in low adoption of the components, for a few reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The components were too attached to the Design system&lt;/li&gt;
&lt;li&gt;Little flexibility&lt;/li&gt;
&lt;li&gt;Designers created components beyond implementation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As a result, our components had large APIs with low usage and high configuration complexity, which increased maintenance costs and pushed them into the deprecation phase very quickly.&lt;/p&gt;

&lt;p&gt;We knew this was a bad decision, and we quickly switched to another approach the following year: striking a balance between flexibility and specialized components in our component library.&lt;/p&gt;

&lt;p&gt;Dealing with this may seem easier, but how do we materialize the idea? We follow a hybrid approach to our components, we call this the &lt;strong&gt;Multi-Layered API library&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-Layered API library
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45wx3u7cxgoieff5q3z1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45wx3u7cxgoieff5q3z1.png" width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Multi-Layered components mean that we have two ways to provide a component:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;low-level&lt;/strong&gt; - Basic building blocks to provide flexibility so that you can customize and create high level components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;high-level&lt;/strong&gt; - Highly specific components that tend to cover only specific use cases, limiting their flexibility.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The principles are pretty basic, but for a component to be called low-level or high-level it needs to follow some laws.&lt;/p&gt;

&lt;h4&gt;
  
  
  Low-level
&lt;/h4&gt;

&lt;p&gt;Low-level components follow composition: small blocks that build up, for example, a DropDown component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayDropDown&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayDropDown&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Action&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayDropDown&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Item&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayDropDown&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ItemList&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayDropDown&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Search&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  High-level
&lt;/h4&gt;

&lt;p&gt;High-level components may also follow composition, but they are more specific components that capture something common to many teams.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayButtonWithIcon&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayCardWithHorizontal&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayCardWithNavigation&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ClayDropDownWithItems&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;High-level components are built from low-level ones; this can reduce maintenance effort, but it increases the surface of available APIs.&lt;/p&gt;
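&lt;p&gt;The pattern can be sketched in plain JavaScript (the code below is a hypothetical illustration of the idea, not the real Clay API): the high-level component simply delegates to the low-level pieces, so callers get a single prop-driven API.&lt;/p&gt;

```javascript
// Hypothetical sketch: a "high-level" component implemented by
// composing "low-level" primitives. Names are illustrative and
// not the real Clay API.
function DropDownItem({ label }) {
  return { type: "item", label };
}

function DropDown({ trigger, items }) {
  return { type: "dropdown", trigger, items };
}

// High-level wrapper: one prop-driven API built on the pieces above.
// Callers pass plain data and never touch the low-level parts.
function DropDownWithItems({ trigger, labels }) {
  return DropDown({
    trigger,
    items: labels.map((label) => DropDownItem({ label })),
  });
}

console.log(DropDownWithItems({ trigger: "Open", labels: ["Edit", "Delete"] }));
```

&lt;p&gt;A team that needs more control can still reach for the low-level pieces directly; the high-level wrapper only packages the common case.&lt;/p&gt;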

&lt;p&gt;The benefit is that you end up with a hybrid approach that wins wider adoption across many teams with different tastes.&lt;/p&gt;

&lt;p&gt;You can read more about our &lt;a href="https://clayui.com/docs/foundations/composing.html" rel="noopener noreferrer"&gt;composition approach in our component library's documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The result of this approach was high adoption of our components across different teams and products with different contexts, helping teams deliver faster and be happier.&lt;/p&gt;

&lt;p&gt;This seems to solve the problems at the user level, but it pulled us into several discussions about how to differentiate, build, and structure the low-level and high-level components. I have gathered some of my thoughts on this, drawn from trying to follow a theory, or at least something conceptual, and adjusting it over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tail theory
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Do not confuse this with The Long Tail Effect theory.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Tail theory is a rope analogy: the rope has two ends, or tails, and each type of component, low-level and high-level, sits at one of them. The distance between the ends can cause great pain or great success; it is all or nothing here!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9h2ddgdlaf6j4czc47s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs9h2ddgdlaf6j4czc47s.png" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The extremes can be very painful or very straightforward: a high-level component attached to a specific use case can bring happiness to a team that follows its definition and a lot of pain to a team that does not.&lt;/li&gt;
&lt;li&gt;For those in pain, the pain gets bigger because the low-level components sit at the other end of the rope; building from low-level up to something close to high-level can be painful.&lt;/li&gt;
&lt;li&gt;Extremely high-level components may see little adoption, since they target specific cases and allow no change outside of what was specified.&lt;/li&gt;
&lt;li&gt;Low-level components tend to have a long life because they are more flexible, but they naturally require more work.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More rigid components tend to change more over time, and their life cycle tends to be shorter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhy57z7xui2p6bn13le0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhy57z7xui2p6bn13le0.png" width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This graph is hypothetical; no real data was used. It is based on my experience over the years working with component libraries.&lt;/p&gt;

&lt;p&gt;One peculiar thing: a low-level component may work very well in the short and long term with few changes, which would be the ideal scenario for us. But in the middle there is something we can lose: low effort and a good development experience, the key points that lead people to adopt a library's components and build without much work.&lt;/p&gt;

&lt;p&gt;Very specific components can change a lot in a short period of time, and at some point we may have to deprecate one because it has swelled. This can happen with any component, but here we face maintenance issues and a constant fight to update things before people start using them. If we extend the life of these components and decrease their maintenance cost, we can focus on improving or building things beyond the components themselves.&lt;/p&gt;

&lt;p&gt;So imagine pushing a component closer and closer to the middle of the rope: as the distance between the two ends decreases, the pain at the ends is reduced, but if they get too close there is no longer a clear difference between them, and that creates confusion. Each time we give the high-level components some flexibility, we push them toward the middle of the rope; the experience gets better and the pain can decrease.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88fvzpeop18wxas1tf7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88fvzpeop18wxas1tf7f.png" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that we do not want to join the two ends, only to bring them closer. The tail is the extreme, and the extreme has a price; we just want to move away from it. That means offering some flexibility to the high-level components and reducing the flexibility of the low-level ones.&lt;/p&gt;
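&lt;p&gt;Concretely, "offering some flexibility" to a high-level component can be as small as one escape-hatch prop. The sketch below is hypothetical, not the real Clay API:&lt;/p&gt;

```javascript
// Hypothetical sketch: pulling a high-level component toward the
// middle of the rope with one small escape hatch. Names are
// illustrative and not the real Clay API.
function DropDownWithItems({ labels, renderItem }) {
  // The default keeps the simple, specific prop-driven API...
  const render = renderItem || ((label) => ({ type: "item", label }));
  return { type: "dropdown", items: labels.map(render) };
}

// ...but a team with a different use case is no longer forced all
// the way down to the low-level primitives:
const custom = DropDownWithItems({
  labels: ["Edit"],
  renderItem: (label) => ({ type: "item", label, bold: true }),
});
console.log(custom.items);
```

&lt;p&gt;The escape hatch widens the API surface slightly, but it keeps teams with unusual cases inside the high-level component instead of pushing them back to the other end of the rope.&lt;/p&gt;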

&lt;p&gt;With that in mind, we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increase the longevity of high-level components.&lt;/li&gt;
&lt;li&gt;Make fewer changes over time.&lt;/li&gt;
&lt;li&gt;As a result, support more use cases.&lt;/li&gt;
&lt;li&gt;Make people happier.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F903k9f92glvd9w7uyiip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F903k9f92glvd9w7uyiip.png" width="800" height="615"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although the greatest benefit falls on the high-level components, the low-level ones are also affected: once we take away some of their natural flexibility, the amount of change over time increases slightly, and so does their maintenance cost. This is necessary, though, since we have to strike a balance so that the disparity between the two is not stark.&lt;/p&gt;

&lt;p&gt;I believe it is easy to stick to this theory. Once we understand it, it becomes natural to identify when a component needs more flexibility and when we need to hold its API stable.&lt;/p&gt;

&lt;p&gt;Our Liferay component library is open source, and you can access it on GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="http://github.com/liferay/clay" rel="noopener noreferrer"&gt;http://github.com/liferay/clay&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Site: &lt;a href="http://clayui.com" rel="noopener noreferrer"&gt;http://clayui.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Design System: &lt;a href="https://liferay.design/lexicon/" rel="noopener noreferrer"&gt;https://liferay.design/lexicon/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have been working on this for two and a half years, and I would be very happy to hear your thoughts and experiences.&lt;/p&gt;

&lt;p&gt;Our GitHub repository is full of very interesting thoughts and discussions. Explore our issues and PRs 🙂.&lt;/p&gt;

&lt;p&gt;Follow + Say Hi! 👋 &lt;a href="https://twitter.com/MatuzalemTeles" rel="noopener noreferrer"&gt;Connect with me on Twitter 🐦&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>react</category>
      <category>javascript</category>
      <category>css</category>
    </item>
    <item>
      <title>Introducing Hole, a new serverless technology for Node.js</title>
      <dc:creator>Matuzalém Teles</dc:creator>
      <pubDate>Sun, 16 Feb 2020 05:06:02 +0000</pubDate>
      <link>https://dev.to/matuzalemsteles/introducing-hole-a-new-serverless-technology-for-node-js-5b4j</link>
      <guid>https://dev.to/matuzalemsteles/introducing-hole-a-new-serverless-technology-for-node-js-5b4j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhole.build%2Fimages%2Fblog%2Fintroducing-hole%402x.png" class="article-body-image-wrapper"&gt;&lt;img alt="Introducing New Serverless Technology for Node.js" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhole.build%2Fimages%2Fblog%2Fintroducing-hole%402x.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Publication originally posted at &lt;a href="https://hole.build/blog/2020/02/16/introducing-hole-serverless.html" rel="noopener noreferrer"&gt;https://hole.build/blog/2020/02/16/introducing-hole-serverless.html&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Introducing the new generation of serverless technology for &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;: efficient, with near-zero cold starts, at &lt;a href="https://hole.build" rel="noopener noreferrer"&gt;hole.build&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Managing servers with a complex infrastructure, keeping a professional team focused only on monitoring, and spending many hours deciding how to scale and support large peaks in access to an application's APIs has always been the cost of high availability on fast-growing projects.&lt;/p&gt;

&lt;p&gt;Over time, several technologies and patterns have been created to deal with this. Infrastructure is one of the most critical parts of a product: when it is not well thought out and orchestrated, it can become a big headache for a fast-growing company, and it demands qualified professionals for monitoring and security. For a small startup this can be a big cost, because at that stage they need to grow quickly, focus on their product, validate it, win their first customers, and start selling.&lt;/p&gt;

&lt;p&gt;Maintaining infrastructure, monitoring, and a server team to keep the product running can be very expensive, and paying for services that sit unused or idle in times of low traffic can hurt the company's balance sheet.&lt;/p&gt;

&lt;p&gt;A few years ago, the &lt;a href="https://en.wikipedia.org/wiki/Serverless_computing" rel="noopener noreferrer"&gt;"serverless"&lt;/a&gt; (FaaS) movement and technologies began to emerge, with a view to solving these types of problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;auto-scaling&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scaling down&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;zero servers&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;without complex infrastructure&lt;/strong&gt;,&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pay only for resources when used&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is beautiful, and it looks like the best of all worlds to start building a product on, &lt;strong&gt;but it came with one main disadvantage: performance&lt;/strong&gt;. Functions that are not executed frequently can suffer higher response latency than code running continuously on a server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhole.build%2Fimages%2Fblog%2Fother-platforms%402x.png" class="article-body-image-wrapper"&gt;&lt;img alt="Provisioning of the functions from other serverless platforms" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhole.build%2Fimages%2Fblog%2Fother-platforms%402x.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On other serverless platforms, when a request arrives, the platform provisions a container with your function's runtime; the wait before your function can actually start executing and process the request is called a cold start. The container is then kept on "hold" for some time so it can process another request without a cold start, but when new requests arrive while these containers are occupied, more containers have to be provisioned, each starting with a cold start.&lt;/p&gt;

&lt;p&gt;At Hole we built our technology to solve some of the main problems of serverless: &lt;strong&gt;performance&lt;/strong&gt;, &lt;strong&gt;security&lt;/strong&gt;, &lt;strong&gt;monitoring&lt;/strong&gt; and &lt;strong&gt;debugging&lt;/strong&gt;. Our functions run with near-zero cold starts, we limit and add more layers of security to the environments where functions execute, and we show detailed metrics on successful and failed requests along with insights into the performance of your code. Beyond improving serverless technology itself, we care deeply about the experience of using it: the console, the design, and friendly docs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhole.build%2Fimages%2Fblog%2Fhole-technology%402x.png" class="article-body-image-wrapper"&gt;&lt;img alt="Asynchronous requests for Hole functions" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fhole.build%2Fimages%2Fblog%2Fhole-technology%402x.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our functions run with near-zero cold starts, and a function can be configured to handle more than one asynchronous request, raising the limits before a new instance of it has to be provisioned. You can &lt;a href="https://hole.build/docs/how-it-works.html" rel="noopener noreferrer"&gt;read more about how our technology works in our documentation&lt;/a&gt;.&lt;/p&gt;
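&lt;p&gt;To illustrate the idea of one warm instance serving overlapping asynchronous requests, here is a generic Node.js sketch; this is illustrative only and not Hole's actual API:&lt;/p&gt;

```javascript
// Generic sketch of a Node.js serverless-style handler. This is
// illustrative only and not Hole's actual API.
async function handler(request) {
  // Asynchronous work (I/O, network calls) yields the event loop,
  // so one warm instance can serve overlapping requests without
  // triggering a new cold start for each of them.
  await new Promise((resolve) => setTimeout(resolve, 10));
  return { statusCode: 200, body: "Hello, " + request.name + "!" };
}

// Two requests processed concurrently by the same instance.
Promise.all([handler({ name: "a" }), handler({ name: "b" })]).then(
  (responses) => console.log(responses.map((r) => r.body))
);
```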

&lt;p&gt;These are some of the crucial points we are attacking, but we want to go further in improving how companies interact and work with serverless technologies. This is just the beginning, and we have many things we want to show. It will be a long journey, and we are excited to share our learnings and thoughts as we progress.&lt;/p&gt;

&lt;p&gt;Today, we are starting to accept teams and companies for our private alpha. If you are interested in joining early and influencing the direction of Hole, &lt;a href="https://hole.build" rel="noopener noreferrer"&gt;sign up here&lt;/a&gt; and &lt;a href="https://twitter.com/holehq" rel="noopener noreferrer"&gt;follow us on Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cloud</category>
      <category>node</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
