<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Robert Gibb</title>
    <description>The latest articles on DEV Community by Robert Gibb (@gibbiv).</description>
    <link>https://dev.to/gibbiv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F21455%2F50ece8bc-a49b-4168-aa95-536604c24b86.jpeg</url>
      <title>DEV Community: Robert Gibb</title>
      <link>https://dev.to/gibbiv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gibbiv"/>
    <language>en</language>
    <item>
      <title>Building Our E-Commerce Platform with Serverless FaaS</title>
      <dc:creator>Robert Gibb</dc:creator>
      <pubDate>Sun, 26 Sep 2021 22:57:33 +0000</pubDate>
      <link>https://dev.to/fabric_commerce/building-our-e-commerce-platform-with-serverless-faas-fcl</link>
      <guid>https://dev.to/fabric_commerce/building-our-e-commerce-platform-with-serverless-faas-fcl</guid>
      <description>&lt;p&gt;&lt;em&gt;This post was published here on behalf of &lt;a href="https://www.linkedin.com/in/devashish90/" rel="noopener noreferrer"&gt;Devashish Pandey&lt;/a&gt;, a lead software development engineer at &lt;a href="https://fabric.inc" rel="noopener noreferrer"&gt;fabric&lt;/a&gt; who previously worked on platform and API development at SecureDB and StormDB.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With so many "something-as-a-service" options at our disposal for infrastructure, in 2017 we opted for &lt;a href="https://www.cloudflare.com/learning/serverless/glossary/function-as-a-service-faas/" rel="noopener noreferrer"&gt;function-as-a-service (FaaS)&lt;/a&gt; to build our platform of e-commerce services. Our mission was, and remains, to build for e-commerce services what AWS built for web services, or what we call the commerce fabric of the internet.&lt;/p&gt;

&lt;p&gt;Building with FaaS and only paying for function execution time rather than constantly running servers (i.e. &lt;a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/" rel="noopener noreferrer"&gt;serverless&lt;/a&gt;) allowed us to serve customers early on without incurring high infrastructure costs. These customers were large, household names and referrals from our founding team who had &lt;a href="https://resources.fabric.inc/blog/answers/staples-b2b-commerce" rel="noopener noreferrer"&gt;transformed digital at Staples&lt;/a&gt;. Part of this transformation involved moving from the monolithic e-commerce platform IBM Websphere to a &lt;a href="https://resources.fabric.inc/blog/answers/ecommerce-microservices-architecture" rel="noopener noreferrer"&gt;service-oriented architecture&lt;/a&gt; with open-source and custom-built software.&lt;/p&gt;

&lt;p&gt;E-commerce veterans who knew our founding team wanted to transform digital for their companies, which led us to build our platform of modular commerce services in 2017. Using serverless FaaS with &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; supported these services reliably and efficiently, and it continued to support them through 2020 when we raised &lt;a href="https://news.crunchbase.com/news/next-top-brand-fabric-closes-9-5m-seed-for-e-commerce-platform/" rel="noopener noreferrer"&gt;our $9.5M seed round&lt;/a&gt; and attracted customers outside of our circle of friends.&lt;/p&gt;

&lt;p&gt;But as we raise more money (most recently &lt;a href="https://www.bloomberg.com/news/articles/2021-07-20/e-commerce-tech-startup-fabric-raises-100-million-in-new-round" rel="noopener noreferrer"&gt;our $100M Series B&lt;/a&gt;) and onboard global brands &lt;a href="https://www.capterra.com/p/215448/Fabric/#reviews" rel="noopener noreferrer"&gt;like GNC&lt;/a&gt;, life on the serverless backend is not as easy as it once was. As a result, some challenges that were once inconsequential are now more pronounced. In this post, I will explain how we address these serverless challenges and show you how serverless FaaS powers &lt;a href="https://fabric.inc/" rel="noopener noreferrer"&gt;our platform of commerce services&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Serverless Architecture
&lt;/h2&gt;

&lt;p&gt;The current iteration of the fabric platform can be hosted on AWS, GCP, or Azure, and even on Knative, since all of these are supported by the &lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;serverless framework&lt;/a&gt; we use. However, in 2017, when we adopted serverless, AWS had the &lt;a href="https://www.veritis.com/blog/cloud-computing-maturity-model/" rel="noopener noreferrer"&gt;most mature&lt;/a&gt; offering, so choosing Lambda for FaaS was an easy decision.&lt;/p&gt;

&lt;p&gt;Flash forward to today: we still use Lambda and sparingly route requests through &lt;a href="https://aws.amazon.com/fargate/" rel="noopener noreferrer"&gt;AWS Fargate&lt;/a&gt;, a container-as-a-service (CaaS) environment. Fargate is mainly used when we can't meet latency service level agreements (SLAs) with Lambda due to cold start problems. However, since Lambda powers 95% of all processes within our platform, we've found ways to address these issues, which I'll cover below.&lt;/p&gt;

&lt;p&gt;As for the way our products are structured, there is a data store layer, followed by a business layer in which most logic is programmed, just as in any other backend. Lambda orchestrates event-driven workflows at the business and data storage layers.&lt;/p&gt;

&lt;p&gt;The actual product offerings consist of commerce applications like a &lt;a href="https://fabric.inc/pim" rel="noopener noreferrer"&gt;product information manager (PIM)&lt;/a&gt; and &lt;a href="https://fabric.inc/oms" rel="noopener noreferrer"&gt;order management system (OMS)&lt;/a&gt;, referred to as Co-Pilot Apps in the diagram below. (Co-Pilot is the name of the UI that merchandisers, marketers, and other business users interact with.) We also offer &lt;a href="https://fabric.inc/storefront" rel="noopener noreferrer"&gt;e-commerce storefronts&lt;/a&gt; that fast-track storefront development. Each type of offering has a separate yet connected business layer. &lt;/p&gt;

&lt;p&gt;At the top level, each layer runs on Lambda in a serverless manner. API gateway routes connect the layers, while the logic layer and the external services layer communicate through integrations.&lt;/p&gt;

&lt;p&gt;Every action, process module, and element of the platform is isolated into standalone functions in Lambda. The resulting layout achieves full separation of concerns, with a single gateway route connecting to a single layer action/process at any given time. This eliminates the most common resource-sharing issues and allows each action/module to scale independently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fresources.fabric.inc%2Fhs-fs%2Fhubfs%2Fcopilot-apps-1.png%3Fwidth%3D1490%26name%3Dcopilot-apps-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fresources.fabric.inc%2Fhs-fs%2Fhubfs%2Fcopilot-apps-1.png%3Fwidth%3D1490%26name%3Dcopilot-apps-1.png" alt="copilot-apps-1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Serverless
&lt;/h2&gt;

&lt;p&gt;There are numerous tools engineers can use to manage serverless setups of the type I've discussed thus far. However, mainly because it was one of the earliest entrants in the market, we decided to use &lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;the Serverless Framework&lt;/a&gt; to manage serverless infrastructure through &lt;a href="https://serverless-stack.com/chapters/what-is-infrastructure-as-code.html" rel="noopener noreferrer"&gt;infrastructure as code (IaC)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Adopting the Serverless Framework as our infrastructure modeling and standardization tool enables quick spin-ups of environments. Further, the vendor-agnostic nature of the framework leaves room for a migration toward a cloud-agnostic, client-requested offering down the road. Using the Serverless Framework, we could support this offering while maintaining the same variables across a rapidly growing number of fabric platform instances.&lt;/p&gt;

&lt;p&gt;In addition to these benefits, we found getting started with the Serverless Framework incredibly easy, even for a robust platform of applications &lt;a href="https://api.fabric.inc/" rel="noopener noreferrer"&gt;and APIs&lt;/a&gt; like ours. If you want to try it yourself, just launch your terminal and install the &lt;a href="https://www.npmjs.com/package/serverless" rel="noopener noreferrer"&gt;serverless package&lt;/a&gt; with npm. After that, a single serverless command sets your environment up for configuration and management through the &lt;a href="https://www.serverless.com/framework/docs/providers/aws/cli-reference/config" rel="noopener noreferrer"&gt;framework's CLI&lt;/a&gt;. This creates a serverless.yml file, which is where it all started for us in 2017!&lt;/p&gt;

&lt;p&gt;Below is what a typical serverless.yml configuration looks like with the Serverless Framework. The configuration is written in YAML, in this case for a Node.js app running on a MongoDB data store layer. This setup is similar to our own.&lt;/p&gt;
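&lt;p&gt;&lt;em&gt;For readers who can't load the image below, here is a minimal text sketch of such a configuration. The service, function, and handler names are hypothetical, not fabric's actual setup.&lt;/em&gt;&lt;/p&gt;

```yaml
# Hypothetical serverless.yml for a Node.js service on AWS.
# Names and values are illustrative, not fabric's actual configuration.
service: orders-service

provider:
  name: aws                 # cloud provider; could be google, azure, etc.
  runtime: nodejs14.x
  region: us-east-1
  environment:
    MONGODB_URI: ${env:MONGODB_URI}   # data store connection string

functions:
  createOrder:
    handler: src/orders.create        # each function maps to one handler
    events:
      - http:                         # exposed through an API gateway route
          path: /orders
          method: post
  getOrder:
    handler: src/orders.get
    events:
      - http:
          path: /orders/{id}
          method: get
```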

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fresources.fabric.inc%2Fhs-fs%2Fhubfs%2Fserverless.yml-file-configuration-1.png%3Fwidth%3D1532%26name%3Dserverless.yml-file-configuration-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fresources.fabric.inc%2Fhs-fs%2Fhubfs%2Fserverless.yml-file-configuration-1.png%3Fwidth%3D1532%26name%3Dserverless.yml-file-configuration-1.png" alt="serverless.yml-file-configuration-1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The provider section of the config file is where the cloud provider is specified. We continue to use AWS as our go-to provider, as we've found that other providers create capacity bottlenecks as we continue supporting &lt;a href="https://resources.fabric.inc/blog/ecommerce-api-security" rel="noopener noreferrer"&gt;more of the world's largest retailers&lt;/a&gt; and brands. Once AWS is selected, each function across the implementation is associated with a handler and specific API gateway routes.&lt;/p&gt;

&lt;p&gt;Using this template has dramatically improved the accuracy and speed of infrastructure configurations as we scale with serverless.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Challenges 
&lt;/h2&gt;

&lt;p&gt;As good as the story sounds so far, we have experienced our fair share of challenges with serverless and continue to experiment with ways to maintain business efficiencies while improving capacity and performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Capacity
&lt;/h3&gt;

&lt;p&gt;While our current setup has enough capacity to support existing customers, there's always concern about having enough serverless infrastructure to support more and larger customers, especially during &lt;a href="https://www.cnbc.com/2019/12/25/reuters-america-corrected-record-online-sales-give-u-s-holiday-shopping-season-a-boost-report.html" rel="noopener noreferrer"&gt;holiday shopping bursts&lt;/a&gt;. To get ahead of this, we are experimenting with augmenting Lambda with &lt;a href="https://aws.amazon.com/eks/" rel="noopener noreferrer"&gt;Amazon EKS&lt;/a&gt; using serverless containers with AWS Fargate.&lt;/p&gt;

&lt;p&gt;Cloud-agnostic support is an option we are exploring as well. But before allowing customers to dictate their cloud provider, we need to better understand capacity with GCP and Azure. With customers calling the shots for infrastructure, capacity issues could also arise with AWS. For instance, if a customer selects an AWS region of deployment outside USA East and West, &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html" rel="noopener noreferrer"&gt;burst concurrency diminishes&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;p&gt;What makes serverless efficient for your business can cause inefficiencies with application performance, particularly when using Lambda functions. These inefficiencies occur during the cold start duration when Lambda is downloading your code and starting a new execution environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fresources.fabric.inc%2Fhs-fs%2Fhubfs%2Flambda-performance.png%3Fwidth%3D855%26name%3Dlambda-performance.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fresources.fabric.inc%2Fhs-fs%2Fhubfs%2Flambda-performance.png%3Fwidth%3D855%26name%3Dlambda-performance.png" alt="lambda-performance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To reduce inefficiencies, we have tested and implemented &lt;a href="https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/" rel="noopener noreferrer"&gt;several performance optimizations&lt;/a&gt;. One workaround we use for avoiding cold starts is periodic warmup events that counteract environment timeouts. We are also exploring &lt;a href="https://aws.amazon.com/lambda/edge/#:~:text=Lambda%40Edge%20is%20a%20feature,improves%20performance%20and%20reduces%20latency.&amp;amp;text=With%20Lambda%40Edge%2C%20you%20can,all%20with%20zero%20server%20administration." rel="noopener noreferrer"&gt;Lambda@Edge&lt;/a&gt; (&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html" rel="noopener noreferrer"&gt;Lambda + Amazon Cloudfront&lt;/a&gt;) to offset any latency caused by Lambda. While doing this, however, we are trying not to undermine the cost benefits for which Lambda was chosen in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stick with Serverless?
&lt;/h2&gt;

&lt;p&gt;Despite these challenges, we remain loyal to serverless infrastructure and to putting product development ahead of server management. As one of the fastest-growing tech startups in e-commerce this year, we have found that serverless serves us well, supporting new customers while enabling fast iterations of new commerce services like &lt;a href="https://fabric.inc/subscriptions" rel="noopener noreferrer"&gt;Subscriptions&lt;/a&gt; and &lt;a href="https://fabric.inc/member" rel="noopener noreferrer"&gt;Loyalty Management&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;While AWS Lambda is not yet perfect, we have noticed that, as fabric grows, so too does Lambda in terms of capacity. This gives us breathing room to test iterations of Lambda and other serverless offerings from AWS while continuing conversations and experimentation around matching the right architecture with the right infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>architecture</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How Event Driven Systems Work in Commerce</title>
      <dc:creator>Robert Gibb</dc:creator>
      <pubDate>Tue, 20 Oct 2020 13:46:50 +0000</pubDate>
      <link>https://dev.to/fabric_commerce/how-event-driven-systems-work-in-commerce-2a54</link>
      <guid>https://dev.to/fabric_commerce/how-event-driven-systems-work-in-commerce-2a54</guid>
      <description>&lt;p&gt;&lt;strong&gt;Author's Note:&lt;/strong&gt; This post was created in collaboration with &lt;a href="https://dev.to/mcsh"&gt;Sajjad Heydari&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Assume someone is waiting for a package to be delivered to their house. The impatient customer keeps looking out the window, waiting for the delivery truck to pull up. The patient customer keeps doing other things until they hear the doorbell ring. This everyday example maps easily onto the computer science domain, where two types of systems exist: polling systems and event driven systems.&lt;/p&gt;

&lt;p&gt;The polling system acts like the impatient customer. In a commerce-related scenario, it keeps polling the system for new updates such as orders and payment authorizations while the event driven system relies on asynchronous event handlers to notify it of updates in the system.&lt;/p&gt;

&lt;p&gt;The event driven system is easier to develop and more efficient, but it requires special infrastructure to work. The good news is that some software and platforms &lt;a href="https://fabric.inc/"&gt;like Fabric&lt;/a&gt; support event driven systems. In commerce, these systems create more streamlined operations so customers get their orders faster.&lt;/p&gt;

&lt;p&gt;In this post, we will define what events are and how retailers can leverage them to create a system that is more efficient. But, before doing that, let’s look at an example of how an event-driven system might support a retailer and its customers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example of an Event Driven System
&lt;/h2&gt;

&lt;p&gt;After adding multiple items to his cart, Joe goes to the checkout page on a retailer’s website. He fills in his address and payment information, then places his order. But what happens next?&lt;/p&gt;

&lt;p&gt;With a polling system, Joe's order is stored in a database. Some time later, a script or person queries the database for any new orders and processes the payment. Another script or person in the warehouse then queries the database and starts the delivery procedure.&lt;/p&gt;

&lt;p&gt;This way of handling payment and fulfillment is slow and prone to error. Now more than ever, handling events in real time is important in commerce given the &lt;a href="https://www.cnbc.com/2019/04/02/online-shopping-officially-overtakes-brick-and-mortar-retail-for-the-first-time-ever.html"&gt;rise in online ordering&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Polling a system periodically can create a bottleneck in operations, especially if there is an unexpected spike in order activity. Furthermore, querying a database when no new orders have been placed results in unnecessary infrastructure costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In an event-driven system, here is how Joe's order is processed:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Joe's order is stored in a database like before, but this time an event is emitted to an event router.&lt;/li&gt;
&lt;li&gt; The event router looks up what can handle this type of event and discovers that the payment processing script is responsible for it.&lt;/li&gt;
&lt;li&gt; The payment processing script calls the payment processor (e.g. Stripe) with the appropriate payment data.&lt;/li&gt;
&lt;li&gt; The payment processing script emits a payment authorized event. This event initiates an order confirmation email to Joe and a notification to the warehouse for fulfillment.&lt;/li&gt;
&lt;/ol&gt;
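&lt;p&gt;&lt;em&gt;The four steps above can be sketched with a minimal in-memory router. The event types, handler wiring, and stubbed payment call are hypothetical:&lt;/em&gt;&lt;/p&gt;

```javascript
// Minimal in-memory event router illustrating the four steps above.
const handlers = {};                      // maps event type to its handlers
const on = (type, fn) => (handlers[type] = handlers[type] || []).push(fn);
const emit = (type, detail) => (handlers[type] || []).forEach((fn) => fn(detail));

const sentEmails = [];
const warehouseQueue = [];

// Step 2: the router knows the payment script handles "order.created".
on('order.created', (order) => {
  // Step 3: call the payment processor (stubbed; a real system would call Stripe).
  const authorized = order.orderTotal > 0;
  // Step 4: emit a payment authorized event.
  if (authorized) emit('payment.authorized', order);
});

on('payment.authorized', (order) => {
  sentEmails.push(`Order ${order.orderId} confirmed`); // confirmation email to Joe
  warehouseQueue.push(order.orderId);                  // notify warehouse for fulfillment
});

// Step 1: storing Joe's order also emits an event to the router.
emit('order.created', { orderId: 'joe-001', orderTotal: 188.4 });
```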

&lt;h2&gt;
  
  
  Parts of an Event Driven System
&lt;/h2&gt;

&lt;h4&gt;
  
  
  The Event
&lt;/h4&gt;

&lt;p&gt;Event based systems in computer science go back to the 1950s, when they were first designed to handle asynchronous events such as &lt;a href="https://en.wikipedia.org/wiki/Input/output"&gt;Input/Output&lt;/a&gt;. &lt;a href="https://en.wikipedia.org/wiki/Edsger_W._Dijkstra"&gt;Edsger Wybe Dijkstra&lt;/a&gt; designed the first interrupt handler on the &lt;a href="https://en.wikipedia.org/wiki/Electrologica_X1"&gt;Electrologica X1&lt;/a&gt;, a design still used in modern systems to this day. Today's events are the same concept, only used in different environments.&lt;/p&gt;

&lt;p&gt;When a CPU interrupt emerges, the system changes its behavior to address the incoming interrupt. This could be a new keystroke pressed or a notification that this process is about to be terminated. The interrupt doesn't tell the process what to do, only that some event has happened. It is the process's duty to handle the interrupt and decide what to do with it.&lt;/p&gt;

&lt;p&gt;The same thing happens with events in event driven systems. When an event is created, the event itself shouldn't decide what happens; it should only describe what has happened, be it a change in state or an update. For instance, in a modern commerce tech stack, the event router may send an abandoned cart event to an email platform like Klaviyo that decides to enroll Joe in an abandoned cart email sequence.&lt;/p&gt;

&lt;h4&gt;
  
  
  The Event Router
&lt;/h4&gt;

&lt;p&gt;How an event is structured and emitted relies heavily on the event router. There are many different types of event routers, including &lt;a href="https://aws.amazon.com/eventbridge/"&gt;AWS EventBridge&lt;/a&gt;, &lt;a href="https://kafka.apache.org/"&gt;Apache Kafka&lt;/a&gt;, and &lt;a href="https://azure.microsoft.com/en-us/services/event-hubs/"&gt;Microsoft Event Hub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Events are JSON documents that follow a specific schema. The top-level fields are the same across all event types, but the detail of each event is application dependent.&lt;/p&gt;

&lt;p&gt;For example, if Joe successfully ordered an item from a retailer, his "order created" event would look something like this...&lt;/p&gt;

&lt;pre&gt;{
  "cartId": "314919ab8aed637123f804eb",
  "orderCurrency": "USD",
  "orderTotal": 188.40,
  "orderId": "314919ab8aed637123f804eb",
  "shipTo": [
    {
      "address": {
        "name": {
          "first": "John",
          "last": "Smith"
        },
        "phone": {
          "number": "5555555555",
          "kind": "mobile"
        },
        "email": "user1@gmail.com",
        "street1": "1250 main street",
        "street2": "",
        "city": "HAVRE DE GRACE",
        "state": "MD",
        "country": "US",
        "zipCode": "21078-3213",
        "kind": "shipping"
      },
      "shipmentCarrier": "FedEx"
    }
  ],
  "items": [
    {
      "itemId": 4912,
      "price": 122.43,
      "quantity": 1,
      "total": 122.43
    },
    {
      "itemId": 8123,
      "price": 21.99,
      "quantity": 3,
      "total": 65.97
    }
  ]
}&lt;/pre&gt;

&lt;p&gt;The most straightforward way to handle events in a custom way is by using a function-as-a-service (FaaS) solution like &lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt;, which natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby, and provides a Runtime API to allow the use of any other programming language.&lt;/p&gt;

&lt;p&gt;Lambda functions are easier to develop and cheaper to operate since AWS only charges for the time they are running. This model is known as &lt;a href="https://aws.amazon.com/serverless/"&gt;serverless&lt;/a&gt;, another technology we use at Fabric to make commerce more extensible and scalable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Event Driven Systems
&lt;/h2&gt;

&lt;p&gt;Event driven systems are ideal for commerce since many commerce setups contain different services and &lt;a href="https://dev.to/answers/ecommerce-microservices-architecture"&gt;microservices&lt;/a&gt; that benefit from being decoupled. Payments, promotions, pricing, subscriptions, search, and reviews are just a few of these services.&lt;/p&gt;

&lt;p&gt;Decoupling allows for a faster development cycle since each service can be developed independently, cheaper runtime since each service can be scaled independently, and &lt;a href="https://dev.to/blog/jamstack-ecommerce-story"&gt;faster page load times&lt;/a&gt;. Event driven systems facilitate the decoupling of services as different services can talk to each other through the event router without directly depending on each other.&lt;/p&gt;

&lt;p&gt;Another benefit of event driven systems is their ability to handle faulty situations. If two services rely on each other directly and one of them fails for any reason, the other can't function properly. With an event router in between, however, if a service becomes inaccessible for any reason (such as updates or errors), the router stores the event until it can be delivered to its intended handlers. For instance, even when the payment processing script is unavailable, the shopping cart remains functional.&lt;/p&gt;
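&lt;p&gt;&lt;em&gt;The buffering behavior can be sketched with a toy router (in-memory and illustrative only; real routers like EventBridge or Kafka persist events durably):&lt;/em&gt;&lt;/p&gt;

```javascript
// Toy router that buffers events for unavailable handlers and redelivers
// them on recovery. Illustrative only.
class BufferingRouter {
  constructor() {
    this.subscribers = {};  // event type: { handler, up }
    this.pending = {};      // event type: buffered events
  }
  subscribe(type, handler) {
    this.subscribers[type] = { handler, up: true };
  }
  setAvailable(type, up) {
    this.subscribers[type].up = up;
    if (up) {
      // Redeliver everything buffered while the handler was down.
      (this.pending[type] || []).forEach((e) => this.subscribers[type].handler(e));
      this.pending[type] = [];
    }
  }
  emit(type, event) {
    const sub = this.subscribers[type];
    if (sub && sub.up) {
      sub.handler(event);
    } else {
      // Store the event until it can be delivered.
      (this.pending[type] = this.pending[type] || []).push(event);
    }
  }
}
```

&lt;p&gt;&lt;em&gt;In this model the shopping cart can keep emitting order events while the payment handler is down; nothing is lost, and delivery is merely delayed.&lt;/em&gt;&lt;/p&gt;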

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post we provided an overview of how event driven systems streamline commerce. For a more detailed view of how events and event routers allow different services to communicate better in a commerce tech stack, check out &lt;a href="https://www.youtube.com/watch?v=TXh5oU_yo9M&amp;amp;feature=emb_title"&gt;this video from AWS&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>serverless</category>
      <category>architecture</category>
      <category>api</category>
    </item>
    <item>
      <title>Optimizing First-Frame Bitrate for HLS with Serverless Edge Compute</title>
      <dc:creator>Robert Gibb</dc:creator>
      <pubDate>Wed, 18 Dec 2019 20:25:01 +0000</pubDate>
      <link>https://dev.to/stackpath/optimizing-first-frame-bitrate-for-hls-with-serverless-edge-compute-jmi</link>
      <guid>https://dev.to/stackpath/optimizing-first-frame-bitrate-for-hls-with-serverless-edge-compute-jmi</guid>
      <description>&lt;p&gt;In this article we’ll show you how to use StackPath’s &lt;a href="https://www.stackpath.com/products/edge-computing/serverless-scripting/" rel="noopener noreferrer"&gt;serverless edge product&lt;/a&gt; with HLS to deliver the right bitrate to the right device with the lowest possible delay. Whether you’re using cloud serverless for HLS or a different streaming solution entirely, this article will introduce you to the possibilities of optimizing streams with serverless scripting and low-latency edge compute.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This tutorial is detailed and requires considerable effort to follow, but completing it can significantly reduce content delivery costs for you and buffer time for your users. It could be the catalyst for inspiring an evolution of how your video content is consumed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When responding to an HLS request, the streaming server determines which video quality (i.e. ts file) the client will attempt to play before switching to a lower or higher quality video. This switch depends on available bandwidth and device type.&lt;/p&gt;

&lt;p&gt;Starting with the wrong quality degrades the user experience considerably. For instance, sending a high quality file to a low-end device may cause the video or device to freeze, even on a good connection. And sending a low quality file to a high-end device with a good connection forces the viewer to endure low quality video for a prolonged period.&lt;/p&gt;

&lt;p&gt;It may seem that sending a medium quality file first is a good compromise, but it&#8217;s actually quite lazy. Instead, you can deliver the best starting quality in every case by using serverless scripting.&lt;/p&gt;

&lt;p&gt;Serverless scripting, also known as function-as-a-service (FaaS), allows you to optimize responses on a per-device, per-request basis without touching your origin server’s configuration or code. There’s also no loss of cacheability and you can decrease latency further by making the decision at the edge instead of in a far-off data center.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding how HLS works
&lt;/h3&gt;

&lt;p&gt;For developers who are not already aware, HLS works by sending the client an index, also known as a manifest. This index contains a list of ts files called chunks that make up the video. The client requests the appropriate chunk based on play time.&lt;/p&gt;

&lt;p&gt;In an adaptive bitrate implementation, the manifest provides links to several alternative playlists called variants instead of listing chunks directly. All variants have an identical number of chunks and video content, but they differ in bitrate and resolution (i.e. quality).&lt;/p&gt;

&lt;p&gt;According to the HLS spec, clients should request the top listed quality in the manifest, play it, and calculate the available bandwidth from the chunk size and download time. Based on that calculation, the client switches to a higher or lower quality until the bitrate of the video is lower than the maximum available bandwidth. This ensures seamless playback.&lt;/p&gt;
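&lt;p&gt;&lt;em&gt;That switching rule can be sketched as follows; the variant shape and numbers are illustrative, not any particular player's internals:&lt;/em&gt;&lt;/p&gt;

```javascript
// Sketch of the client's switching rule: measure bandwidth from the last
// chunk, then pick the highest-bitrate variant that fits under it.
function measureBandwidthBps(chunkBytes, downloadMs) {
  return (chunkBytes * 8) / (downloadMs / 1000); // bits per second
}

function pickVariant(variants, availableBps) {
  // Highest bitrate that stays below the available bandwidth...
  const fitting = variants
    .filter((v) => availableBps > v.bandwidth)
    .sort((a, b) => b.bandwidth - a.bandwidth);
  // ...falling back to the lowest variant when nothing fits.
  return fitting[0] || variants.reduce((lo, v) => (lo.bandwidth > v.bandwidth ? v : lo));
}
```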

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Falternative_hls_index_files.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Falternative_hls_index_files.png" alt="Sample HLS index listing three variants"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(The &lt;strong&gt;image&lt;/strong&gt; above is a sample HLS index listing three variants, each with a different bandwidth, resolution, and index file.)&lt;/p&gt;
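&lt;p&gt;&lt;em&gt;In plain text, a master playlist like the one pictured might look like this (the bandwidth, resolution, and file names are illustrative):&lt;/em&gt;&lt;/p&gt;

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
high/index.m3u8
```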

&lt;p&gt;To summarize, choosing the video quality based on the available bandwidth is the responsibility of the client, but choosing the default quality to start with is the responsibility of the server. This default quality is usually either the highest one, or simply the first one uploaded to the server. In all cases, it is the same regardless of which device requests the video and where the video is requested from.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the problems with HLS
&lt;/h3&gt;

&lt;p&gt;We’ve established that the default video quality often isn’t the best one to include in the initial response with HLS, but let’s look at some different scenarios to further understand why.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario #1:&lt;/strong&gt; High-End Device | High Quality Video | Low Bandwidth | Any Screen Size&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_scenario_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_scenario_1.png" alt="HLS scenario with a high-end device and low bandwidth"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this scenario, the device downloads the master playlist (playlist.m3u8) and starts with the best quality as dictated by the server. The device downloads the first chunk, and it takes a whole minute (62,352 ms). During this time, the viewer is waiting, not watching anything. This is a terrible video streaming experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario #2:&lt;/strong&gt; High-End Device | Medium Quality Video | Medium Bandwidth | Small Screen&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_scenario_2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_scenario_2.png" alt="HLS scenario with a high-end device and medium bandwidth"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this scenario, the device spends seven seconds downloading a 10-second chunk. After this, it starts to play.&lt;/p&gt;

&lt;p&gt;So we’ve fixed the problem from Scenario #1, but we still waited seven seconds for the video to start. And we’re playing in an 800x450 pixel player while medium quality has a resolution of 1280x720, with a bitrate to match. Low quality is more than enough here, so waiting seven seconds for a higher-quality file is unnecessary. If we started with a screen-appropriate quality, there would be less delay before adaptive optimization catches up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario #3:&lt;/strong&gt; High-End Device | Medium Quality Video | High Bandwidth | Big Screen&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_scenario_3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_scenario_3.png" alt="HLS scenario with high-end device and high bandwidth"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this scenario, the device downloads the initial .ts file quickly and, once it has, plays 10 seconds of medium-quality video before switching to high quality.&lt;/p&gt;

&lt;p&gt;So we’ve eliminated the problem from Scenario #2, but we still had to tolerate 10 seconds of lower-quality video before stepping up. And the next time we load another video (even on the same site), we go through all of this again. It would be better if the player could remember the current quality and start with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; There’s an edge case involving a low-end device and a high-quality start, but there’s not much to expand on: as you might expect, the video stutters and lags.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solutions for HLS problems
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Client-side solutions
&lt;/h4&gt;

&lt;p&gt;Naturally, all of these scenarios are much better solved on the client side. For example, VideoJS, one of the most popular libraries for HLS playback on the web, will immediately start with the highest quality whose resolution fits the current player size, eliminating one of the issues above.&lt;/p&gt;

&lt;p&gt;But not all players do this. For instance, &lt;a href="https://developer.android.com/guide/topics/media/exoplayer" rel="noopener noreferrer"&gt;ExoPlayer&lt;/a&gt;, Google's HLS player on Android devices, performs no such capping. However, it does let developers customize its selection logic to fit their use case.&lt;/p&gt;
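&lt;p&gt;For reference, this capping behavior is configurable in Video.js through its VHS playback engine. The sketch below assumes Video.js 7+ (where VHS is bundled) and an element id of &lt;code&gt;my-player&lt;/code&gt;; verify the option names against your version before relying on them:&lt;/p&gt;

```javascript
// Hypothetical Video.js setup; 'my-player' is a placeholder element id.
const player = videojs('my-player', {
  html5: {
    vhs: {
      // Only consider renditions no larger than the player (default: true).
      limitRenditionByPlayerDimensions: true
    }
  }
});
```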

&lt;h4&gt;
  
  
  Server-side solutions with serverless
&lt;/h4&gt;

&lt;p&gt;The vast majority of media servers use a static master playlist file, and custom development is often required to create an adaptive system. But even if your origin server has the logic for reordering file delivery based on, say, device type, what do you do with your CDN? Cache a generated master playlist for each of hundreds of different user agents, sending the request all the way to origin every time you see a new one? Or skip caching the master playlist entirely and delegate processing to your origin? Either way, you’re slowing everything down and adding excessive load to your origin.&lt;/p&gt;

&lt;p&gt;Edge serverless solves this by allowing you to implement custom logic right at the edge. Moreover, this approach is backend agnostic. A serverless script can manipulate the fetched resources as a man-in-the-middle with zero reliance on any feature of the backend system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objective
&lt;/h2&gt;

&lt;p&gt;This tutorial will show you how to use serverless scripting to optimize HLS quality by device at request time, at the edge server, without any contribution from your origin server or any effect on cacheability.&lt;/p&gt;

&lt;p&gt;Serverless scripting allows you to write custom logic in JavaScript to process requests at the edge server. A script can modify a client's request before it is fetched and modify the response before it is returned. The possibilities with this serverless use case are endless, so the techniques in this tutorial serve as more of a demonstration than a definitive solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1) An origin serving a multi-quality HLS stream.&lt;/strong&gt; This may be powered by nginx-rtmp, Mist, Wowza, Nimble, or any other media server. As an example we'll use the URL &lt;code&gt;http://awesomeMedia/&lt;/code&gt;, serving an HLS stream at &lt;code&gt;http://awesomeMedia/HLS.m3u8&lt;/code&gt;. Here is the example playlist served at that URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p.m3u8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Normally, an encoder in your media pipeline will auto-generate this file. In the playlist above, we are serving four different qualities (1080p, 720p, 480p, and 360p), each with a bitrate appropriate to its resolution, as defined in the #EXT-X-STREAM-INF lines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) A &lt;a href="https://control.stackpath.com/register/" rel="noopener noreferrer"&gt;StackPath account&lt;/a&gt; with serverless scripting enabled.&lt;/strong&gt; Refer to &lt;a href="https://support.stackpath.com/hc/en-us/articles/360001455123" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; for instructions on enabling serverless for your site for $10/month. Alternatively, you can test the concept for free using &lt;a href="https://sandbox.edgeengine.io/" rel="noopener noreferrer"&gt;the Sandbox&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Set up your site with serverless
&lt;/h2&gt;

&lt;p&gt;The first step is to sign up for a StackPath account, add your site, and enable Serverless Scripting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1)&lt;/strong&gt; Sign up for a StackPath account &lt;a href="https://control.stackpath.com/register/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Then select &lt;strong&gt;Website &amp;amp; Application Services&lt;/strong&gt; and choose the services you require.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fchoose_stackpath_service.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fchoose_stackpath_service.png" alt="StackPath control panel page showing various services"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Edge Delivery Bundle&lt;/strong&gt; is recommended but the bare minimum is the &lt;strong&gt;CDN&lt;/strong&gt; service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fchoose_edge_delivery_service.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fchoose_edge_delivery_service.png" alt="StackPath control panel page showing various edge delivery services"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2)&lt;/strong&gt; Log in to the StackPath Control Panel and create/select an active Stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3)&lt;/strong&gt; Select &lt;strong&gt;Sites&lt;/strong&gt; in the left-hand navigation bar and click &lt;strong&gt;Create Site&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_create_site.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_create_site.png" alt="Sites overview in StackPath CP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4)&lt;/strong&gt; Check &lt;strong&gt;Serverless Scripting&lt;/strong&gt;, enter your domain (or IP address), and select &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_create_site_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_create_site_1.png" alt="Create sites page in StackPath CP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5)&lt;/strong&gt; Finally, take note of your site’s edge address, which you can now use for delivery of your content all over the globe. Alternatively, you can use the StackPath DNS service to have your domain name resolve to the ideal edge and always serve content from StackPath’s CDN.&lt;/p&gt;

&lt;p&gt;In this example, our HLS video is now accessible at &lt;code&gt;http://j3z000b6.stackpathcdn.com/HLS.m3u8&lt;/code&gt; with the &lt;code&gt;.ts&lt;/code&gt; chunks available on the same path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Create your script
&lt;/h2&gt;

&lt;p&gt;Now that we have a site with Serverless Scripting enabled we can deploy our first script. We can do this manually through the &lt;a href="https://control.stackpath.com/" rel="noopener noreferrer"&gt;web control panel&lt;/a&gt; or through the &lt;a href="https://github.com/stackpath/serverless-scripting-cli" rel="noopener noreferrer"&gt;Serverless Scripting CLI&lt;/a&gt;. The web option is suitable for testing or manual deployment: it’s simple, intuitive, and comes with a nice web editor. But if you’re developing with your own tools or integrating with a CI/CD pipeline, you’ll appreciate the ease and automatability of a single-line CLI deployment.&lt;/p&gt;

&lt;p&gt;In this step our objective is to deploy a script to process requests for the path &lt;code&gt;http://awesomeMedia/HLS.m3u8&lt;/code&gt;. When a user visits the site, rather than fetching the file HLS.m3u8 straight from cache (or origin), the serverless engine will execute the script to decide what happens. The script could choose to block the request, deliver the file normally, modify the file, or write a new response altogether.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1: Using the control panel
&lt;/h3&gt;

&lt;p&gt;Navigate to the control panel and click &lt;strong&gt;Scripts&lt;/strong&gt;. Then click the &lt;strong&gt;Add Script&lt;/strong&gt; button. You’ll be taken straight to the code editor. Enter a name for your script and the route it will run for. You can also enter multiple routes, or use wildcards.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_serverless_scripts_cp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_serverless_scripts_cp.png" alt="StackPath serverless scripts CP page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fcreate_serverless_script_stackpath_cp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fcreate_serverless_script_stackpath_cp.png" alt="Enter script name in StackPath cp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 2: Using the serverless CLI
&lt;/h3&gt;

&lt;p&gt;To use the Serverless CLI we’ll have to install the CLI package, create a configuration file named &lt;code&gt;sp-serverless.json&lt;/code&gt; in the project directory, and publish. But first we’ll need an API key and ID information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1)&lt;/strong&gt; Using the dashboard, take note of your &lt;strong&gt;Stack ID&lt;/strong&gt; and &lt;strong&gt;Site ID&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_stack_id_site_id.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_stack_id_site_id.png" alt="StackPath site id and stack id"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2)&lt;/strong&gt; To generate API keys for use by the CLI, go to &lt;strong&gt;API Management&lt;/strong&gt; in the dashboard and click &lt;strong&gt;Generate Credentials&lt;/strong&gt;. Then record the &lt;strong&gt;Client ID&lt;/strong&gt; and &lt;strong&gt;API Client Secret&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_api_management_cp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_api_management_cp.png" alt="StackPath API management"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_api_client_secret.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_api_client_secret.png" alt="StackPath API client secret"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3)&lt;/strong&gt; Install the CLI by running the following command in a terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g @stackpath/serverless-scripting-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4)&lt;/strong&gt; To configure the CLI, run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sp-serverless auth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will set up authentication for the CLI to use your account. It will ask you for the details above interactively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_cli_authentication.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_cli_authentication.png" alt="StackPath client terminal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5)&lt;/strong&gt; Clone the skeleton repository by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/stackpath/serverless-scripting-examples/tree/master/hls-initialization.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or create your files manually. If you do this, create a directory for the project. Then create the following two files.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A file named &lt;code&gt;sp-serverless.json&lt;/code&gt; with the following content. Substitute your Stack ID and Site ID where indicated.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "stack_id": "",
  "site_id": "",
  "scripts": [
    {
      "name": "HLS Optimizer",
      "paths": [
        "HLS.m3u8"
      ],
      "file": "default.js"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file gives the deployment details to the Serverless CLI. The paths array lists the routes your script will be active for, and multiple scripts/paths can be configured in the same file: add extra scripts for extra routes by copying the first object in the scripts array and modifying it as needed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A file named &lt;code&gt;default.js&lt;/code&gt; with the following content. This is the script you’ll deploy. For now it’s the default skeleton script and has no effect on the response.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;addEventListener("fetch", event =&amp;gt; {
  event.respondWith(handleRequest(event.request));
});
/**
 * Fetch and return the request body
 * @param {Request} request
 */
async function handleRequest(request) {
  try {
    /* Modify request here before sending it with fetch */
    let response = await fetch(request);

    /* Modify response here before returning it */
    return response;
  } catch (e) {
    return new Response(e.stack || e, { status: 500 });
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6)&lt;/strong&gt; Finally, deploy by issuing the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sp-serverless deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the command to run every time you update your script or make changes to its configuration in the &lt;code&gt;sp-serverless.json&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;From this point forward, this is a development tutorial. You can skip ahead and use the finished product by going directly to the final step at the end of the tutorial, with one caveat: you’ll still need to deploy a container, as detailed in the user agent section below. Alternatively, if you’d like to follow along without creating a StackPath account, you can use &lt;a href="https://sandbox.edgeengine.io/" rel="noopener noreferrer"&gt;the Sandbox&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Parse the HLS master playlist
&lt;/h2&gt;

&lt;p&gt;If using the Sandbox, start by filling in the origin route you’d like to apply the script to in the URL textbox (e.g. &lt;code&gt;http://awesomeMedia/HLS.m3u8&lt;/code&gt;) and select &lt;strong&gt;Raw&lt;/strong&gt; as the display mode (optional). The Sandbox provides console access, so that you can issue console.log commands and view stack traces live. Other options for &lt;a href="https://developer.stackpath.com/docs/en/EdgeEngine/debug/" rel="noopener noreferrer"&gt;debugging serverless scripts&lt;/a&gt; are to return information in the body of the response, or in its headers.&lt;/p&gt;

&lt;p&gt;The default (skeleton) script just returns the original manifest as-is. The essential ingredient is a request handler bound to the fetch event: it receives the original request from the client, fetches the response, and returns it untouched.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;addEventListener("fetch", event =&amp;gt; {
  event.respondWith(handleRequest(event.request));
});

/**
 * Fetch and return the request body
 * @param {Request} request
 */
async function handleRequest(request) {
  try {
    /* Modify request here before sending it with fetch */

    let response = await fetch(request);

    /* Modify response here before returning it */

    return response;
  } catch (e) {
    return new Response(e.stack || e, { status: 500 });
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code is helpfully marked with suggested locations for modifying the request and the response before returning them.&lt;/p&gt;

&lt;p&gt;First, let's pass the request upstream and examine the response before returning it.&lt;/p&gt;

&lt;p&gt;Modify the script as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Modify response here before returning it */
const upstreamContent = await response.text();
/* Log the response body to the console */
console.log(upstreamContent)

return new Response(upstreamContent, {
    status: response.status,
    statusText: response.statusText,
    headers: response.headers
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After clicking &lt;strong&gt;Run Script&lt;/strong&gt; we are greeted with the result in the console, showing the contents of the master playlist:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info] #EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p.m3u8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; We’ve changed the line that originally returned the response. Rather than returning the same response object, we generate a new one with the same headers and content. This is because a response body can only be read once: attempting to return the original object after the text() promise has been consumed results in an error.&lt;/p&gt;

&lt;p&gt;We now have the HLS master playlist in a variable so it’s time to parse it for available variants (qualities).&lt;/p&gt;

&lt;p&gt;For a master playlist with several variants, the playlist starts with several “header” tags, then defines every variant in a line beginning with “EXT-X-STREAM-INF”. More information can be found in the &lt;a href="https://tools.ietf.org/html/rfc8216#section-8.4" rel="noopener noreferrer"&gt;HLS spec&lt;/a&gt;. But for the purposes of this script a &lt;a href="https://regexr.com/4lpoj" rel="noopener noreferrer"&gt;regex solution&lt;/a&gt; is ideal, as implemented in this new function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function parseM3u8(body) {
  // Parse M3U8 manifest into an object array, containing bitrate, resolution, and codec
  var regex = /^#EXT-X-STREAM-INF:BANDWIDTH=(\d+)(?:,RESOLUTION=(\d+x\d+))?,?(.*)\r?\n(.*)$/gm;
  var qualities = [];
  var match;
  while ((match = regex.exec(body)) != null) {
    qualities.push({bitrate: parseInt(match[1]), resolution: match[2], playlist: match[4], codec: match[3]});
  }
  return qualities;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function repeatedly scans for #EXT-X-STREAM-INF lines, extracts the bandwidth, resolution, and playlist link, and preserves any extra information found (usually codec information). To explore how the regex works, you can use a handy tool called RegExr.&lt;/p&gt;

&lt;p&gt;To test this function, we can add a console log line anywhere in the handler function:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;console.log(parseM3u8(upstreamContent))&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info] [ { bitrate: 800000,
    resolution: '640x360',
    playlist: '360p.m3u8',
    codec: '' },
  { bitrate: 1400000,
    resolution: '842x480',
    playlist: '480p.m3u8',
    codec: '' },
  { bitrate: 2800000,
    resolution: '1280x720',
    playlist: '720p.m3u8',
    codec: '' },
  { bitrate: 5000000,
    resolution: '1920x1080',
    playlist: '1080p.m3u8',
    codec: '' } ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And…success! The test worked as expected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Parsing the User Agent
&lt;/h2&gt;

&lt;p&gt;By examining the headers of the request before sending it upstream we can find the user agent of the client browser. In the marked location of the skeleton script for modifying requests, we log the user agent by adding two lines of code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/* Modify request here before sending it with fetch */
const userAgent = request.headers.get('User-Agent');
console.log(userAgent)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info] Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Aside from the original headers sent by the browser, there is a set of additional StackPath headers that are added to the original request. These contain the IP address, location, server region, and other information about the request. You can find a list of them in the &lt;a href="https://developer.stackpath.com/docs/en/EdgeEngine/request-header-variables/" rel="noopener noreferrer"&gt;StackPath Developer Docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In order to make meaningful decisions about which quality to prioritize and which video resolutions to send, there are three main pieces of information we need to learn about the client device:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Type (desktop/phone/tablet)&lt;/li&gt;
&lt;li&gt;Screen resolution&lt;/li&gt;
&lt;li&gt;Power measurements, used to decide whether to cap quality for devices with weak processing capabilities &lt;strong&gt;(optional)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
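&lt;p&gt;As a sketch of how a known screen resolution could drive the starting-quality decision (the selection logic here is illustrative, not the tutorial’s final implementation), consider the quality objects produced by parseM3u8 earlier:&lt;/p&gt;

```javascript
// Illustrative starting-quality selection: take the largest rendition whose
// encoded height still fits the device screen, falling back to the smallest.
// Input objects match the shape produced by parseM3u8() earlier in this tutorial.
function bestStartingQuality(qualities, displayHeight) {
  const height = q => parseInt(q.resolution.split('x')[1], 10);
  const sorted = [...qualities].sort((a, b) => height(a) - height(b));
  let best = sorted[0]; // smallest rendition as the fallback
  for (const q of sorted) {
    if (height(q) <= displayHeight) best = q;
  }
  return best;
}

const qualities = [
  { bitrate: 800000,  resolution: '640x360',   playlist: '360p.m3u8'  },
  { bitrate: 1400000, resolution: '842x480',   playlist: '480p.m3u8'  },
  { bitrate: 2800000, resolution: '1280x720',  playlist: '720p.m3u8'  },
  { bitrate: 5000000, resolution: '1920x1080', playlist: '1080p.m3u8' }
];

bestStartingQuality(qualities, 450).playlist;  // → '360p.m3u8' (an 800x450 player)
bestStartingQuality(qualities, 2560).playlist; // → '1080p.m3u8' (a 2560px-tall screen)
```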

&lt;p&gt;While there are many JavaScript libraries for parsing the user agent, none of them can tell us the screen resolution of a mobile device. Doing so not only requires a complicated parser that understands all the different ways UA strings are formed, but also a database with information about the devices once they’re identified. This parser and database combination is formally called a &lt;a href="https://en.wikipedia.org/wiki/Device_Description_Repository" rel="noopener noreferrer"&gt;Device Description Repository (DDR)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Today, there are plenty of commercial DDR services. But only one is both reliable and free: &lt;a href="http://openddr.mobi/" rel="noopener noreferrer"&gt;OpenDDR&lt;/a&gt;. This service can be deployed as a Docker container, providing a simple API that returns JSON-formatted data for a given UA, which we will call from within our script.&lt;/p&gt;

&lt;h3&gt;
  
  
  Calling the API in Serverless
&lt;/h3&gt;

&lt;p&gt;Before deploying OpenDDR, we’ll use the &lt;a href="http://openddr.demo.jelastic.com/servlet/" rel="noopener noreferrer"&gt;demo endpoint&lt;/a&gt; to develop the necessary function in our script and test the integration. The function fires a new http request to the API, parses the response, and passes the returned information to the main handler.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; /* Modify request here before sending it with fetch */
    const userAgent = request.headers.get('User-Agent');
    console.log(userAgent);
    /* Consult the DDR microservice in regards to the user agent */
    const deviceData = await getDeviceData(userAgent);
    console.log(deviceData);

async function getDeviceData(ua) {
  try {
    const res = await fetch('http://openddr.demo.jelastic.com/servlet/classify?ua='+ua);
    const data = await res.json();
    return data.results.attributes;
  } catch (e) {
    throw new Error('DDR communication failed')
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[info] Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Mobile Safari/537.36
[info] { model: 'Pixel XL',
  ajax_support_getelementbyid: 'true',
  marketing_name: 'Pixel XL',
  is_bot: 'false',
  from: 'openddr',
  displayUnit: 'pixel',
  displayWidth: '1440',
  device_os: 'Android',
  id: 'PixelXL',
  xhtml_format_as_attribute: 'false',
  dual_orientation: 'true',
  nokia_series: '0',
  device_os_version: '8.0',
  nokia_edition: '0',
  vendor: 'Google',
  cpu: 'Qualcomm Snapdragon 821',
  mobile_browser_version: '67.0.3396.87',
  ajax_support_events: 'true',
  is_desktop: 'false',
  cpuRegister: '64-Bit',
  image_inlining: 'true',
  ajax_support_inner_html: 'true',
  ajax_support_event_listener: 'true',
  mobile_browser: 'Chrome',
  ajax_manipulate_css: 'true',
  displayHeight: '2560',
  cpuCores: 'Quad-Core',
  is_tablet: 'false',
  memoryInternal: '32/128GB, 4GB RAM',
  inputDevices: 'touchscreen',
  ajax_support_javascript: 'true',
  cpuFrequency: '2150 MHz',
  is_wireless_device: 'true',
  ajax_manipulate_dom: 'true',
  is_mobile: 'true',
  xhtml_format_as_css_property: 'false' }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response from the DDR contains far more information than the UA string alone reveals. In particular, we are interested in displayWidth and displayHeight, but the available data is pretty extensive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; For a desktop (or laptop) device, the display resolution is still rarely identifiable using this method. For example, the UA for Google Chrome 77.0 on Windows 10 is the same regardless of your display. Also, depending on the type of content you’re serving, viewers may rarely play the video in full screen anyway.&lt;/p&gt;
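&lt;p&gt;Since screen resolution is unreliable for desktops, one illustrative fallback is to branch on device class instead, using the boolean-like string fields from the DDR response above:&lt;/p&gt;

```javascript
// Illustrative fallback: derive a coarse device class from OpenDDR attributes.
// Field names (is_bot, is_tablet, is_mobile) follow the sample DDR response above.
function deviceClass(attrs) {
  if (attrs.is_bot === 'true') return 'bot';
  if (attrs.is_tablet === 'true') return 'tablet';
  if (attrs.is_mobile === 'true') return 'phone';
  return 'desktop'; // resolution unknown; treat as a large screen
}

deviceClass({ is_bot: 'false', is_tablet: 'false', is_mobile: 'true' }); // → 'phone'
```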

&lt;h3&gt;
  
  
  Deploying a container
&lt;/h3&gt;

&lt;p&gt;The function above currently uses the demo API endpoint offered on the OpenDDR website, which is unsuitable for production use. Therefore, we’re going to deploy our own instance using an &lt;a href="https://www.stackpath.com/products/edge-computing/containers/" rel="noopener noreferrer"&gt;Edge Container&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;StackPath’s Edge Containers offer a way of hosting containers in diverse locations so that no request has to travel all the way to a centralized computing node. This is particularly useful for stateless applications such as the one we’re deploying. (A more detailed tutorial can be found &lt;a href="https://support.stackpath.com/hc/en-us/articles/360022756051-Getting-Started-With-StackPath-Edge-Computing" rel="noopener noreferrer"&gt;here&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;To start, head back to the control panel and click &lt;strong&gt;Workloads&lt;/strong&gt;. Then click &lt;strong&gt;Continue&lt;/strong&gt; to add Edge Compute to your Stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fedge-compute-workload.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fedge-compute-workload.png" alt="StackPath edge compute workloads"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, click &lt;strong&gt;Create Workload&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fedge-compute-workload-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fedge-compute-workload-1.png" alt="StackPath edge compute workloads"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can configure your container:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: (Any name works)&lt;/li&gt;
&lt;li&gt;Image: Enter &lt;strong&gt;0x41mmar/openddr-api:latest&lt;/strong&gt;, which refers to &lt;a href="https://hub.docker.com/r/0x41mmar/openddr-api" rel="noopener noreferrer"&gt;this container&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anycast IP: Disabled for now (more on this below)&lt;/li&gt;
&lt;li&gt;Public Ports: Enter 8080, the only port exposed by the container image above&lt;/li&gt;
&lt;li&gt;Spec: The resources allocated to each instance. SP-1 (1 vCPU, 2GB Mem) should be enough.&lt;/li&gt;
&lt;li&gt;Deployment Target: Locations where the container will be deployed. Select a name and one or more locations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Unlike the ephemeral IPs assigned to containers when they’re started, the Anycast IP is a static IP assigned to the workload for its entire lifetime. The Anycast IP has significant performance benefits. Traffic towards an Anycast IP will enter the StackPath Network at the closest Edge Location and be routed to the instance using &lt;a href="https://blog.stackpath.com/network-backbone-speed/" rel="noopener noreferrer"&gt;StackPath’s private backbone&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Finally, click &lt;strong&gt;Create Workload&lt;/strong&gt;. You’ll be redirected to the container’s overview page where you’ll see the container starting. When it’s fired up, you’ll see the IP addresses associated with it. Make note of the public IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_instance.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fstackpath_instance.png" alt="StackPath instances"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s replace the demo URL in the classifier function with our own container’s endpoint, using the IP above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const res = await fetch('http://101.139.43.73:8080/classify?ua='+ua);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Decision Tree
&lt;/h2&gt;

&lt;p&gt;The information required to make a decision about the optimization has been collected. Now the objective is to decide on the order of variants in the master playlist. We’ll create a simple algorithm to determine the best way to sort the variants in the file based on the following rules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Never start with a variant whose resolution is higher than the display, even if the player would switch up afterward.&lt;/li&gt;
&lt;li&gt;If the best quality variant satisfying condition (1) has a relatively high bitrate (&amp;gt;=4 Mbps), start one step lower.&lt;/li&gt;
&lt;li&gt;If the device is quite old or low-end, start at the lowest quality and cap variants to the display resolution to avoid performance issues.&lt;/li&gt;
&lt;li&gt;If nothing is known for sure (desktop devices), assume the display is 720p-capable and apply the rules above.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Rule 4 makes sense because, according to &lt;a href="https://gs.statcounter.com/screen-resolution-stats/desktop/worldwide/#monthly-201808-201908-bar" rel="noopener noreferrer"&gt;StatCounter&lt;/a&gt;, at least 52.91% of desktop/laptop devices in use today have a display above 720p, but only 19% have 1080p displays.&lt;/p&gt;

&lt;p&gt;The resulting algorithm is shown in the diagram below. Note that this is not a definitive solution and can be changed easily to fit your unique scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_algorithm_diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.stackpath.com%2Fwp-content%2Fuploads%2F2019%2F11%2Fhls_algorithm_diagram.png" alt="HLS algorithm diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is straightforward to implement in JavaScript, as is the execution of the sorting and capping. This translates into the following function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function decisionTree(deviceData, qualities) {
  /* Logic deciding on ordering and capping of available qualities */
  /* Returns config object to the spec of the output function above */

  // use the larger dimension of the display resolution
  var res = Math.max(deviceData.displayWidth, deviceData.displayHeight);

  // desktop device? assume a 720p-capable display
  if (deviceData.is_desktop == "true")
    return {top: 2, cap: false, res: 1280};

  // mobile device. Is it an old/low-end one?
  // OpenDDR returns most values as strings, hence the parseInt() calls
  if ((deviceData.device_os == "iOS" &amp;amp;&amp;amp; parseInt(deviceData.device_os_version) &amp;lt; 7) ||
      (deviceData.device_os == "Android" &amp;amp;&amp;amp; parseInt(deviceData.device_os_version) &amp;lt; 6) ||
      (parseInt(deviceData['release-year']) &amp;lt; 2012))
    return {top: -1, cap: true, res: res};

  // default: start at or below display resolution, no capping
  return {top: 2, cap: false, res: res};
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All we have to return is the assumed display resolution, our order decision, and capping decision. The code is more or less a direct implementation of the graph above. One thing to note is the nature of variables returned by OpenDDR, as most of them are strings and require some conversion.&lt;/p&gt;
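
&lt;p&gt;As a quick illustration of that last point, numeric comparisons against OpenDDR values generally need an explicit conversion first (field names as used in the decision tree above; the sample values are hypothetical):&lt;/p&gt;

```javascript
// OpenDDR returns most attributes as strings, so numeric comparisons
// need an explicit conversion first. Field names follow decisionTree()
// above; the sample values are hypothetical.
const deviceData = { device_os: "Android", device_os_version: "5.1.1", "release-year": "2011" };

const osMajor = parseInt(deviceData.device_os_version, 10); // 5
const released = parseInt(deviceData["release-year"], 10);  // 2011

// An Android major version below 6 or a pre-2012 release year marks
// the device as old/low-end under the rules above.
const isOld = osMajor < 6 || released < 2012;
console.log(isOld); // true
```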

&lt;p&gt;More complex decision making is easy to implement in much the same way. For example, Apple provides guidelines on bitrates for particular resolutions.&lt;/p&gt;
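
&lt;p&gt;A sketch of what that could look like: replace the flat 4 Mbps threshold with a per-resolution ceiling. The numbers below are placeholders, not Apple’s published figures; substitute values from the HLS Authoring Specification.&lt;/p&gt;

```javascript
// Hypothetical per-resolution bitrate ceilings (NOT Apple's published
// figures; consult the HLS Authoring Specification for real values).
const maxBitrateFor = { 1280: 4000000, 1920: 8000000 };

function withinGuideline(variant) {
  // larger dimension of e.g. "1280x720" is 1280
  const topDim = Math.max(...variant.resolution.split('x').map(Number));
  // fall back to the flat 4 Mbps rule for unlisted resolutions
  const ceiling = maxBitrateFor[topDim] || 4000000;
  return variant.bitrate < ceiling;
}

console.log(withinGuideline({ resolution: "1280x720", bitrate: 2800000 })); // true
console.log(withinGuideline({ resolution: "1280x720", bitrate: 4500000 })); // false
```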

&lt;h2&gt;
  
  
  Step 5: Output and Putting It All Together
&lt;/h2&gt;

&lt;p&gt;Now we need to rewrite the master playlist from scratch using the available information and the decisions from the decision tree above. The function below is extensively commented and easily readable, but it basically sorts the qualities, moves the required quality to the top of the list, and, if needed, deletes any that are too high.&lt;/p&gt;

&lt;p&gt;First, we check whether certain qualities need to be removed by consulting the boolean config.cap. If so, we filter the qualities array to remove offending variants, unless the filter would remove them all.&lt;/p&gt;

&lt;p&gt;Second, we sort the available variants in descending order by bitrate, unless the lowest one is required on top (config.top=-1). If the top quality is to be first, we’re done. If we’re applying the 4 Mbps rule, we test the qualities in turn for the required conditions (&amp;lt;= display, &amp;lt; 4 Mbps).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function writeSortedManifest(qualities, config) {
  /* Sort qualities, optionally cap at a certain resolution, */
  /* then rewrite into correct HLS master playlist syntax */
  /* config = {cap: bool, top: int, res: int} */
  // top = 1: highest quality first, 2: highest quality within res or next one if &amp;gt;4Mbps, 0: middle quality first, -1: lowest quality first

  // cap:
  // remove qualities with a resolution higher than a certain value (player resolution)
  if (config.cap) {
    var newQualities = qualities.filter((x) =&amp;gt; Math.max.apply(null, x.resolution.split('x').map(Number)) &amp;lt;= config.res);
    // anything left?
    if (newQualities.length &amp;gt; 0)
      qualities = newQualities;
  }

  // sort array so either best or worst quality is first
  // dir = 1 for descending, -1 for ascending. use 1 for anything except top==-1
  // if top==-1, this is all we need to do
  var dir = config.top == -1 ? -1 : 1;
  qualities.sort((a, b) =&amp;gt; (a.bitrate &amp;gt; b.bitrate) ? -1 * dir : dir);

  // if applying the resolution rule (top==2), scan from top to bottom for a variant satisfying the conditions
  if (config.top == 2) {
    var topIndex = 0;
    for (var i = 0; i &amp;lt; qualities.length; i++) {
      // assume it's this one for now
      topIndex = i;

      // take the larger dimension of e.g. "1280x720"; the other will be fine given a sane aspect ratio
      var topDim = Math.max.apply(null, qualities[i].resolution.split('x').map(Number));

      // For this variant:
      // is res &amp;lt;= display?
      if (topDim &amp;lt;= config.res) {
        // yes! but is bitrate &amp;lt;4Mbps?
        if (qualities[i].bitrate &amp;lt; 4000000) {
          // great! done here.
          break;
        } else if (qualities[i + 1]) {
          // it's not, so choose the next (lower) option if it exists
          topIndex = i + 1;
          break;
        } else {
          // next option doesn't exist? ok, fine. We'll take this one
          break;
        }
      }
    }
    // Now move the top choice to the top
    qualities.splice(0, 0, qualities.splice(topIndex, 1)[0]);
  }

  // if the middle quality is required to be first, move it there
  if (config.top == 0) {
    var m = Math.floor(qualities.length / 2);
    var middleItem = qualities.splice(m, 1)[0];
    qualities.splice(0, 0, middleItem);
  }

  // finally, serialize the reordered qualities back into m3u8 playlist
  // text and return it (see the complete script on GitHub, linked in Step 6)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Going back to the main handler function, we just need to call the functions and pass the data as needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function handleRequest(request) {
  try {
    const userAgent = request.headers.get('User-Agent');
    var deviceData, response;
    [deviceData, response] = await Promise.all([getDeviceData(userAgent), fetch(request)]);
    const upstreamContent = await response.text();
    var variants = parseM3u8(upstreamContent);
    var config = decisionTree(deviceData, variants);
    var output = writeSortedManifest(variants, config)
    /* Return modified response */
    response.headers.set("Content-Length", output.length)
    return new Response(output, {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers
    });
  } catch (e) {
    return new Response(e.stack || e, { status: 500 });
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how we’ve made sure to issue the two external I/O requests concurrently: the DDR request has no dependency on the upstream fetch, and vice versa, so running them in parallel via Promise.all saves us time.&lt;/p&gt;
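
&lt;p&gt;A minimal sketch of the difference, using timers to stand in for the two network calls: awaiting the requests one after another costs the sum of their latencies, while Promise.all costs only the slower of the two.&lt;/p&gt;

```javascript
// Timers stand in for the DDR lookup and the upstream fetch.
const delay = (ms, value) => new Promise(resolve => setTimeout(() => resolve(value), ms));

async function sequential() {
  const t0 = Date.now();
  await delay(50, 'ddr');     // first wait...
  await delay(50, 'origin');  // ...then the second: ~100 ms total
  return Date.now() - t0;
}

async function concurrent() {
  const t0 = Date.now();
  await Promise.all([delay(50, 'ddr'), delay(50, 'origin')]); // ~50 ms total
  return Date.now() - t0;
}

(async () => {
  console.log(await sequential() > await concurrent()); // true
})();
```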

&lt;p&gt;One little quirk in this code is the line right before the return statement that recalculates the Content-Length. This is necessary because the length (in bytes) of the body may have changed after being regenerated, due to the ambiguity of newline characters: our generator function only uses line feeds (\n), but the original file may or may not have used carriage returns (\r\n). A simple alternative would be to preserve the original line endings with a slight modification of the regex and generator function, making the final output exactly the same length as the original.&lt;/p&gt;
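
&lt;p&gt;To make the quirk concrete: a playlist regenerated with line feeds alone is shorter, in bytes, than an original that used carriage returns, so reusing the upstream Content-Length would be wrong.&lt;/p&gt;

```javascript
// An original manifest that used \r\n line endings...
const original = "#EXTM3U\r\n#EXT-X-STREAM-INF:BANDWIDTH=800000\r\nlow/index.m3u8\r\n";
// ...regenerated with \n only, as our generator does:
const regenerated = original.replace(/\r\n/g, "\n");

console.log(original.length, regenerated.length); // byte lengths differ
```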

&lt;h2&gt;
  
  
  Step 6: Final Deployment and Testing
&lt;/h2&gt;

&lt;p&gt;If you’re using the web control panel to upload your code you can find the complete code on GitHub &lt;a href="https://github.com/stackpath/serverless-scripting-examples/tree/master/hls-initialization" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Or, if you’re using the CLI and have followed the instructions in Step 1, you’ll have cloned a repository that already contains this code. Simply modify your &lt;code&gt;sp-serverless.json&lt;/code&gt; file to use &lt;code&gt;hls.js&lt;/code&gt; rather than &lt;code&gt;default.js&lt;/code&gt; and replace the URL in line 53 (getDeviceData) with that of your own OpenDDR container, per Step 3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "stack_id": "",
  "site_id": "",
  "scripts": [
    {
      "name": "HLS Optimizer",
      "paths": [
        "HLS.m3u8"
      ],
      "file": "hls.js"
     }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that if you’ve deployed before, a script ID will have been added to the file automatically; be sure to leave it in. Then simply enter the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sp-serverless deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it!&lt;/p&gt;

&lt;p&gt;With the script saved and deployed, we can now test the response with a variety of devices. The fastest way to do this is to use the Developer Tools available in your favorite browser. You can also use an extension/addon for the purpose of device switching.&lt;/p&gt;

&lt;p&gt;Here are the results for a few different devices:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Device&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;UA&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;First Variant&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Capping&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Comment&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Linux Laptop&lt;/td&gt;
&lt;td&gt;Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36&lt;/td&gt;
&lt;td&gt;720p&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Guessing display is 720p capable, and bitrate of 720p variant is 2.8Mbps, acceptable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google Pixel 2&lt;/td&gt;
&lt;td&gt;Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.87 Mobile Safari/537.36&lt;/td&gt;
&lt;td&gt;720p&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Definitely 1080p capable, but we’re not risking that high a bitrate on first request, falling back to 720p&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Galaxy Ace 3&lt;/td&gt;
&lt;td&gt;Mozilla/5.0 (Linux; Android 4.2.2; GT-S7275R Build/JDQ39) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.95 Mobile Safari/537.36&lt;/td&gt;
&lt;td&gt;360p&lt;/td&gt;
&lt;td&gt;360p is the only quality sent&lt;/td&gt;
&lt;td&gt;Old low-end device, anything more is quite wasted on it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTC One M8&lt;/td&gt;
&lt;td&gt;Mozilla/5.0 (Linux; Android 4.4.2; HTC6525LVW Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.59 Mobile Safari/537.36&lt;/td&gt;
&lt;td&gt;360p&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;td&gt;Old device now, but not low-end, and display is good. Starting low but sending all qualities.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article demonstrates the potential of Serverless Scripting as a solution for HLS streaming optimization. The ideal solution, as always, remains proper optimization on the client side. But with so many different players implementing so many different behaviors, a solution of this type can make a significant difference to the end viewer’s experience. Serverless Scripting makes it quite easy and accessible, particularly compared to the alternative of dynamic processing at the origin.&lt;/p&gt;

&lt;p&gt;The decision algorithm implemented here may or may not fit a particular audience or system, but we hope it is explained and demonstrated well enough for any developer to adapt it to their own needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Taking this even further
&lt;/h3&gt;

&lt;p&gt;A number of improvements can be made to the script to make it even more powerful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cookies:&lt;/strong&gt; The entire problem of first-variant selection stems from the fact that the bandwidth is unknown on first play. But what if the viewer has just watched another stream from your site? Use cookies to remember what variant the client last settled on within a certain time window.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client-side device detection:&lt;/strong&gt; If the client is viewing the video inside a web or phone app you control, a bit of JS can provide much better information than the user agent alone. You could have some JS on your website write this information into a cookie for the Serverless Script to consume, mitigating the lack of information about desktop devices (for instance). Naturally, this sacrifices the advantage of being server-agnostic and requiring no changes to your origin configuration or deployment, but the trade-off can be worthwhile.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Checking:&lt;/strong&gt; Upgrade the script to deal with various error scenarios. What if the DDR API is unresponsive? Information ambiguous? Decide on which assumptions to make and how to order variants.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statistics:&lt;/strong&gt; The numbers used in the assumptions above (e.g. a high bitrate is 4Mbps) are based on industry recommendations and general experience, but proper statistics are lacking. For example, we could enhance the script by making more statistically sound assumptions about available bandwidth for a particular region—or even for a particular class of device in a particular region, and so on.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statistics II:&lt;/strong&gt; Following from the point above, we can use a microservice with a db (similar to the DDR one above) to keep track of which devices, IPs, and regions settle on certain qualities, or which have problems starting the stream. This is useful for studying your audience and optimizing your configuration, but bonus points if you use the information automatically while sorting variants to adjust the rules as needed. For instance, if we have enough data to see that iPhones on Vodaphone 4G in Manhattan settle on 2Mbps most often, just start them there. This takes the previous point to its logical conclusion, but beware of the &lt;a href="https://en.wikipedia.org/wiki/Law_of_large_numbers" rel="noopener noreferrer"&gt;law of large numbers&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
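
&lt;p&gt;As a taste of the first idea, here is a hedged sketch of reading a previously settled bitrate back out of a request cookie. The cookie name and format are hypothetical; the player would need to write it on a previous visit.&lt;/p&gt;

```javascript
// Hypothetical cookie written by the player on a previous visit, e.g.
// "hls_last_bitrate=2000000". Returns that bitrate, or null if absent.
function lastSettledBitrate(request) {
  const cookies = request.headers.get('Cookie') || '';
  const match = cookies.match(/(?:^|;\s*)hls_last_bitrate=(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}

// The decision tree could then start at the variant closest to this
// bitrate instead of applying the generic rules.
```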

</description>
      <category>tutorial</category>
      <category>serverless</category>
    </item>
    <item>
      <title>What is pub/sub messaging? A simple explainer.</title>
      <dc:creator>Robert Gibb</dc:creator>
      <pubDate>Fri, 12 Jul 2019 17:26:46 +0000</pubDate>
      <link>https://dev.to/gibbiv/what-is-pub-sub-messaging-a-simple-explainer-2fdk</link>
      <guid>https://dev.to/gibbiv/what-is-pub-sub-messaging-a-simple-explainer-2fdk</guid>
      <description>&lt;h2&gt;
  
  
  Definition
&lt;/h2&gt;

&lt;p&gt;Pub/sub is shorthand for publish/subscribe messaging, an asynchronous communication method in which messages are exchanged between applications without knowing the identity of the sender or recipient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Four core concepts make up the pub/sub model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Topic&lt;/strong&gt; – An intermediary channel that maintains a list of subscribers and relays messages received from publishers to them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message&lt;/strong&gt; – A serialized message sent to a topic by a publisher, which has no knowledge of the subscribers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publisher&lt;/strong&gt; – The application that publishes a message to a topic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subscriber&lt;/strong&gt; – An application that registers itself with the desired topic in order to receive the appropriate messages&lt;/li&gt;
&lt;/ul&gt;
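
&lt;p&gt;A minimal in-memory sketch, in plain JavaScript rather than any particular broker, shows how the four pieces fit together:&lt;/p&gt;

```javascript
// A toy topic: keeps a subscriber list and relays published messages.
// Publisher and subscriber never reference each other directly.
class Topic {
  constructor() { this.subscribers = []; }
  subscribe(callback) { this.subscribers.push(callback); }          // subscriber registers
  publish(message) { this.subscribers.forEach(cb => cb(message)); } // publisher sends
}

const orders = new Topic();
const received = [];
orders.subscribe(msg => received.push(msg.text)); // subscriber side
orders.publish({ text: "Hello world" });          // publisher side

console.log(received); // [ 'Hello world' ]
```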

&lt;h3&gt;
  
  
  Advantages and disadvantages of pub/sub
&lt;/h3&gt;

&lt;p&gt;As with all technology, using pub/sub messaging comes with advantages and disadvantages. The two primary advantages are loose coupling and scalability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Loose coupling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Publishers are &lt;a href="https://hackernoon.com/observer-vs-pub-sub-pattern-50d3b27f838c"&gt;never aware of the existence of subscribers&lt;/a&gt; so that both systems can operate independently of each other. This methodology removes service dependencies that are present in traditional coupling. For example, a client generally cannot send a message to a server if the server process is not running. With pub/sub, the client is no longer concerned whether or not processes are running on the server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pub/sub messaging can scale to volumes beyond the capability of a single traditional data center. This level of scalability is primarily due to parallel operations, message caching, tree-based routing, and multiple other features built into the pub/sub model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.kuzzle.io/pub-sub-for-real-time-applications"&gt;Scalability does have a limit though&lt;/a&gt;. Increasing the number of nodes and messages also increases the chances of experiencing a load surge or slowdown. On top of that, the advantages of the pub/sub model can sometimes be overshadowed by the message delivery issues it experiences, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A publisher may only deliver messages for a certain period of time regardless of whether the message was received or not.&lt;/li&gt;
&lt;li&gt;Since the publisher does not have a window into the subscriber it will always assume that the appropriate subscriber is listening. If the subscriber isn’t listening and misses an important message it can be disastrous for production systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Pub/Sub Works
&lt;/h2&gt;

&lt;p&gt;In the overview we covered how a publisher sends a message to a topic and how the topic forwards the message to the appropriate subscriber. From a topology point of view it is a simple process.&lt;/p&gt;

&lt;p&gt;When it comes to coding the publish or the subscribe process the model can be a bit more confusing. Consider the following Java code which is used to create a topic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Topic createTopic(String topicName) throws IOException {
  String topic = getTopic(topicName); // adds project name and resource type
  Pubsub.Projects.Topics topics = pubsub.projects().topics();
  ListTopicsResponse list = topics.list(project).execute();
  if (list.getTopics() == null || !list.getTopics().contains(new Topic().setName(topic))) {
      return topics.create(topic, new Topic()).execute();
  } else {
      return new Topic().setName(topic);
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Cloud or edge providers often simplify this code. Google Cloud, for example, has simplified topic creation into a single line of code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud beta pubsub topics create topicName
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Examples of Pub/Sub
&lt;/h2&gt;

&lt;p&gt;Publish/subscribe messaging has a multitude of use cases, some of which include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Balancing workloads&lt;/li&gt;
&lt;li&gt;Asynchronous workflows&lt;/li&gt;
&lt;li&gt;Event notifications&lt;/li&gt;
&lt;li&gt;Data streaming&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Faye
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://faye.jcoglan.com/"&gt;Faye&lt;/a&gt; is an open source system based on pub/sub messaging. The code below shows you how to start a server, create a client, and send messages using Faye.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var http = require('http'),
    faye = require('faye');

// server: attach a Faye adapter to a plain Node HTTP server
var server = http.createServer(),
    bayeux = new faye.NodeAdapter({mount: '/'});

bayeux.attach(server);
server.listen(8000);

// client (shown in the same process for brevity; in a browser the
// global Faye object would be used instead)
var client = new faye.Client('http://localhost:8000/');

client.subscribe('/messages', function(message) {
  console.log('Got a message: ' + message.text);
});

client.publish('/messages', {
  text: 'Hello world'
});
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Faye is much more straightforward than implementing pub/sub from scratch, and it was created specifically for Node.js and Ruby servers. It’s often used for online instant messaging, a pub/sub use case most people experience on a daily basis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pub/sub on the edge
&lt;/h3&gt;

&lt;p&gt;As with a lot of &lt;a href="https://blog.stackpath.com/edge-computing/"&gt;edge computing benefits&lt;/a&gt;, pub/sub thrives with the speed that comes with being on the edge. A publisher must send a message to a topic, sometimes located far away in the physical world, that a subscriber is listening to. To travel across a room, a message may need to travel halfway around the world through Internet exchange points and physical distance always creates latency. &lt;/p&gt;

&lt;p&gt;On the edge that message can travel two to four times faster across the room or even across the world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Publish/subscribe messaging is when a publisher sends a message to a topic and the message is forwarded to a subscriber.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The concept of pub/sub is easy to understand, but every platform and programming language handles it differently, making it a little more challenging to learn across the board.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the edge, message delivery times can be two to four times faster by using a &lt;a href="https://blog.stackpath.com/network-backbone/"&gt;network backbone&lt;/a&gt; and multiple &lt;a href="https://blog.stackpath.com/point-of-presence/"&gt;points of presence&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>beginners</category>
      <category>pubsub</category>
    </item>
  </channel>
</rss>
