<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vrushti </title>
    <description>The latest articles on DEV Community by Vrushti  (@vrushti08).</description>
    <link>https://dev.to/vrushti08</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F903759%2Fe1ae4633-3d2d-4244-ac96-98b7a9c7eb33.jpeg</url>
      <title>DEV Community: Vrushti </title>
      <link>https://dev.to/vrushti08</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vrushti08"/>
    <language>en</language>
    <item>
      <title>Buying Guide: Agora, Twilio, Jitsi (JaaS), Zoom, 100ms</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Thu, 15 Sep 2022 12:48:26 +0000</pubDate>
      <link>https://dev.to/100mslive/buying-guide-agora-twilio-jitsi-jaas-zoom-100ms-3f1l</link>
      <guid>https://dev.to/100mslive/buying-guide-agora-twilio-jitsi-jaas-zoom-100ms-3f1l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWnG6p9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p0cgatbd5ysgdaq0ke4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWnG6p9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p0cgatbd5ysgdaq0ke4b.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video conferencing has been around forever. But what used to be the sporadic coast-to-coast team catch-up or the predominantly inside sales call became a household necessity with the 2020 COVID-19 pandemic. Everyday work. Family reunions. Even doctor consults, workout trainers, and online astrologers. Oh, and never forget the kids jumping around in the background, drowning out the virtual class they were supposed to attend.&lt;/p&gt;

&lt;p&gt;That said, smart businesses have been quick to offer video-enabled communication services since then, and the trend seems to be just getting started, whether it’s telehealth, test prep, dating, or shopping. While businesses can obviously look at building this audio-video infrastructure in-house, most find it too time- and resource-intensive.&lt;/p&gt;

&lt;p&gt;Of course, building a scalable video infrastructure from scratch is no mean feat, unless that’s the primary focus you want for your engineering team. Luckily, there are at least a handful of video SDK providers that offer best-in-class video infrastructure.&lt;/p&gt;

&lt;p&gt;But then, how do you decide which one works best for you?&lt;/p&gt;

&lt;p&gt;To answer that, we decided to put together this Buying Guide.&lt;/p&gt;

&lt;p&gt;First, we’ve handpicked the best of the best video SDKs based on customer reviews, product usage, and capabilities offered. And then, we battled them out on features &amp;amp; functionality, compliance &amp;amp; security, support, and pricing.&lt;/p&gt;

&lt;p&gt;From features and response times to implementation help and total cost of ownership, you should find all the details you need to make an informed buying decision right here.&lt;/p&gt;

&lt;p&gt;The Buying Guide comprises four separate articles. Each article focuses on comparing vendors across a single, relevant parameter. All information for said comparison has been obtained from each vendor’s publicly available documentation.&lt;/p&gt;

&lt;p&gt;Here is a quick summary of each article. Click on the article link for a deep dive.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Functionality and Features&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Each audio-video SDK offers specific features and functions that enable its customers to meet their goals. We’ve listed out the basic offerings across Agora, Twilio, Jitsi (Jitsi as a Service), Zoom, and 100ms below.&lt;/p&gt;

&lt;p&gt;However, you can go deeper and explore how vendors provide specific features such as streaming out with RTMP and HLS, active speaker detection, chat, polls, whiteboard, hand raise, and more in our detailed article.&lt;/p&gt;

&lt;p&gt;Full article: &lt;a href="https://www.100ms.live/blog/buying-guide-features"&gt;Features &amp;amp; Functionality for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Users can build the video-calling feature using the SDK, while interactive live streaming can be built using Agora’s Live Streaming SDK. Call recording can be enabled using the dashboard and API.&lt;/p&gt;

&lt;p&gt;Additionally, a noise reduction feature can be built using an additional integration. Agora also has a virtual background extension that allows for background modification. Background blur can be implemented using the SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Twilio, users can build the video-calling feature using the SDK, while interactive live streaming can be built with Twilio Live. As for noise reduction, it can be built using the SDK. Call recording can be enabled using the dashboard and API. Background modifications can be built using the Twilio video processor SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Jitsi users can build video-calling, noise reduction, call recording, and background modification using the SDK. With regard to interactive live streaming, there is no explicit mention of it in the JaaS documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Zoom, we have examined the feature offerings of both the Zoom Video SDK and the Meeting SDK.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Video SDK&lt;/em&gt;: Users can build video-calling, noise reduction, and background modification features using the SDK. Call recording can be enabled using the dashboard and API. There is no explicit mention of interactive live streams in the Zoom documentation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Meeting SDK&lt;/em&gt;: Users can build video-calling, noise reduction, and background modification features using the SDK. Call recording can be enabled using the dashboard and API. For interactive live streaming, Zoom allows for the streaming of sessions with up to 10k participants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;100ms enables users to build video-calling, interactive live-streaming (HLS + WebRTC in a single SDK), background modification and much more using the SDK. Noise reduction is currently available in Beta. 100ms also allows for instant streaming of video conferencing sessions with up to 10k participants.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Compliance and Security&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;With data thefts and security breaches becoming more common with each passing day, it’s important for decision-makers to understand the security features each audio-video SDK offers. To help with that, we’ve listed the compliance certifications that each audio-video infra provider holds, below.&lt;/p&gt;

&lt;p&gt;However, for more details about security features such as access control, enterprise authentication, end-to-end encryption, privacy and encryption of recordings, and audit trails, please take a look at our detailed article.&lt;/p&gt;

&lt;p&gt;Full article: &lt;a href="https://www.100ms.live/blog/buying-guide-compliance-and-security"&gt;Compliance &amp;amp; Security for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GDPR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CCPA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;COPPA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ISO/IEC 27001&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SOC 2&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GDPR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ISO 27001&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AICPA SOC 2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GDPR compliant for data processors&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SOC 2 Type II&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GDPR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ISO/IEC 27001:2013&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SOC 2 Type 1 &amp;amp; SOC 2 Type 2&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;&lt;strong&gt;Support&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Vendor support is as important a factor as the features these providers offer. Integration and post-integration support will help you when something breaks on the backend and you need expert help.&lt;/p&gt;

&lt;p&gt;While you can find a short summary explaining the support extended by Agora, Twilio, Jitsi, Zoom, and 100ms below, we’ve also created an elaborate comparison to give you a full overview in terms of Cost of Support, Integration/Account Management Support, Post-Integration Support, and Community Support.&lt;/p&gt;

&lt;p&gt;Full article: &lt;a href="https://www.100ms.live/blog/buying-guide-support"&gt;Support for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;br&gt;
Agora has three paid support plans: Standard, Premium, and Enterprise. Apart from these, the platform offers one free support plan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;br&gt;
Twilio has four support plans — Developer (free), Production (paid), Business (paid), and Personalized (paid). For the paid plans, the level of support scales with the volume of usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;br&gt;
Jitsi’s open-source version is supported by a large community - the Jitsi community forum. Apart from this, Jitsi’s paid version, 8x8 Jitsi as a Service, includes dedicated support for strategic customers, as explained in their Global Premium Plus Support plan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;br&gt;
Zoom offers dedicated support for developers via Premier Developer Support, which provides prioritized, developer-specific resources. This guide does not explore Zoom support plans aimed at non-developer users and admins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;br&gt;
All support functions are available to paying customers at no additional cost. 100ms also offers testing support at no extra cost. This includes user testing, load testing, and network/device stress-testing.&lt;/p&gt;

&lt;h2&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/h2&gt;

&lt;p&gt;Pricing is one of the most crucial deciding factors when it comes to purchasing an SDK. As with the other parameters, here’s a quick overview of what each vendor charges.&lt;/p&gt;

&lt;p&gt;However, details on each vendor’s pricing policies are beyond the scope of this piece. We’ve put together a detailed guide that explains the pricing models, pricing for recording, live streaming, add-ons, and more for each of these providers.&lt;/p&gt;

&lt;p&gt;Full Article: &lt;a href="https://www.100ms.live/blog/buying-guide-pricing"&gt;Pricing for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;br&gt;
Agora’s pricing is based on usage, which factors in the number of minutes used, the number of users, and the unit price. Note that Agora prices on the basis of aggregate resolution in calls; this is explained in detail in the main article.&lt;/p&gt;

&lt;p&gt;The unit price per 1,000 minutes is $0.99 for audio and $3.99 for HD video. Pricing for Full HD, 2K, and 2K+ video is explained in detail in the main article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;br&gt;
Pricing for Twilio scales with participant minutes. We have examined the pricing of two Twilio video products - Twilio P2P &amp;amp; Twilio Video Groups.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Twilio P2P&lt;/em&gt;: Allows up to 3 participants, or up to 10 audio-only participants. Priced at $0.0015 per participant per minute.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Twilio Video Groups&lt;/em&gt;: Allows users to create video apps with up to 50 participants. Priced at $0.004 per participant per minute, billed only for the minutes a user spends connected in a room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;br&gt;
Jitsi’s pricing is based on a per-user model and charges for monthly active users (MAU). According to Jitsi, an MAU is a user who attends at least one meeting with at least one other user within a particular month. An MAU is also tracked on the basis of the device they log in from.&lt;/p&gt;

&lt;p&gt;JaaS offers various plans, and the pricing for each plan varies depending on the MAU. The JaaS Dev plan allows up to 25 MAU free, and only add-ons are charged extra. Under the JaaS Basic plan, pricing is $99 per month for 300 MAUs. The pricing for JaaS Standard, JaaS Business, and plans with more than 3000 MAU are mentioned in the main article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;br&gt;
Zoom offers two SDKs: a Video SDK (charged on the basis of usage) and a Meeting SDK (charged on a per-user basis).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Meeting SDK&lt;/em&gt;: The Meeting SDK offers four paid tiers: Basic, Pro, Business, and Enterprise. Only the host must purchase and hold a license to use the Meeting SDK, and each tier’s license carries a specific limit on the number of participants supported.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Video SDK&lt;/em&gt;: The Video SDK has two pricing levels. With pay-as-you-go, you get 10,000 minutes per month, after which usage is priced at $0.0035 per minute. At the second level, you pay $1,000 per year with 30,000 minutes included per month; beyond that limit, you pay $0.003 per minute.&lt;/p&gt;
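To make that trade-off concrete, here is an illustrative comparison of the two levels using only the figures quoted above. The usage numbers are hypothetical, and the sketch assumes the included minutes reset monthly and ignores taxes and add-ons:

```javascript
// Comparing the two Zoom Video SDK pricing levels quoted above:
//   Pay-as-you-go: 10,000 included minutes/month, then $0.0035/minute.
//   Annual plan:   $1,000/year, 30,000 included minutes/month, then $0.003/minute.

const PAYG = { includedPerMonth: 10000, overageRate: 0.0035, baseMonthly: 0 };
const ANNUAL = { includedPerMonth: 30000, overageRate: 0.003, baseMonthly: 1000 / 12 };

// Estimated monthly cost of a plan at a given usage level.
function monthlyCost(plan, minutesPerMonth) {
  const overage = Math.max(0, minutesPerMonth - plan.includedPerMonth);
  return plan.baseMonthly + overage * plan.overageRate;
}

// At 20,000 minutes/month: PAYG costs (20,000 - 10,000) * $0.0035 = $35,
// while the annual plan already costs ~$83.33/month, so PAYG is cheaper.
console.log(monthlyCost(PAYG, 20000) < monthlyCost(ANNUAL, 20000)); // true

// At 80,000 minutes/month the ordering flips: PAYG is 70,000 * $0.0035 = $245,
// versus ~$83.33 + 50,000 * $0.003 = ~$233.33 on the annual plan.
console.log(monthlyCost(PAYG, 80000) > monthlyCost(ANNUAL, 80000)); // true
```

The takeaway of the sketch: at low volumes pay-as-you-go wins, and the annual plan only pays off past a break-even point somewhere in the tens of thousands of minutes per month.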

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;br&gt;
100ms offers a single SDK offering both video conferencing and live streaming capabilities with straightforward pricing for both use cases.&lt;/p&gt;

&lt;p&gt;100ms offers 10,000 free minutes for conferencing and another 10,000 free minutes for streaming to each business every month, in addition to 1,000 free encoding minutes.&lt;/p&gt;

&lt;p&gt;Beyond this, video conferencing is charged at $0.004 per participant per minute, while audio-only calls are charged at $0.001 per participant per minute. Live streaming costs $0.004 per broadcaster per minute and $0.0012 per viewer per minute, while additional encoding minutes are charged at $0.04 per minute.&lt;/p&gt;

&lt;p&gt;While there are various parameters and aspects that PMs, CTOs, and decision-makers need to keep in mind, these are a few that we considered indispensable. We recommend that you have a look at the separate pieces focusing on each of these parameters. They offer research-based data from each vendor, as well as helpful links you can use to conduct in-depth research yourself.&lt;/p&gt;

&lt;p&gt;To know more about how 100ms can help fill in your video conferencing requirements, &lt;a href="https://meet.100ms.live/meetings/isha-deo/intro?__hstc=159648061.f079b4acf665d0fbf04f116fc64e1893.1655282117756.1663237861593.1663245279090.164&amp;amp;__hssc=159648061.1.1663245279090&amp;amp;__hsfp=69242381"&gt;book a call with us&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>video</category>
      <category>webdev</category>
      <category>help</category>
    </item>
    <item>
      <title>Introducing 100ms Starter Kits</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 06 Sep 2022 10:18:36 +0000</pubDate>
      <link>https://dev.to/100mslive/introducing-100ms-starter-kits-4pbg</link>
      <guid>https://dev.to/100mslive/introducing-100ms-starter-kits-4pbg</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zdkFIHqN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrkkcjmio8ylbygzlwi2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zdkFIHqN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrkkcjmio8ylbygzlwi2.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building live audio or video is hard.&lt;/p&gt;

&lt;p&gt;At 100ms, we’re working on simplifying that process because we believe that nearly all apps will have live audio/video in the future — a “live-first” digital world if you will.&lt;/p&gt;

&lt;p&gt;100ms is industry-agnostic, thus enabling product managers (PMs), developers, and engineers to shape and build real-time, life-like interactions the way they see fit for a multitude of functions. In fact, with the 100ms SDK, our customers have already built diverse use cases across industries such as dating, gaming, education, and the like — delivering millions of live audio/video minutes to their users.&lt;/p&gt;

&lt;p&gt;But, as modern users increasingly demand real-time interactive experiences online, the bar for delivering value has risen dramatically. To meet these demands, PMs and developers want to experience a feature before they build it out themselves. This is where the 100ms Starter Kits come in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4kXNQHUP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0pos7cmy92ugruqvg4y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4kXNQHUP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0pos7cmy92ugruqvg4y.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging Imagination and Experience&lt;/strong&gt;&lt;br&gt;
After launching our &lt;a href="https://www.100ms.live/marketplace/virtual-events-starter-kit"&gt;Virtual Event Starter Kit&lt;/a&gt; in partnership with &lt;a href="https://twitter.com/vercel/status/1499402328813850624"&gt;Vercel&lt;/a&gt;, we realized that our customers loved the fact that we were able to instantly deliver a working demo of a Virtual Event Use Case. Since the starter kit is open source, anyone could take the code to extend that experience.&lt;/p&gt;

&lt;p&gt;By enabling this, we realized that users could instantly connect their imagination to what the app would actually look like.&lt;/p&gt;

&lt;p&gt;In other words, we were able to bridge the gap between imagination and experience.&lt;/p&gt;

&lt;p&gt;This is what inspired us to build the &lt;a href="https://www.100ms.live/marketplace"&gt;100ms Starter Kits&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are these Starter Kits?&lt;/strong&gt;&lt;br&gt;
These starter kits are proof-of-concept versions of real-life use cases, built with the 100ms SDK. We developed them in the hope that you’ll use them to actualize a feature or app you’ve been thinking about for a while, and go, “This is what I wanted to build!”&lt;/p&gt;

&lt;p&gt;For example, let’s say you wanted to simulate the action of tapping a colleague’s shoulder to discuss something — while in an online meeting. These starter kits enable you to experience that exact feature by providing a one-click demo. You can also choose to deploy them and start experimenting.&lt;/p&gt;

&lt;p&gt;These starter kits are a quick jumping-off point to demonstrate a working, proof-of-concept version of whatever you are imagining. Moreover, these kits also serve as the building blocks for implementing your own ideas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zY4r5CwW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgjzrutgpzzqljuxbeg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zY4r5CwW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgjzrutgpzzqljuxbeg9.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;What can you do with these Starter Kits?&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instant Demos:&lt;/strong&gt; Immediately experience our starter kits with the View Demo option in our &lt;a href="https://www.100ms.live/marketplace"&gt;Examples&lt;/a&gt; section. This allows you to measure the audio and video quality of our SDKs with the added context of User Interfaces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VaWSwfRA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h481lfa0cb73u0kzohrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VaWSwfRA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h481lfa0cb73u0kzohrh.png" alt="Image description" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open-Source GitHub Repos:&lt;/strong&gt; You can download the source code of these starter kits and break it down for reference implementations. You can also experiment and build on these starter kits with your own ideas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JlAYfntw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8t4bj0zqgrd2sd1w6fn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JlAYfntw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8t4bj0zqgrd2sd1w6fn5.png" alt="Image description" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy to Vercel:&lt;/strong&gt; Deploy these starter kits to your Vercel account and experiment with them in your own environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OAoCbhib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ic8hihoneob86h5m1qs7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OAoCbhib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ic8hihoneob86h5m1qs7.png" alt="Image description" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Breaking down a Starter Kit&lt;/strong&gt;&lt;br&gt;
Our starter kits are open-sourced apps wrapped as frontend layers around our &lt;a href="https://www.100ms.live/blog/roles-on-100ms"&gt;template policy&lt;/a&gt; (business logic around roles &amp;amp; permissions) and the 100ms SDK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XjKI8T3u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebostrn42nukh2thy2xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XjKI8T3u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebostrn42nukh2thy2xj.png" alt="Image description" width="880" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To break this down further, let’s use an example.&lt;/p&gt;

&lt;p&gt;Let’s say you want to build a one-click engagement solution between coworkers — something as simple as tapping a colleague’s shoulder to talk to them, but in a virtual environment (like a Slack Huddle).&lt;/p&gt;

&lt;p&gt;Before this launch, you could have created an audio room template to implement this action. But while that would have given you an audio-first conversation, it wouldn’t have delivered the holistic experience. Now, with the starter kit, you can build on this experience, add an interactive one-click button to the app, and get much closer to a product that replicates real-world, in-person communication.&lt;/p&gt;

&lt;p&gt;In the above example, the audio room is the template (use case), and the frontend UI is wrapped around it, allowing for the solution to show up as a one-click engagement. Combined with the 100ms SDK, the entire package comprises a single Starter Kit app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Five Starter Kits You Can Try Out&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As of now, we have rolled out five starter kits covering various use cases under 100ms Examples. Some of them were developed with the help of our amazing &lt;a href="https://discord.com/invite/kGdmszyzq2"&gt;community&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We look forward to adding more of these kits in the near future to enable more relatable and delightful quick-start experiences. But for now, these are our initial rollouts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Video Conference Starter Kit&lt;/strong&gt;&lt;br&gt;
Offer your customers engaging live conference experiences with excellent audio/video quality via 100ms’ Video Conferencing Starter Kit. This is a full-fledged, feature-rich starter kit for building any audio/video conferencing product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7apJedN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/upubenk4amkfgdrrlsor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7apJedN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/upubenk4amkfgdrrlsor.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Virtual Event Starter Kit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Own your live event experience with this virtual events starter kit, featuring real-time audio-video interactions that you can configure on the go. With this starter kit, you can host a live event or workshop with up to 10,000 viewers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NY7KRY3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjbgh1408le9865ln30h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NY7KRY3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjbgh1408le9865ln30h.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Slack Huddle Clone&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Help users get on impromptu, lightweight audio calls for a quick, real-time conversation with the 100ms Slack Huddle Clone kit. With this starter kit, you can build the “quick tap conversation” use case discussed above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YLgFFBs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y096ysel01htblk4sav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YLgFFBs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y096ysel01htblk4sav.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Discord Clone&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Offer your users an excellent audio conferencing/streaming experience with built-in advanced interactivity. Use our Discord Starter Kit to host or build a community discussion experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bV0BeRMF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijrke8qokvtpzc2a742l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bV0BeRMF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijrke8qokvtpzc2a742l.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Audio Room&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Build audio-first apps with this starter kit and engage your users with experiences like live audio calling, podcast streaming, Clubhouse-like audio rooms, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aj6TopTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oc1mk4d3m15n8dbz44jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aj6TopTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oc1mk4d3m15n8dbz44jg.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Closing Notes&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
With the launch of Starter Kits, we aim to unveil a new dimension of experimentation for our users. So head to the Examples section from the navbar and start building your own live experience. Don’t forget to share your apps with us, because we can’t wait to see what you build!&lt;/p&gt;

&lt;p&gt;If you are looking to contribute to these Starter Kits or partner with us, reach out to us in our &lt;a href="https://discord.com/invite/kGdmszyzq2"&gt;Discord&lt;/a&gt; community!&lt;/p&gt;

</description>
      <category>developer</category>
      <category>beginners</category>
      <category>webdev</category>
      <category>ios</category>
    </item>
    <item>
      <title>Server-side Considerations for your WebRTC Infrastructure</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Fri, 19 Aug 2022 06:04:00 +0000</pubDate>
      <link>https://dev.to/100mslive/server-side-considerations-for-your-webrtc-infrastructure-4in6</link>
      <guid>https://dev.to/100mslive/server-side-considerations-for-your-webrtc-infrastructure-4in6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ixaUb0CX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qik3wb5qg0h8909mjgex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ixaUb0CX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qik3wb5qg0h8909mjgex.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Internet users today expect flawless online experiences, be it browsing Instagram, streaming BTS, or debating anime fandoms. This expectation extends to online video communication.&lt;/p&gt;

&lt;p&gt;If they are to meet the expectations of contemporary users, video conferences must offer sub-second latency and high-quality audio/video transmission. Usually, developers choose WebRTC to build video experiences of this caliber.&lt;/p&gt;

&lt;p&gt;You might’ve read or heard about how WebRTC is a client-oriented protocol that usually doesn’t require any server to function. However, that’s not the whole story.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Server in WebRTC&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is true that some WebRTC calls are possible without any need for an external server. But, even in those cases, a signaling server is required to establish the connection.&lt;/p&gt;

&lt;p&gt;Many calls would simply fail over a direct connection, and even those that do connect run into issues as more peers join (something we will discuss later in the article). To work around these issues and optimize call performance, it is recommended that you use dedicated servers in your WebRTC setup.&lt;/p&gt;

&lt;p&gt;This article will discuss a few elements on the server side you must consider when building a WebRTC solution. We will talk about the servers, multi-peer WebRTC architecture, and how it all works — so that you make the right architecture choices for your WebRTC application.&lt;/p&gt;

&lt;p&gt;The following is the list of servers explored in this piece. Some of these servers are mandatory while the presence of others will depend on the architecture you choose to work with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Signaling Server (Mandatory)&lt;/li&gt;
&lt;li&gt;STUN/TURN Servers (Mesh Architecture)&lt;/li&gt;
&lt;li&gt;WebRTC Media Servers
&lt;ol&gt;
&lt;li&gt;MCU Server (Mixing Architecture)&lt;/li&gt;
&lt;li&gt;SFU Server (Routing Architecture)&lt;/li&gt;
&lt;li&gt;SFU Relay (Routing Architecture)&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Signaling Server&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Signaling refers to the exchange of information between peers in a network. It is required to set up, control, and terminate a WebRTC call. WebRTC doesn’t specify a rigid way for signaling peers, which makes it possible for developers to manage signaling as they see fit. To implement this out-of-band signaling, a dedicated signaling server is often used.&lt;/p&gt;

&lt;p&gt;The signaling server is mainly used to initiate a call. Once that is done, WebRTC will take over. However, this does not mean that we won’t require the signaling server once the call has started.&lt;/p&gt;

&lt;p&gt;Even though most state changes like voice mute/unmute can be notified to the other peer(s) through WebRTC data channels, the signaling server has to be present throughout the call. It must be used to handle unusual scenarios like network disconnection where the peer requires signaling to reconnect to the call again.&lt;/p&gt;
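&lt;p&gt;To make the signaling server's role concrete, here is a minimal in-memory sketch of one (illustrative only, not any SDK's actual implementation; real deployments typically relay these messages over WebSockets). It simply forwards offers, answers, and ICE candidates between peers in a room:&lt;/p&gt;

```python
# A toy signaling relay: it has no knowledge of media, it only routes
# session-setup messages between peers that have joined the same room.
from collections import defaultdict

class SignalingServer:
    def __init__(self):
        self.rooms = defaultdict(set)      # room id -> peer ids
        self.inboxes = defaultdict(list)   # peer id -> queued messages

    def join(self, room, peer):
        self.rooms[room].add(peer)

    def send(self, room, sender, target, payload):
        # payload is an SDP offer/answer or an ICE candidate, e.g.
        # {"type": "offer", "sdp": "..."}
        if target in self.rooms[room]:
            self.inboxes[target].append({"from": sender, "payload": payload})

    def recv(self, peer):
        msgs, self.inboxes[peer] = self.inboxes[peer], []
        return msgs

server = SignalingServer()
server.join("call-1", "alice")
server.join("call-1", "bob")
server.send("call-1", "alice", "bob", {"type": "offer", "sdp": "..."})
```

&lt;p&gt;Notice that the relay never inspects the SDP itself, which is exactly why WebRTC can leave signaling unspecified: any message transport works.&lt;/p&gt;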

&lt;p&gt;Now, we will take a look at the multi-peer architecture in WebRTC, before moving on to the servers we might need once signaling is complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WebRTC multi-peer architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In WebRTC, there are multiple architectures that define how the peers are connected in a call. Generally, the server-side requirements depend on the architecture that you choose. Picking the right architecture for your use case helps identify the servers you will need.&lt;/p&gt;

&lt;p&gt;We will now take a look at the most popular WebRTC architectures:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Mesh architecture&lt;/li&gt;
&lt;li&gt;Mixing architecture&lt;/li&gt;
&lt;li&gt;Routing architecture&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below, we discuss these architectures along with the servers they need to function properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Mesh Architecture (STUN/TURN Servers)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this architecture, every peer is directly connected to every other peer in the call. For example, in a call with 4 peers, every peer has to send their video to 3 other peers and receive video from the same 3.&lt;/p&gt;

&lt;p&gt;This is generally suitable for WebRTC calls with a limited number of peers.&lt;/p&gt;
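&lt;p&gt;The per-peer cost of a mesh is easy to quantify. A quick sketch of the link math:&lt;/p&gt;

```python
# Link count in a full-mesh call: every peer keeps one uplink and one
# downlink to each of the other N-1 peers.
def mesh_load(n):
    per_peer_uplinks = n - 1
    per_peer_downlinks = n - 1
    total_connections = n * (n - 1) // 2   # each pair shares one peer connection
    return per_peer_uplinks, per_peer_downlinks, total_connections
```

&lt;p&gt;At 4 peers that is 3 uplinks and 3 downlinks per device; at 10 peers it is already 9 of each, which is why mesh stops scaling quickly.&lt;/p&gt;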

&lt;p&gt;&lt;strong&gt;NAT restrictions and Firewall:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A &lt;a href="https://askleo.com/how_does_nat_work/"&gt;NAT (Network Address Translation)&lt;/a&gt; router maps the private IP addresses on its network to a public IP address. When a peer is behind a NAT router, it only knows its private IP address (which is invalid outside its local network), so it cannot exchange its actual public IP address during the signaling phase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additionally, when a peer is behind a &lt;a href="https://doc-kurento.readthedocs.io/en/6.14.0/knowledge/nat.html#symmetric-nat"&gt;Symmetric NAT&lt;/a&gt; router, a direct connection becomes impossible: its mapping technique assigns a different port for every distinct connection in the network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In some cases, the Firewall on a peer’s device might block a direct connection with another peer over the internet for security reasons.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a direct connection is not possible in this architecture, NAT traversal servers like the STUN and TURN server can be used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session Traversal Utility for NAT (STUN)&lt;/strong&gt;&lt;br&gt;
A STUN server is used to retrieve the public IP address of a device behind NAT. This allows the device to communicate after learning its address on the internet. This is enough for roughly 80% of connections to be successful, but it cannot be used for cases where the peers are behind a Symmetric NAT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traversal Using Relay NAT (TURN)&lt;/strong&gt;&lt;br&gt;
A TURN server is used to relay media between peers when a direct connection is not possible — often due to a Symmetric NAT in the network or a firewall blocking connections.&lt;/p&gt;

&lt;p&gt;The TURN server is also known as the “relay” server and costs more than STUN to maintain because it relays media throughout a WebRTC connection. Since the TURN server is an extension of STUN, its implementations include a STUN server built into it by default.&lt;/p&gt;
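&lt;p&gt;In practice, STUN and TURN servers are handed to the peer connection as an ICE server list. The sketch below mirrors the shape of the browser's RTCConfiguration; the hostnames and credentials are placeholders, not real servers:&lt;/p&gt;

```python
# The ICE server configuration passed when creating a peer connection.
# TURN entries carry credentials because relaying media costs the
# operator real bandwidth; STUN entries need none.
ice_config = {
    "iceServers": [
        {"urls": "stun:stun.example.com:3478"},                 # placeholder host
        {
            "urls": "turn:turn.example.com:3478",               # placeholder host
            "username": "demo-user",                            # placeholder
            "credential": "demo-secret",                        # placeholder
        },
    ]
}
```

&lt;p&gt;During ICE negotiation the peer tries direct candidates first, then STUN-derived ones, and only falls back to the TURN relay when nothing else connects.&lt;/p&gt;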

&lt;blockquote&gt;
&lt;p&gt;For more details on STUN/TURN servers and how to use them in a simple WebRTC video app, have a look at Build your first WebRTC app with Python and React.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FoPyYA9Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zxr6e3jw4743uhxd19u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FoPyYA9Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zxr6e3jw4743uhxd19u.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages and Disadvantages of Mesh Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No need for a central media server as the connection is peer-to-peer. This reduces server costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Relatively simple to implement in WebRTC.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each participant has to send media to every other peer, which requires N-1 uplinks &amp;amp; N-1 downlinks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not much control over the media quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Exploring the Mixing and Routing architectures requires some familiarity with the idea of WebRTC Media Servers. So let’s start with that.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WebRTC Media Servers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the Mesh architecture, bandwidth expenditure becomes quite high for peers once the call exceeds 4 people. Resource consumption tends to skyrocket, and peer devices can overheat, stutter, or even crash.&lt;/p&gt;

&lt;p&gt;Therefore, for use cases with more than 4 people in a call, it is recommended that you choose an architecture based around a media server.&lt;/p&gt;

&lt;p&gt;WebRTC Media Servers are central servers that peers send their media to, and receive processed media from. They act as “multimedia middleware” and can be used to offer several benefits. But, trying to implement one from scratch isn’t exactly a walk in the park.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Even when we use a media server, a TURN server is sometimes still needed so that peers behind restrictive NATs or firewalls can reach it, typically over TCP.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A well-implemented media server is highly optimized for performance and can offer numerous capabilities outside its main requirement. Here are some useful features an ideal media server should have:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simulcast&lt;/strong&gt;&lt;br&gt;
Video is served to the peers at different bitrates based on their configuration or network conditions. The peers send their video at multiple resolutions and bitrates to the media server and it chooses which version to send to each peer.&lt;/p&gt;
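&lt;p&gt;The selection step in simulcast can be sketched as a simple policy: pick the highest-bitrate layer that fits the viewer's estimated downlink. The layer names and bitrates below are illustrative, not any server's actual values:&lt;/p&gt;

```python
# Simulcast layer selection sketch. The publisher sends all three
# encodings; the media server forwards exactly one per viewer.
LAYERS = [("full", 1500), ("half", 600), ("quarter", 200)]  # (name, kbps)

def pick_layer(downlink_kbps):
    for name, kbps in LAYERS:
        if downlink_kbps >= kbps:
            return name
    return LAYERS[-1][0]   # below every layer: fall back to the lowest
```

&lt;p&gt;A viewer on a 1 Mbps link would receive the mid-quality layer, while a congested mobile viewer drops to the lowest one without affecting anyone else in the call.&lt;/p&gt;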

&lt;p&gt;&lt;strong&gt;Recording&lt;/strong&gt;&lt;br&gt;
Call recording is made possible either by directly forwarding all incoming media from the server to storage or by connecting a custom peer to the server that receives all media streams and stores them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transcoding&lt;/strong&gt;&lt;br&gt;
Not all peers connected to the call might support the same audio/video codec. The media server should be able to resolve this issue by transcoding the audio/video to an appropriate codec supported by all, before sending the streams out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audio/Video Optimisations&lt;/strong&gt;&lt;br&gt;
Customized audio/video optimization should be possible. The server should be able to send only the media of active speakers to reduce bandwidth consumption, selectively mute audio from someone, or prioritize screen-share media over other videos.&lt;/p&gt;

&lt;p&gt;Here are some of the widely used media servers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- MCU (Multipoint Control Unit)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- SFU (Selective Forwarding Unit)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- SFU Relay (Distributed SFU)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s discuss the architectures (Mixing and Routing) that use these media servers to solve issues commonly faced in the Mesh architecture — high bandwidth expenditure and heavy resource consumption.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Mixing Architecture (using the MCU Server)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this setup, all peers send their media to a central media server. Then, the media server operates on the media gathered, packs it into a single stream, and sends it to all peers. Here, every peer sends a single media stream to and receives one media stream from the server. The media server used here is called Multipoint Control Unit (MCU).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multipoint Control Unit (MCU)&lt;/strong&gt;&lt;br&gt;
An MCU server receives media from all peers and reworks it, performing the following functions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Decoding:&lt;/strong&gt; Upon gathering the primary media streams from all peers, the MCU decodes them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Rescaling:&lt;/strong&gt; The decoded videos are rescaled based on the peer’s network conditions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Composing:&lt;/strong&gt; The rescaled videos are combined into a single video stream in a layout requested by that peer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Encoding:&lt;/strong&gt; Finally, the video stream is encoded for delivery to the peer.&lt;/p&gt;

&lt;p&gt;This process is performed in parallel for every single peer in the call. This makes it easy for peers to send and receive media as a single stream without spending too much bandwidth.&lt;/p&gt;
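&lt;p&gt;The composing step above typically arranges the decoded videos into a near-square tile grid before re-encoding. A minimal sketch of that layout calculation (one common heuristic, not a prescribed algorithm):&lt;/p&gt;

```python
import math

# Arrange n decoded participant tiles into a near-square grid,
# e.g. 4 peers become a 2x2 layout, 5 peers a 2x3 layout.
def grid(n):
    cols = math.ceil(math.sqrt(n))
    rows = math.ceil(n / cols)
    return rows, cols
```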

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--02mUL7pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fkcesf0qedgq0r47vgo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--02mUL7pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fkcesf0qedgq0r47vgo.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages and Disadvantages of Mixing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The server sends a single media stream to the peer, which makes it possible for devices with lower processing power to participate in the call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Requires very little client-side resources or bandwidth, since each peer has just a single uplink and a single downlink.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Server-side recording is possible.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The server requires high processing power and is generally costly to maintain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Peers may experience delays in receiving media packets, as they have to be processed before sending from the server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While this sounds like a great option, the number of peers a call can support in the Mixing architecture depends directly on the performance of the MCU. In reality, it is hard to maintain a WebRTC call with more than 30 peers in the Mixing architecture without the MCU heavily draining server resources.&lt;/p&gt;

&lt;p&gt;Despite this, Mixing remained the most widely used WebRTC architecture until a few years back. However, it has been slowly replaced by the Routing architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Routing Architecture (SFU Server)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this setup, all peers send their media to a central media server. The server forwards the media streams to all other peers separately, without operating on them in any way. Here, every peer sends a single media stream to the server and receives N-1 media streams (where N is the number of peers present in the call) from the server. The media server used here is called the Selective Forwarding Unit (SFU).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selective Forwarding Unit (SFU)&lt;/strong&gt;&lt;br&gt;
An SFU server receives media from all peers in a call. Then, all that media is routed “as is” to every other peer connected to the server. The peers can send more than one media stream to the server, making simulcast possible. The SFU can also be customized to automatically decide which media stream to send to a specific peer.&lt;/p&gt;
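&lt;p&gt;Putting the three architectures side by side, the per-peer and server-side stream counts work out as follows (a rough capacity model, ignoring simulcast layers):&lt;/p&gt;

```python
# Stream counts for a call of n peers under each architecture.
# Mesh has no server; the MCU terminates n uplinks and sends n mixed
# streams back; the SFU ingests n streams and forwards n-1 to each peer.
def stream_counts(n):
    return {
        "mesh": {"peer_up": n - 1, "peer_down": n - 1, "server_streams": 0},
        "mcu":  {"peer_up": 1, "peer_down": 1, "server_streams": 2 * n},
        "sfu":  {"peer_up": 1, "peer_down": n - 1, "server_streams": n + n * (n - 1)},
    }
```

&lt;p&gt;The trade-off is visible in the numbers: the SFU shifts the fan-out from the client uplink (mesh) to cheap server-side forwarding, without the per-viewer re-encoding an MCU pays for.&lt;/p&gt;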

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W6xke-8E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fndctqxcjnavhhy6xvqf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W6xke-8E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fndctqxcjnavhhy6xvqf.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SFU Relay (Distributed SFU)&lt;/strong&gt;&lt;br&gt;
This is a fairly recent development in the domain. SFU Relay servers are simply SFU servers that can communicate with each other. One SFU server can relay media to another, creating a distributed SFU structure. This reduces the load on a single SFU and makes the whole network more scalable. In theory, any server can connect to the SFU relay via an API and receive the routed media.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages and Disadvantages of Routing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Less demanding on server resources compared to options like MCU.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Works with asymmetric bandwidth (lower upload rate than download rate) for a peer, as there is only a single uplink with N-1 downlinks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simulcast is supported for different resolutions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Native server-side recording is not possible, since the SFU does not compose streams. It is, however, possible to route media to a dedicated peer that records the streams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The peer device must be good enough to handle multiple downlinks, unlike in Mixing architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It requires complex design and implementation on the server side.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Closing Notes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To wrap up, let’s quickly summarise the architectures and corresponding use cases discussed above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Mesh Architecture with a STUN/TURN server is ideal for calls with 4 or fewer peers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mixing Architecture with an MCU server is good for calls with more than 4 peers. It is widely used in cases where support for legacy devices is a necessity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Routing Architecture with an SFU server is the modern approach to WebRTC video conferencing. As of now, this is the ideal approach to connecting peers in a call when their number exceeds the limits of the Mesh architecture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is also possible to dynamically switch between different architectures based on the call size, so as to find a balance between app performance and server costs.&lt;/p&gt;

&lt;p&gt;Choosing and implementing the appropriate WebRTC server and architecture is just one side of the coin. Much more is required to make your WebRTC service reliable — optimizing performance, reducing call failure rate, and handling edge cases.&lt;/p&gt;

&lt;p&gt;If you’re planning on writing your own media server, you should also be aware of some basic problems that often show up in the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managing peers with a bad network connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helping peers that cannot support all the mixed codecs running in the call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handling peer reconnections as well as new peers joining in and existing peers leaving the call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Algorithms to perform bandwidth estimation for a peer so that the server does not send more data than it can handle.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
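&lt;p&gt;To illustrate the last point, here is a toy receiver-side bandwidth estimator: an exponential moving average over observed throughput samples, so the server avoids pushing more data than the peer has recently been able to receive. Real servers use far more sophisticated algorithms (e.g. delay-based congestion control); this only sketches the idea:&lt;/p&gt;

```python
# Smooth throughput samples so a single good or bad second does not
# whipsaw the send rate. alpha controls how fast the estimate adapts.
class BandwidthEstimator:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate_kbps = None

    def update(self, bits_received, seconds):
        sample = bits_received / seconds / 1000.0   # observed kbps
        if self.estimate_kbps is None:
            self.estimate_kbps = sample
        else:
            self.estimate_kbps += self.alpha * (sample - self.estimate_kbps)
        return self.estimate_kbps
```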

&lt;p&gt;In short, even if you choose the right server-side architecture, you will still have a lot to deal with regarding the technicalities of WebRTC.&lt;/p&gt;

&lt;p&gt;If you don’t want to deal with the intricacies of WebRTC but still want to host calls with a gold standard video solution (be it video conferencing or streaming), you have options like &lt;a href="https://www.100ms.live/"&gt;100ms&lt;/a&gt; to do the heavy lifting for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WebRTC with 100ms&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;100ms’ live video SDKs allow you to add live video capabilities to your application with just a few lines of code. With multiple highly-relevant features and a predictable &lt;a href="https://www.100ms.live/pricing"&gt;pricing plan&lt;/a&gt;, you don’t have to worry about dealing with exorbitant server costs for your app.&lt;/p&gt;

&lt;p&gt;If your application requires high-quality video capabilities but you’re unsure about building it from scratch, or you just don’t want to deal with fine-tuning the nitty-gritty of WebRTC, 100ms is your best bet.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Further Reading&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/python-react-webrtc-app"&gt;Build your first WebRTC app with Python and React&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/google-classroom-clone-react-100ms"&gt;Building a Google classroom clone with React and 100ms SDK&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/building-slack-huddle-clone"&gt;Building Slack huddle clone&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/video-chat-app-with-vuejs-and-golang"&gt;Building Video Chat App with VueJs and Golang&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webrtc</category>
      <category>devops</category>
      <category>developer</category>
    </item>
    <item>
      <title>A New Approach to Live Streaming</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 16 Aug 2022 10:41:28 +0000</pubDate>
      <link>https://dev.to/100mslive/a-new-approach-to-live-streaming-1dnd</link>
      <guid>https://dev.to/100mslive/a-new-approach-to-live-streaming-1dnd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3HJH6pTb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ugo3e5zp88o6lbm9xa9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3HJH6pTb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ugo3e5zp88o6lbm9xa9p.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The long arc of tech progress has shown that user behavior and technology evolve together: changes in user behavior inspire technology, and then technology drives more of that behavior. The world of live streaming is going through one such change, where new user behavior is driving us to reevaluate the live streaming tech stack.&lt;/p&gt;

&lt;p&gt;Since the first live stream in 1995, a Yankees vs. Mariners baseball game, live streaming has become an important medium for users on the Internet to learn, play, shop, and work. Who gets to stream and how they interact with their audience is changing rapidly, and this change is informing our approach to building infrastructure for live streaming.&lt;/p&gt;

&lt;h2&gt;
  
  
  The present-day live streaming tech stack
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rgOYOhZ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpcy0gmixz3lzwsizihi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rgOYOhZ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpcy0gmixz3lzwsizihi.png" alt="Image description" width="880" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most live streaming apps today are built by combining &lt;a href="https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol"&gt;RTMP&lt;/a&gt; encoded media streams at the streamer’s end and &lt;a href="https://en.wikipedia.org/wiki/HTTP_Live_Streaming"&gt;HLS&lt;/a&gt; streams at the viewer’s end. An industry of media servers in the middle exists to transcode the input format into the output stream.&lt;/p&gt;
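&lt;p&gt;That transcoding hop in the middle can be sketched as an ffmpeg invocation, built here as an argument list. The ingest URL is a placeholder, and the flags shown are one common choice, not a production pipeline:&lt;/p&gt;

```python
# Pull the streamer's RTMP feed and repackage it as HLS segments.
rtmp_in = "rtmp://ingest.example.com/live/stream-key"   # placeholder URL
cmd = [
    "ffmpeg",
    "-i", rtmp_in,
    "-c:v", "copy",        # pass video through; transcode (e.g. libx264) if needed
    "-c:a", "aac",
    "-f", "hls",
    "-hls_time", "4",      # 4-second segments
    "out.m3u8",
]
```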

&lt;p&gt;RTMP is a mature protocol that was originally built to support Adobe Flash. Given its maturity, RTMP is widely supported by encoding software and hardware, which can ingest raw device streams and output RTMP streams. RTMP is also fast: it optimizes for reduced latency.&lt;/p&gt;

&lt;p&gt;RTMP used to work well on the viewer’s end too, given that it was the preferred streaming protocol for Adobe Flash. However, as Flash usage went down and HTML5 emerged, HLS became a better fit for viewers. HLS is built over HTTP and is widely supported across all mobile and desktop devices.&lt;/p&gt;

&lt;p&gt;The combination of RTMP and HLS has worked well given the asymmetry in live streaming personas: there are many more viewers who require frictionless viewing and there are only a few streamers who need to configure specialized encoding software (like OBS).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So what’s changing?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;User behavior around live streaming is changing in 3 big ways: democratization, interactivity, and creator collaboration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b0eiaRW8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbqis3kjvg2os70nmgfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b0eiaRW8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbqis3kjvg2os70nmgfd.png" alt="Image description" width="880" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Democratization&lt;/strong&gt;&lt;br&gt;
Live streaming has been democratized and is no longer limited to professional streamers using sophisticated equipment connected to reliable broadband. Everyone is now streaming live with Instagram and YouTube, and from mobile devices connected to unreliable networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interactivity&lt;/strong&gt;&lt;br&gt;
Live streams are no longer one-way broadcasts. Streamers and viewers are looking for ways to engage and interact with each other. Chat and emoji reactions running alongside live streams are now table-stakes.&lt;/p&gt;

&lt;p&gt;More recently, we have come across scenarios where viewers get “promoted” into becoming streamers. This enables new stream formats and increases the engagement between streamers and viewers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creator collaboration&lt;/strong&gt;&lt;br&gt;
Streamers are also experimenting with newer formats that involve collaborating with other streamers. As &lt;a href="https://ping.gg/"&gt;Ping Labs&lt;/a&gt; puts it, video calls have now become video content.&lt;/p&gt;

&lt;p&gt;The pandemic has accelerated these changes. Live streaming creation and viewership shot upwards, and that motivated more streams, more interactivity and more experimentation with stream formats.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What will happen to the live streaming tech?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mcQr3k6---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gut0egyefeu7xmb5oc99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mcQr3k6---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gut0egyefeu7xmb5oc99.png" alt="Image description" width="880" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Live streaming works well on the viewer’s end. &lt;a href="https://www.cloudflare.com/en-gb/learning/video/what-is-http-live-streaming/"&gt;HLS&lt;/a&gt;, and similar protocols like &lt;a href="https://www.cloudflare.com/learning/video/what-is-mpeg-dash/"&gt;MPEG-DASH&lt;/a&gt;, have democratized viewership by building on top of HTTP. Anyone with a web browser or a smartphone can view live streams today.&lt;/p&gt;

&lt;p&gt;It's time something similar happens on the streamer’s end. The solution to democratization, interactivity, and creator collaboration will be found in WebRTC becoming an alternative to RTMP in the live streaming tech stack.&lt;/p&gt;

&lt;p&gt;Given the times it was designed in, RTMP is unsuitable for streaming from mobile devices. It is built over TCP and assumes a fixed encoding bitrate. When the device runs into network disruptions, an RTMP encoder keeps producing output, which further chokes the network. WebRTC is a more modern protocol: it’s built over UDP and can adjust encoding bitrates based on network feedback.&lt;/p&gt;

&lt;p&gt;WebRTC is also more widely available. Any modern web browser today can encode WebRTC streams without requiring any additional software. Native apps on iOS and Android also support WebRTC well.&lt;/p&gt;

&lt;p&gt;WebRTC is also built for interactivity, given that it was originally a solution to real-time video conferencing. Chat and other forms of interactivity are easy to achieve on top of WebRTC. It is also possible to invite HLS viewers as WebRTC participants, which makes it suitable for advanced interactivity scenarios, where the viewer is promoted into becoming a streamer.&lt;/p&gt;

&lt;p&gt;Given its roots in conferencing, WebRTC also supports creator collaboration out of the box. Streamers can join in from different device platforms given that WebRTC is everywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The worlds of live video will merge&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Given the evolution in user behavior, it is time these worlds began to merge. Streaming use-cases will leverage conferencing tech to introduce interactivity and other benefits. Conferencing will leverage streaming tech to scale video calls to many viewers in near real-time.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Get access&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We are a team that has built live video products in companies like Disney+ and Facebook and are now applying that expertise to enable thousands of developers to build live video apps.&lt;/p&gt;

&lt;p&gt;We are excited by the creativity of our customers who are imagining new use-cases of live video every day. Developers and product managers are building experiences that mix the worlds of conferencing and streaming, and we are building infrastructure to enable them to do more with less.&lt;/p&gt;

&lt;p&gt;If this evolution in live video excites you and is relevant to your needs, try live streaming with 100ms. &lt;a href="https://www.100ms.live/"&gt;Sign up&lt;/a&gt; to get started and join our &lt;a href="https://100ms.live/discord"&gt;Discord community&lt;/a&gt; to connect with us. We look forward to seeing what you build.&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>How 100ms tests for Network Reliability</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 14:37:00 +0000</pubDate>
      <link>https://dev.to/100mslive/how-100ms-tests-for-network-reliability-309i</link>
      <guid>https://dev.to/100mslive/how-100ms-tests-for-network-reliability-309i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lq8TsYUw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yg4v6q7zg95c4nj6brfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lq8TsYUw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yg4v6q7zg95c4nj6brfd.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From virtual classrooms to business meetings, shopping to dating apps, video is quickly becoming the de-facto communication mode online.&lt;/p&gt;

&lt;p&gt;Innovative developers and product thinkers are looking to create engaging live experiences in their applications. So naturally, it's critical that the audio-video SDK they build these experiences on top of provides a stable, extensible, and scalable bedrock.&lt;/p&gt;

&lt;p&gt;Among the many factors to consider before purchasing an audio/video SDK, network reliability stands out. After all, nobody enjoys delivering a twenty-minute monologue on a video call only to realize their network was down the entire time…&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Testing Network Reliability for Real-World Scenarios&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this article, we've downloaded, deployed, and tested the reliability of the &lt;a href="https://www.100ms.live/"&gt;100ms React SDK&lt;/a&gt;. To do so, we designed a series of tests that simulate common real-life scenarios. Of course, since that's not fun enough, we decided to unleash our “full crazy” by battle-testing each round against extreme conditions.&lt;/p&gt;

&lt;p&gt;The tests verify how the 100ms SDK fares across three parameters that define network reliability: low bandwidth, network blips &amp;amp; network switching.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Network Reliability Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the real world, individuals often have to deal with unstable or less-than-ideal network conditions. This happens when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;moving from one network area to another while traveling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;suddenly experiencing slow internet because of an expiring data pack&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;suddenly experiencing call disconnection for a few seconds due to issues in the larger infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Network connectivity issues occur more often than we think. Video SDKs should ideally be resilient to these issues and, at a minimum, provide developers with tools to deal with them gracefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;br&gt;
100ms has a sample React app (100ms 2.0 Sample React App) meant to facilitate the testing of its SDK. We deployed it on &lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt; and exposed it to a few commonly occurring end-user scenarios.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://github.com/100mslive/100ms-web"&gt;https://github.com/100mslive/100ms-web&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;We generated credentials from the 100ms console and then deployed this sample React app on Heroku.&lt;/p&gt;

&lt;p&gt;The SDK was deployed and tested on the Chrome browser running on macOS Monterey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conditions and cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All these tests were 1:1 calls, performed with 2 people in the room. A few details about each test before we get into the results:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Low Bandwidth Test&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Network speed varies across devices. For instance, users operating on 4G mobile data often experience a volatile network, as it tends to vary in speed and stability. In this test, we checked how 100ms handles calls with varying connection speeds on low bandwidth.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Network Blip Test&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Network crises can happen in the middle of a call. In this test, we checked how 100ms handles the sudden loss of network connectivity followed by automatic reconnection.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Network Switching Test&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is common for users to switch between networks inadvertently. For example, they might be on a call while crossing state lines or moving from a city to the countryside, which may affect network strength.&lt;/p&gt;

&lt;p&gt;Network switching usually occurs when you move away from the range of one network to another or when you switch between your available networks for a higher speed. In this test, we checked how 100ms handles a network switch.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;1. Low Bandwidth Handling/Management Test&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Audio/Video applications need to handle usage across varying network bandwidths. In this section, we monitor how 100ms handles calls for users with low bandwidth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Methodology for the Low Bandwidth Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We used Network Link Conditioner to emulate different network conditions. We set the ideal resolution to 640x360, and tested the app on 4 different configurations: 300 Kbps, 500 Kbps, 800 Kbps, and 1 Mbps, switching from one to another in the middle of a call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test Results&lt;/strong&gt;&lt;br&gt;
The 100ms SDK handles the drop in bandwidth by prioritizing audio/video upload for other peers instead of audio/video download.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If the network is adequate (800 Kbps), the video of active or recent speakers continues to be visible. The audio remains perfectly functional.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the network is poor, only peer audio is functional while their video degrades.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the device facing poor network conditions, the video is somewhat degraded but not entirely non-functional. At lower bandwidths (500 Kbps and 300 Kbps), audio quality remains functional for all other peers in the meeting and only sees a drop for the attendee experiencing bandwidth constraints.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
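&lt;p&gt;The degradation order described above can be summarized as a simple priority rule: audio is kept functional for as long as possible while video quality is sacrificed first. The following is a toy Python model of that behavior, not the SDK’s actual implementation; the thresholds are illustrative:&lt;/p&gt;

```python
# Toy model of the observed degradation policy: audio is prioritized
# over video as available bandwidth drops. The thresholds below are
# illustrative, not the SDK's actual values.
def degrade(bandwidth_kbps):
    if bandwidth_kbps >= 800:
        return {"audio": "full", "video": "full"}
    if bandwidth_kbps >= 500:
        return {"audio": "full", "video": "degraded"}
    return {"audio": "degraded", "video": "degraded"}

print(degrade(1000))  # {'audio': 'full', 'video': 'full'}
print(degrade(300))   # {'audio': 'degraded', 'video': 'degraded'}
```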

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wrMUfJFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7295o98et182cw7rnhbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wrMUfJFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7295o98et182cw7rnhbw.png" alt="Image description" width="519" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/1q3Q1g-Ibkc"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;2. Network Blip Test&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we check how 100ms handles call connectivity when a user’s network connection gets switched off or drops for several seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Methodology for the Network Blip Test&lt;/strong&gt;&lt;br&gt;
First, we check the call by switching off the internet connection for 10 seconds. We do this by turning off the connected Wi-Fi network from the menu bar and then turning it back on.&lt;/p&gt;

&lt;p&gt;Then, we iteratively repeat the same test for 20, 30, 45, and 60 seconds. While doing so, we observe the state of the call connection and how the app behaves during disconnection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test Results&lt;/strong&gt;&lt;br&gt;
The 100ms SDK reconnects every time the internet is disabled for 10, 20, and 30 seconds. When it is switched off for 45 or 60 seconds, the app tries to reconnect for 35 seconds before disconnecting entirely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XT3SzeVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoc16312y4yrpet3px2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XT3SzeVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoc16312y4yrpet3px2y.png" alt="Image description" width="515" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/4fgVMhAcQLw"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;3. Network Switching Test&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Apps are often exposed to different network conditions in the real world. In this case, we’ve tested how the 100ms SDK reacts when the app moves from one network strength to another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Methodology for the Network Switching Test&lt;/strong&gt;&lt;br&gt;
This test checks how 100ms handles the connection when switching from one network to another. We tested the app on three Wi-Fi networks: the 2.4 GHz and 5 GHz bands of the same router, and a mobile hotspot.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To start the call, we connected to the 2.4 GHz network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, we switched from the 2.4 GHz to the 5 GHz network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, we switched back to the 2.4 GHz network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, we repeated the same process, switching between the 2.4 GHz network and the mobile hotspot.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We waited for the call to reconnect during every network switch and monitored the time (in seconds) it took for the reconnection to occur.&lt;/p&gt;

&lt;p&gt;Some of the flawed behavior in the ‘2.4 GHz Wi-Fi to Hotspot’ section of the test might be due to the unstable 4G network connection we experienced while testing.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Test Results&lt;/strong&gt;&lt;br&gt;
The 100ms SDK manages to reconnect after every network switch, though sometimes the video reconnects after the audio. The average reconnection time when switching within the same network is 9.1s for audio and 10s for video; when switching between two different networks, it is 19.2s for audio and 13.8s for video.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6k9v1rRf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hui78ksn66rc7upd4fo5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6k9v1rRf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hui78ksn66rc7upd4fo5.png" alt="Image description" width="516" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/0Dz8mRmhR5U"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Closing Notes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Given how central reliability is to choosing an audio-video SDK, we decided to lay all our cards on the table and show exactly how we fare in diverse network, bandwidth, and end-user circumstances. Across all tests, 100ms fared well under regular usage conditions. In some cases, like bandwidth drops, the SDK allows for graceful handling of degradation issues.&lt;/p&gt;

&lt;p&gt;Of course, as an SDK provider, we pride ourselves on making 100ms ever more bulletproof, so we can’t wait to solve for all these conditions elegantly and meet you again with even more aggressive scenarios.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>react</category>
      <category>testing</category>
    </item>
    <item>
      <title>HLS 101: What it is, How it works &amp; When to use it</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 13:06:18 +0000</pubDate>
      <link>https://dev.to/100mslive/hls-101-what-it-is-how-it-works-when-to-use-it-4o1g</link>
      <guid>https://dev.to/100mslive/hls-101-what-it-is-how-it-works-when-to-use-it-4o1g</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IJ79JkZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3eg1hps3d0ddvbk6f0ud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IJ79JkZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3eg1hps3d0ddvbk6f0ud.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you consume online content (and if you are alive in 2022, you probably do), chances are that you’ve watched quite a few live streams. Be it online classes, sporting events, fitness lessons, or celebrity interactions, live streaming has quickly become the go-to source of learning and entertainment.&lt;/p&gt;

&lt;p&gt;Live stream viewers comprised over a third of all internet users in March and April 2020, with 1 in 10 people in the US and UK streaming live content of their own. Just two years on, almost 82% of internet traffic is expected to be devoted to streaming video by 2022.&lt;/p&gt;

&lt;p&gt;A vast majority of live streaming applications are built on a protocol called HTTP Live Streaming, or HLS. In fact, if you’ve ever watched an Instagram live stream or tuned into the Super Bowl on the NBC Sports App, chances are, you’ve been touched by the magical hands of HLS.&lt;/p&gt;

&lt;p&gt;So if you are looking to build that kind of sophisticated live streaming experience inside your app, this article should give you a comprehensive understanding of the HLS protocol and everything in it.&lt;/p&gt;

&lt;p&gt;Read on to learn the basics of HLS, what it is, how it works, and why it matters for live streamers, broadcasters, and app developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is HLS?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;HLS stands for HTTP Live Streaming. It is a media streaming protocol designed to deliver audio-visual content to viewers over the internet. It facilitates content transportation from media servers to viewer screens — mobile, desktop, tablets, smart TVs, etc.&lt;/p&gt;

&lt;p&gt;Created by Apple, HLS is widely used for distributing live and on-demand media files. For anyone who wants to stream adaptively to Apple devices, HLS is the only option. In fact, if you have an App Store app that offers video content longer than 10 minutes or larger than 5 MB, HLS is mandatory. You also have to provide at least one stream that is 64 Kbps or lower.&lt;/p&gt;

&lt;p&gt;Bear in mind, however, that even though HLS was developed by Apple, it is now the most preferred protocol for distributing video content across platforms, devices, and browsers. HLS enjoys broad support among most streaming and distribution platforms.&lt;/p&gt;

&lt;p&gt;HLS allows you to distribute content and ensure excellent viewing experiences across devices, playback platforms, and network conditions. It is the ideal protocol for streaming video to large audiences scattered across geographies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A little history&lt;/strong&gt;&lt;br&gt;
HLS was originally created by Apple to stream to iOS and Apple TV devices, and to Macs running OS X Snow Leopard and later.&lt;/p&gt;

&lt;p&gt;In the early days of video streaming, the Real-Time Messaging Protocol (RTMP) was the de-facto standard for streaming video over the internet. However, with the emergence of HTML5 players that supported only HTTP-based protocols, RTMP became inadequate for streaming.&lt;/p&gt;

&lt;p&gt;With the rising dominance of mobile and IoT in the last decade, RTMP took a hit due to its inability to support native playback on these platforms. Flash Player had to give ground to HTML5, which resulted in a decline in Flash support across clients. This further contributed to RTMP’s unsuitability for modern video streaming.&lt;/p&gt;

&lt;p&gt;Read More: &lt;a href="https://www.100ms.live/blog/rtmp-vs-webrtc-vs-hls-live-streaming-protocols"&gt;RTMP vs WebRTC vs HLS: Battle of the Live Video Streaming Protocols&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 2009, Apple developed HLS, designed to focus on the quality and reliability of video delivery. It was an ideal solution for streaming video to devices with HTML5 players. Its rising popularity also had much to do with its unique features, listed below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Adaptive Bitrate Streaming&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Embedded closed captions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fast forward and rewind&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Timed metadata&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dynamic Ad insertion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Digital Rights Management (DRM)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How does HLS work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;HLS has become the default way to play video on demand. Here’s how it works: HLS takes one large video and breaks it into smaller segments (short video files), with segment length typically following Apple’s recommendations.&lt;/p&gt;

&lt;p&gt;Here’s an example:&lt;/p&gt;

&lt;p&gt;Let’s say there is a one-hour-long video that has been broken into 10-second segments. You end up with 360 segments. Each segment is a video file ending in .ts. For the most part, they are numbered sequentially, so you end up with a directory that looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XJNjk3Jd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n94gbqqsk7p23oatey0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XJNjk3Jd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n94gbqqsk7p23oatey0s.png" alt="Image description" width="707" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The video player downloads and plays each segment as the user is streaming the video. The size of the segments can be configured to be as low as a couple of seconds. This makes it possible to minimize latency for live buffering use cases. The video player also keeps a cache of these segments in case it loses network connection at some point.&lt;/p&gt;

&lt;p&gt;HLS also allows you to create each video segment at different resolutions/bitrates. Taking the example above, HLS lets you create:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D6KFa8jS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p29d0j8t9mxzlnd84gbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D6KFa8jS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p29d0j8t9mxzlnd84gbg.png" alt="Image description" width="707" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s what the directory looks like now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZB2akarV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzc2162yw6u0hl5ohanx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZB2akarV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzc2162yw6u0hl5ohanx.png" alt="Image description" width="700" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once these segments are created at different bitrates, the video player can actually choose which segments to download and play, depending on the network strength and bandwidth available. That means if you are watching the stream at lower bandwidth, the player picks and plays video segments at 360p. If you have a stronger internet connection, you get the segments at 1080p.&lt;/p&gt;
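&lt;p&gt;That selection logic amounts to picking the highest-bitrate rendition the measured bandwidth can afford. Here is a toy Python sketch of it; the renditions and bitrates are illustrative, not part of the HLS spec:&lt;/p&gt;

```python
# Toy sketch of adaptive bitrate selection: pick the highest rendition
# whose bitrate fits the measured bandwidth. Values are illustrative.
RENDITIONS = {800: "360p", 2800: "720p", 5000: "1080p"}  # Kbps -> label

def pick_rendition(bandwidth_kbps):
    affordable = [b for b in RENDITIONS if bandwidth_kbps >= b]
    # Fall back to the lowest rendition when even that exceeds bandwidth.
    chosen = max(affordable) if affordable else min(RENDITIONS)
    return RENDITIONS[chosen]

print(pick_rendition(1000))   # 360p
print(pick_rendition(3000))   # 720p
print(pick_rendition(10000))  # 1080p
```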

&lt;p&gt;In the real world, that means the video doesn’t get stuck; it simply plays at different quality levels. This is called Adaptive Bitrate Streaming (ABR).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Adaptive Bitrate Streaming means in the real world&lt;/strong&gt;&lt;br&gt;
Imagine you’re streaming the Super Bowl live on your phone (because you just had to drive out of town that day). Just as the Rams are racing towards their winning touchdown, you hit a spot of questionable network in the Nevada desert.&lt;/p&gt;

&lt;p&gt;You’d think that means the livestream would basically stop working because your network strength has dropped. But, thanks to ABR, that wouldn’t be the case.&lt;/p&gt;

&lt;p&gt;Instead of ceasing to work, the stream would simply adjust itself to the current network. Let’s say you were watching the stream at 720p. Now, you’d get the same stream at 240p. That means, even though there is a drop in video quality, you would still be able to see Cooper Kupp make his MVP-winning touchdown. HLS would enable this automatically, simply by adjusting to a lower-quality broadcast to match your network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HLS Streaming Components&lt;/strong&gt;&lt;br&gt;
Three major components facilitate an HLS stream:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Media Server,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Content Delivery Network, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Client-side Video Player&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x-DfAa2R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jc0wnt7qomjg5a3qxkro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x-DfAa2R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jc0wnt7qomjg5a3qxkro.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HLS Server (Media Server)&lt;/strong&gt;&lt;br&gt;
Once audio/video has been captured by input devices like cameras and microphones, it is encoded into a format that video players can translate and utilize: H.264 for video and AAC or MP3 for audio.&lt;/p&gt;

&lt;p&gt;The video is then sent to the HLS server (sometimes called the HLS streaming server) for processing. The server performs all the functions we’ve mentioned — segmenting video files, adapting segments for different bitrates, and packaging files into a certain sequence. It also creates index files that carry data about the segments and their playback sequence. This is information the video player will need to play the video content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content Delivery Network (CDN)&lt;/strong&gt;&lt;br&gt;
With the volume of video content to store, queue, and process, a single video server responding to requests from multiple devices would likely experience immense stress, slow down, and possibly crash. This is prevented by using Content Delivery Networks (CDNs).&lt;/p&gt;

&lt;p&gt;A CDN is a network of interconnected servers placed across the world. The main criterion for distributing cached content (video segments in this case) is the closeness of the server to the end-user. Here’s how it works:&lt;/p&gt;

&lt;p&gt;A viewer presses the play button, and their device requests the content. The request is routed to the closest server in the CDN. If this is the first time that particular video segment has been requested, the CDN will push the request to the origin server where the original segments are stored. The origin server responds by sending the requested file to the CDN server.&lt;/p&gt;

&lt;p&gt;Now, the CDN server will not only send the requested file to the viewer but also cache a copy of it locally. When other viewers (or even the same one) request the same video, the request no longer goes to the origin server. The cached files are sent from the local CDN server.&lt;/p&gt;
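&lt;p&gt;The caching flow above can be sketched in a few lines. This is a toy Python model, not a real CDN implementation; the segment name and contents are illustrative:&lt;/p&gt;

```python
# Toy model of the edge-cache flow: the first request for a segment is
# pulled from the origin; subsequent requests are served from the edge
# server's local cache. Names and contents are illustrative.
ORIGIN_STORAGE = {"segment0.ts": b"video-bytes"}  # origin server storage

class EdgeServer:
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}
        self.origin_hits = 0

    def get(self, name):
        if name not in self.cache:        # cache miss: ask the origin
            self.cache[name] = self.origin[name]
            self.origin_hits += 1
        return self.cache[name]           # cache hit: serve locally

edge = EdgeServer(ORIGIN_STORAGE)
edge.get("segment0.ts")  # first viewer: origin is contacted
edge.get("segment0.ts")  # second viewer: served from the edge cache
print(edge.origin_hits)  # 1
```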

&lt;p&gt;CDN servers are spread across the globe. This means requests for content do not have to travel across countries and continents to the origin server every time someone wants to watch a show.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTML5 Player&lt;/strong&gt;&lt;br&gt;
To view the video files, end-users need an HTML5 player on a compatible device. Ever since Adobe Flash passed into the tech graveyard, HLS has become the default delivery protocol. Getting a compatible player won’t be a challenge since most browsers and devices support HLS by default.&lt;/p&gt;

&lt;p&gt;However, HLS does provide advanced features which some players may not support. For example, certain video players may not support captions, DRM, ad injection, thumbnail previews, and the like. If these features are important to you, make sure whichever player you choose supports them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Resolving Latency Issues in HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Apple recommended batches of 10-second segments until 2016. That particular spec required loading three video segments before the player could start the video. With 10-second segments, content distributors therefore suffered roughly 30 seconds of latency before playback could begin. Apple did eventually cut the duration down to 6 seconds, but that still left streamers and broadcasters with noticeable latency. Ever since then, reducing segment size has been a popular way to drive down latency. By ‘tuning’ HLS with shorter chunks, you can accelerate download times, which speeds up the whole pipeline.&lt;/p&gt;
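&lt;p&gt;The arithmetic is straightforward: startup latency is roughly the number of buffered segments times the segment length. A quick Python sketch:&lt;/p&gt;

```python
# Back-of-the-envelope HLS startup latency: players historically
# buffered about three segments before starting playback, so latency
# scales with segment length.
BUFFERED_SEGMENTS = 3

def startup_latency(segment_length_s):
    return BUFFERED_SEGMENTS * segment_length_s

print(startup_latency(10))  # 30 -- the pre-2016 10-second segments
print(startup_latency(6))   # 18 -- after Apple cut segments to 6 seconds
print(startup_latency(2))   # 6  -- 'tuned' HLS with shorter chunks
```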

&lt;p&gt;In 2019, Apple released its own extension of HLS called Low Latency HLS (LL-HLS), often referred to as Apple Low Latency HLS (ALHLS). The new standard not only came with significantly lower latency but was also compatible with Apple devices. Naturally, this made LL-HLS a massive success, and it has been widely adopted across platforms and devices.&lt;/p&gt;

&lt;p&gt;LL-HLS comes with two major changes to its spec which are largely responsible for reducing latency. One is to divide the segments into parts and deliver them as soon as they’re available. The other is to ensure that the player has data about the upcoming segments even before they are loaded.&lt;/p&gt;
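&lt;p&gt;In playlist terms, these two changes surface as partial-segment and preload-hint tags. An illustrative excerpt (the segment names and durations are made up):&lt;/p&gt;

```text
#EXTM3U
#EXT-X-TARGETDURATION:4
#EXT-X-PART-INF:PART-TARGET=1.0
#EXTINF:4.0,
segment270.ts
#EXT-X-PART:DURATION=1.0,URI="segment271.part0.ts"
#EXT-X-PART:DURATION=1.0,URI="segment271.part1.ts"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment271.part2.ts"
```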

&lt;p&gt;A detailed breakdown of LL-HLS is beyond the scope of this article. However, you can find a structured deep-dive into the protocol in this Introduction to Low Latency Streaming with HLS.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;When to use HLS Streaming&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When delivering high-resolution videos larger than 3 MB: without HLS, viewing such content usually leads to a sub-par user experience, especially when the user is on an average internet or mobile connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When broadcasting live video from one to millions to reach the broadest audience possible: not only is HLS supported by most browsers and operating systems, but it also offers ABR, which allows content to be viewed at different network speeds (cellular, 3G, 4G, LTE, low-speed Wi-Fi, high-speed Wi-Fi).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When reducing overall costs: HLS reduces CDN costs by delivering video at the optimal bitrate to viewers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When using advanced features in your stream: with HLS, you can leverage ad insertion, DRM, closed captions, adaptive bitrate, and much more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you expect your audience to use Apple devices: HLS enjoys support across devices, but Apple devices favor HLS over MPEG-DASH and other alternatives. Additionally, App Store apps with videos longer than 10 minutes are required to use HLS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you are concerned with security: HLS powers video on demand with encryption (DRM), which helps reduce and avoid piracy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Major Advantages of HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Transcoding for Adaptive Bitrate Streaming&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ve already explained what ABR is in a previous section. Transcoding here refers to altering video content from one bitrate to another. In the above example, video segments are converted to 1080p, 720p, and 360p from a single, high-resolution stream.&lt;/p&gt;

&lt;p&gt;In the HLS workflow, the video travels from the origin server to a streaming server with a transcoder. The transcoder creates multiple versions of each video segment at different bitrates. The video player picks which version works best with the end-user’s internet and delivers the video accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delivery and Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With HLS, it is much easier to broadcast live video from one to millions. This is because most browsers and OSes support HLS.&lt;/p&gt;

&lt;p&gt;Since HLS can use web servers in a CDN to push media, the digital load is distributed among HTTP server networks. This makes it easy to cache audio-video chunks, which can be delivered to viewers across all locations. As long as they are close to a web server, they can receive video content.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Does 100ms support HLS for live streaming?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;100ms supports live streaming output via HLS. However, we use WebRTC as input for the stream, unlike services that offer the infrastructure for live streaming alone, which generally use RTMP.&lt;/p&gt;

&lt;p&gt;The 100ms live streaming stack combines streaming with conferencing, enabling our customers to build more engaging live streams that support multiple broadcasters, interactivity between broadcaster and viewer, and easy streaming from mobile devices. Since the entire audio/video SDK is packaged into one product, broadcasters can freely toggle between HLS streams and WebRTC, thus allowing two-way interaction while live streaming.&lt;/p&gt;

</description>
      <category>hls</category>
      <category>livestreaming</category>
      <category>developers</category>
      <category>videostreaming</category>
    </item>
    <item>
      <title>Introduction to Low Latency Streaming with HLS</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 12:19:01 +0000</pubDate>
      <link>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-odh</link>
      <guid>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-odh</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O9MZ4813--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/382c2zhmwbnoiw09ahvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O9MZ4813--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/382c2zhmwbnoiw09ahvm.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether it’s a World Cup match, the Super Bowl, or the French Open finals, watching it with your friends on a Saturday night is #goals. Sadly, not all of us can get tickets and travel across cities, countries, or continents to attend them. Thankfully, live streaming makes it possible to watch all the action, close to real-time.&lt;/p&gt;

&lt;p&gt;But, the only question is “how close to real-time are we talking?”&lt;/p&gt;

&lt;p&gt;Video streaming is largely facilitated on the back of a video protocol called HLS (HTTP Live Streaming). While the origins and fundamentals of HLS are explained in another piece on our blog, the current piece will focus on how HLS resolved one of its greatest shortcomings: latency.&lt;/p&gt;

&lt;p&gt;To start with, let’s take a quick peek at how HLS works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Way of the HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will first try to understand how HLS works and makes live streaming possible. This is what the typical flow of an HLS streaming system looks like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The audio/video stream captured by input devices is encoded and ingested into a media server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The media server transcodes the stream into an HLS-compatible format with multiple ABR variants and also creates a playlist file to be used by the video players.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The media server then serves the media and the playlist file to the clients, either directly or via CDNs, by acting as an origin server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the client end, the players use the playlist file to navigate through the video segments. These segments are typically “slices” of the video being generated, each with a definite duration (called segment size, usually 2 to 6 seconds).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The playlist is refreshed based on segment size, and players can select the segments specified in it, based on the order of playback and the video quality they require.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FcVNzuUN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dui54vj45au9uxy2t50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FcVNzuUN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dui54vj45au9uxy2t50.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though HLS offers a reliable way of streaming video, its high latency can pose obstacles for many streamers and video distributors. According to the initial specification, a player should load the media files in advance before playing them. This makes HLS an inherently high-latency protocol, with a latency of about 30 to 60 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tuning HLS for Low Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Everyone was interested in implementing HLS, but the high latency was a serious roadblock. So devs and enthusiasts started finding workarounds to reduce latency and refine the protocol for effective usage. Some of these practices produced such positive results that they became a de facto standard alongside the HLS specification. Two of these practices are listed below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reducing the default segment size&lt;/strong&gt;&lt;br&gt;
When Apple introduced HLS, the typical segment size was 10 seconds. Most HLS implementers found this too long, so Apple reduced the recommendation to 6 seconds. The overall latency can be reduced by shrinking both the segment size and the buffer size of the player.&lt;/p&gt;

&lt;p&gt;However, this approach carries trade-offs, including an increased overall bitrate and buffering or jitter on devices with poor network conditions. The ideal segment size depends on the target audience and usually falls in the range of 2 to 4 seconds.&lt;/p&gt;
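&lt;p&gt;The arithmetic behind this tuning is simple enough to sketch. Assuming the common rule of thumb that a player buffers about three segments before starting playback, plus an illustrative fixed overhead for encoding and CDN delivery (the numbers here are assumptions, not measurements):&lt;/p&gt;

```python
# Back-of-the-envelope HLS latency estimate. Players commonly buffer
# around three segments before starting playback; the overhead value
# (encode + CDN) is an illustrative assumption, not a measurement.

def estimate_latency(segment_size_s, buffered_segments=3, overhead_s=2.0):
    return buffered_segments * segment_size_s + overhead_s

for seg in (10, 6, 2):
    print(seg, "s segments ->", estimate_latency(seg), "s latency")
```

&lt;p&gt;Going from 10-second to 2-second segments cuts the estimate from roughly 32 seconds to roughly 8 seconds, which matches the broad latency ranges discussed above.&lt;/p&gt;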

&lt;p&gt;&lt;strong&gt;Media Ingest with faster protocols&lt;/strong&gt;&lt;br&gt;
The main reason HLS is used for live streaming is the scalability, reliability and player compatibility it provides across all platforms, especially when compared to other protocols. This has made HLS irreplaceable for video delivery so far.&lt;/p&gt;

&lt;p&gt;But the first mile contribution (also known as ingest) from the HLS stack can be replaced with lower latency protocols to reduce overall latency.&lt;/p&gt;

&lt;p&gt;The HLS ingest is usually replaced by RTMP ingest, which enjoys wide support for encoders/services and has proved to be a cost-effective solution. The stream ingested with RTMP is then transcoded to support HLS with the help of a media server before serving the content. Even though there have been experiments with other protocols such as WebRTC, SRT for the ingest part, RTMP remains the most popular option.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Evolution of HLS to LL-HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The latency in HLS started posing a significant hurdle, leading to less than stellar user experiences. This was becoming more frequent since HLS was being widely adopted around the world. Tuning HLS wasn’t enough and everyone was looking for better and more sustainable solutions.&lt;/p&gt;

&lt;p&gt;It was in 2016 that Twitter’s Periscope engineering team made some major changes to their implementation in order to achieve low latency with HLS. This proprietary version of HLS, often referred to as LHLS, offered latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;DASH, the main competitor to HLS, came up with a low latency solution based on chunked CMAF in 2017, following which a community-based low latency HLS solution was drafted in 2018. This variant was heavily inspired by Periscope’s LHLS and leveraged Chunked Transfer Encoding (CTE) to reduce latency. It is often referred to as Community Low Latency HLS (CL-HLS).&lt;/p&gt;

&lt;p&gt;While this version of HLS was gaining popularity, Apple decided to release their own extension of the protocol called Low Latency HLS (LL-HLS) in 2019. This is often referred to as Apple Low Latency HLS (ALHLS). This version of HLS offered low latency comparable to the CL-HLS and promised compatibility with Apple devices. Since then, LL-HLS has been merged into the HLS specification and has technically become a single protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How LL-HLS reduces Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we’ll explore the changes LL-HLS brings to HLS, making low latency streaming possible. The protocol came with two main changes to the spec, responsible for its low latency nature. One is to divide the segments into parts and deliver them as soon as they’re available. The other is to tell the player what data to load next before that data is even available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dividing Segments into Parts&lt;/strong&gt;&lt;br&gt;
The video segments are further divided into parts (similar to chunks used in CMAF). These parts are just “smaller segments” with a definite duration — represented with EXT-X-PART tag in the media playlist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ov0Hi5-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll2cr5r381bmg3ekqvqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ov0Hi5-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll2cr5r381bmg3ekqvqd.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because parts are published while the segment is still being generated, players can fill up their buffers more efficiently. Reducing the buffer size on the player side using this approach results in reduced latency. Once a segment is complete, its parts are collectively replaced by the full segment, which remains available for a longer period of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Preload Hints&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When LL-HLS was first introduced, it had HTTP/2 push specified as a requirement on the server side for sending new data to clients. Many commercial CDN providers were not supporting this feature at the time, which resulted in a lot of confusion.&lt;/p&gt;

&lt;p&gt;This issue was addressed by Apple in a subsequent update, replacing the HTTP/2 push with preload hints. They decided to include support for preload hints by adding a new tag EXT-X-PRELOAD-HINT to the playlist, reducing overhead.&lt;/p&gt;

&lt;p&gt;With the help of a preload hint, a video player can anticipate the data to be loaded next and request the URI from the hint to gain faster access to the next part. The server holds (blocks) requests for preload-hint data and responds as soon as the data becomes available, thus reducing latency.&lt;/p&gt;
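&lt;p&gt;Putting the two spec changes together, here is a minimal sketch of how a player might extract part URIs and the preload hint from an LL-HLS media playlist. The tag names come from the LL-HLS spec; the URIs and the simplified attribute parsing are our own illustration.&lt;/p&gt;

```python
# Sketch: pull part URIs and the preload hint out of an LL-HLS media
# playlist so a player knows what to request next. Attribute parsing is
# deliberately simplified; the URIs below are made up.

def parse_ll_tags(playlist_text):
    parts, hint = [], None
    for line in playlist_text.splitlines():
        if line.startswith("#EXT-X-PART:"):
            attrs = dict(kv.split("=", 1) for kv in line.split(":", 1)[1].split(","))
            parts.append(attrs["URI"].strip('"'))
        elif line.startswith("#EXT-X-PRELOAD-HINT:"):
            attrs = dict(kv.split("=", 1) for kv in line.split(":", 1)[1].split(","))
            hint = attrs["URI"].strip('"')
    return parts, hint

playlist = """#EXTM3U
#EXT-X-PART:DURATION=0.2,URI="segC.part1.mp4"
#EXT-X-PART:DURATION=0.2,URI="segC.part2.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segC.part3.mp4"
"""

print(parse_ll_tags(playlist))
```

&lt;p&gt;The player plays the listed parts in order and immediately requests the hinted URI, which the server answers the moment that part exists.&lt;/p&gt;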

&lt;p&gt;&lt;strong&gt;A look at the LL-HLS Media Playlist&lt;/strong&gt;&lt;br&gt;
Now, let’s take a look at how these tags are specified in the media playlist file, using an example. We will assume the segment size to be 6 seconds and the part size to be 200 milliseconds. We will also assume that 2 segments (segment A and B) have been completely played, while the 3rd segment (segment C) is still being generated. This segment is being published as a list of parts in the order of playback because it has not yet been completed.&lt;/p&gt;

&lt;p&gt;The following is a sample media playlist (M3U8 file).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kwjoXPJX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81dhz1ap95iudc2kjyd0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kwjoXPJX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81dhz1ap95iudc2kjyd0.PNG" alt="Image description" width="707" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Players that don’t support LL-HLS yet simply ignore tags like EXT-X-PART and EXT-X-PRELOAD-HINT, which lets them treat the playlist as traditional HLS and load full segments at a higher latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Low-Latency HLS on non-Apple devices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The new and improved HLS has a latency of about 3 seconds or less. The only reasonable competition for this protocol is LL-DASH. But Apple does not support DASH on any of its devices. This makes LL-HLS the only low latency live streaming protocol with wide client-side support, including Apple devices.&lt;/p&gt;

&lt;p&gt;One of the main advantages of using LL-HLS is its backward compatibility with legacy HLS players. Players that don’t support this variant can fall back to standard HLS and still work, at higher latency. However, because the protocol requires players to start loading unfinished media segments instead of waiting until they become fully available, the changes in the spec made it difficult for all players to adopt it quickly.&lt;/p&gt;

&lt;p&gt;It took a while for most non-Apple platforms to start supporting LL-HLS. It is now widely supported across almost all platforms in relatively recent player versions. Some players have had support planned since the protocol’s inception, but most implementations are newer and are still improving their compatibility.&lt;/p&gt;

&lt;p&gt;Here are some popular players from different platforms that support LL-HLS in its entirety:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AVPlayer (iOS)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exoplayer (Android)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;THEOPlayer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JWPlayer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HLS.js&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VideoJS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AgnoPlay&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Comparing LL-HLS, LL-DASH and WebRTC&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here, we compare the three protocols (LL-HLS, LL-DASH and WebRTC) on six parameters: compatibility, delivery method, support for ABR, security, latency, and best use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compatibility&lt;/strong&gt;&lt;br&gt;
LL-HLS provides good support for all Apple devices and browsers. It has been gaining support for most non-Apple devices.&lt;br&gt;
LL-DASH supports most non-Apple devices and browsers but is not supported on any Apple device.&lt;br&gt;
WebRTC is supported across all popular browsers and platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delivery Method&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, let’s go through a few relevant terms used with CMAF.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chunked Encoding (CE)&lt;/strong&gt; is a technique used for making publishable “chunks”. When added together, these chunks create a video segment. Chunks have a set duration and are the smallest unit that can be published.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chunked Transfer Encoding (CTE)&lt;/strong&gt; is a technique used to deliver the “chunks” as they are created in a sequential order. With CTE, one request for a segment is enough to receive all its chunks. The transmission ends once a zero-length chunk is sent. This method allows even small chunks to be used for transfer.&lt;/p&gt;
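&lt;p&gt;The framing CTE uses is easy to sketch: each chunk is sent prefixed with its length in hexadecimal, and a zero-length chunk marks the end of the transfer. The chunk contents below are placeholders:&lt;/p&gt;

```python
# Sketch of Chunked Transfer Encoding's framing: the sender emits each
# chunk as soon as it exists and ends the response with a zero-length
# chunk. Chunk contents here are placeholder bytes.

def send_chunked(chunks):
    """Yield CTE-framed chunks: hex length, CRLF, data, CRLF; 0 ends it."""
    for chunk in chunks:
        yield b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    yield b"0\r\n\r\n"  # zero-length chunk terminates the transfer

frames = list(send_chunked([b"chunk-1", b"chunk-2"]))
print(frames[-1])  # b'0\r\n\r\n'
```

&lt;p&gt;This is why one request per segment suffices with CTE: the connection stays open and the receiver keeps reading framed chunks until the terminating zero-length chunk arrives.&lt;/p&gt;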

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;LL-HLS uses Chunked Encoding to create “parts” or “chunks” of a segment. But, instead of using Chunked Transfer Encoding, this protocol uses its own method of delivering chunks over TCP. The client has to make a request for every single part, instead of just requesting the whole segment and receiving it in parts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LL-DASH uses Chunked Encoding for creating chunks and Chunked Transfer Encoding for delivering them over TCP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WebRTC uses the Real-time Transport Protocol (RTP) for sending video and audio streams over UDP.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Support for Adaptive Bitrate (ABR)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Adaptive Bitrate (ABR) is a technique for dynamically adjusting the compression level and video quality of a stream to match bandwidth availability. It heavily impacts the video streaming experience for the viewer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;LL-HLS has support for ABR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LL-DASH has support for ABR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WebRTC doesn’t support ABR. But, a similar technique called Simulcast is used for dynamically adjusting video quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
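&lt;p&gt;The core of an ABR decision can be sketched in a few lines: pick the highest-bitrate variant that fits under the measured throughput, with some headroom. The ladder and headroom values below are illustrative assumptions:&lt;/p&gt;

```python
# Sketch of the core ABR decision: given a bitrate ladder (from the
# master playlist) and the measured throughput, pick the highest variant
# that fits, leaving some headroom. All values are illustrative.

def pick_variant(ladder_kbps, measured_kbps, headroom=0.8):
    usable = measured_kbps * headroom
    candidates = [b for b in ladder_kbps if b <= usable]
    return max(candidates) if candidates else min(ladder_kbps)

ladder = [400, 1200, 2500, 5000]  # kbps variants
print(pick_variant(ladder, 3000))  # 2400 kbps usable -> 1200
print(pick_variant(ladder, 200))   # too slow -> lowest rung, 400
```

&lt;p&gt;The player re-runs this decision as throughput estimates change, switching variants at segment (or part) boundaries.&lt;/p&gt;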

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH support media encryption and benefit from security features such as token authentication and digital rights management (DRM).&lt;/p&gt;

&lt;p&gt;WebRTC supports end-to-end encryption of media in transit, along with user, file, and round-trip authentication. This is often sufficient for DRM purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH have a latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;WebRTC, on the other hand, has a sub-second latency of ~500 milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Use Case&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH are best suited for live streaming events that need to be delivered to millions of viewers. They are often used for streaming sporting events live.&lt;/p&gt;

&lt;p&gt;WebRTC is very frequently used for solutions such as video conferencing that require minimal latency and are not expected to scale to very large audiences.&lt;/p&gt;

&lt;p&gt;Now that HLS supports low latency streaming, it is all set to conquer the video streaming space, ready to serve millions of fans watching their favourite team play a crucial match without any issues. Whether you want to start live streaming yourself or build an app that facilitates live streaming, LL-HLS remains your best friend.&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>videostreaming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Roles on 100ms: Mapping real-world interactions to live video with a few clicks</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 07:02:00 +0000</pubDate>
      <link>https://dev.to/100mslive/roles-on-100ms-mapping-real-world-interactions-to-live-video-with-a-few-clicks-461o</link>
      <guid>https://dev.to/100mslive/roles-on-100ms-mapping-real-world-interactions-to-live-video-with-a-few-clicks-461o</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AeW9YNva--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zd686pxudggywl8xj1zd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AeW9YNva--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zd686pxudggywl8xj1zd.jpg" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this world's stage, we all have a role to play. Turns out, the same is true for the world of live, interactive video applications.&lt;/p&gt;

&lt;p&gt;If you're building a live video app, a meeting room will contain people who will need to perform different functions. For example, in a virtual classroom, the teacher is able to display their video and audio, as well as share their screen. Depending on the app, they can also display a student's screen to the rest of the class, allow the student to address the class, and more.&lt;/p&gt;

&lt;p&gt;All this while, the student may only be able to watch and listen to the teacher's video. They may only be able to share their screen or speak when the teacher allows them to, reducing interruptions or chaos in an ongoing class - especially if the class is large.&lt;/p&gt;

&lt;p&gt;However, building these permissions (what participants can and cannot do within meetings) is difficult because they usually require some coding and implementation effort to set up. The more nuanced and varied the permissions in an app, the more effort devs have to expend to ensure that the final app offers the exact features required by end-users.&lt;/p&gt;

&lt;p&gt;At 100ms, we call these permissions "roles". The teacher's role (in the above example) is to share audio/video, share their screen, allow students to ask questions or address the class, and more. The student's is to view the teacher's video, listen to their audio and perhaps share their screen, ask questions, and speak to the class - when the teacher enables them to.&lt;/p&gt;

&lt;p&gt;This article will delve into what these "roles" are, how they make your life easier, and how they are an improvement upon the industry standard "publish/subscribe" logic.&lt;/p&gt;

&lt;p&gt;But, let's start with the obvious question.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is a role?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before answering this question, we want to lay out 100ms' mission: bring the world closer by enabling real, life-like, live conversations virtually. We want our customers to be able to offer their users an online interactive experience that is as close to the real-world as possible.&lt;/p&gt;

&lt;p&gt;This is where roles come in. They allow the easy recreation of real-life interactions on video, as this article will demonstrate with an example. In 100ms terminology, &lt;a href="https://www.100ms.live/docs/flutter/v2/foundation/templates-and-roles"&gt;a role is a collection of permissions&lt;/a&gt; that allows users to perform certain tasks while being part of the meeting room. Essentially, the role determines whether a user in the room has publish/subscribe permissions. It determines whether they can speak, mute other users, be muted, share their screen, etc.&lt;/p&gt;

&lt;p&gt;Before moving on, let us understand roles better by diving into their component features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publishing rights:&lt;/strong&gt; The term "publish" here refers to a user's ability to share audio, video and if needed, their screen, when in a video call. The user's particular role decides whether they can share their audio/video/screen to the room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subscribing rights:&lt;/strong&gt; The term "subscribe" here refers to a user's ability to view and listen to the video and audio being shared ("published") by others in the room. Depending on the use-case, they may only be able to subscribe to one person's (host) audio-video or to multiple users' streams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Permissions and Power:&lt;/strong&gt; Developers and specific users of a video app should be able to configure and manipulate roles (others' and their own). This would let them perform actions such as muting/unmuting others in the room, letting them share their screen or expelling them from the meeting altogether.&lt;/p&gt;
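&lt;p&gt;A minimal sketch of this "role as a collection of permissions" idea, modelling the three buckets above with the virtual-classroom example. This is a conceptual model only, not the 100ms SDK's actual API or schema:&lt;/p&gt;

```python
# Conceptual sketch of a role: publishing rights, subscribing rights,
# and permissions/power. This models the idea only; it is NOT the
# 100ms SDK's actual API or schema.

from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    publish: set = field(default_factory=set)       # tracks this role may share
    subscribe_to: set = field(default_factory=set)  # roles it can watch/listen to
    can_mute_others: bool = False
    can_change_roles: bool = False
    can_remove_peers: bool = False

teacher = Role("teacher", publish={"audio", "video", "screen"},
               subscribe_to={"student"}, can_mute_others=True,
               can_change_roles=True, can_remove_peers=True)
student = Role("student", subscribe_to={"teacher"})

print("teacher" in student.subscribe_to)  # True: students watch the teacher
```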

&lt;h2&gt;
  
  
  &lt;strong&gt;How Roles Work: An Example&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s take a closer look at roles through a simple example: a virtual events app for online concerts. The real-world experience this app is trying to replicate is denoted by the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jgvAEF8d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sv3sysu4od09mq7anfv1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jgvAEF8d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sv3sysu4od09mq7anfv1.jpeg" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built with 100ms, this hypothetical app has three roles in place: “stage” (where the artist performs), “audience” (where the viewers watch the performance online) and “backstage” (where the person/people handling tech/logistics keep the show going as expected).&lt;/p&gt;

&lt;p&gt;And, these roles will have the following permissions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PeQimMAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i8ci8fv645srvoaz9t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PeQimMAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i8ci8fv645srvoaz9t7.png" alt="Image description" width="880" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those in the “stage” role can sing (publish their audio-video) and invite audience members onto the stage to interact with them. Those in the “audience” role can only view the artist’s stream, unless they have been invited to the “stage”. Those backstage can interact with others in the same role, kick out audience members if required, change the lighting in the artist’s video and, of course, view the artist on “stage”.&lt;/p&gt;

&lt;p&gt;A real-world app would have more roles and each role would have more permissions. But this is what roles fundamentally do.&lt;/p&gt;

&lt;p&gt;Now, be it Meet, Twitch, or your cousin's school app, the concept of roles exists within all video SDKs. For example, these are the default roles in a typical webinar setup: a host (sometimes more than one, depending on the app) and multiple participants. The host publishes their audio-video streams; the participants subscribe to the host, and sometimes to other participants. The host can share their screen, and so can the participants, but only if allowed by the host.&lt;/p&gt;

&lt;p&gt;However, most scenarios need more than two roles to recreate real-world experiences. If you wanted more roles with varied permissions, you’d have to code them from scratch.&lt;/p&gt;

&lt;p&gt;So, what’s the solution? How do we make the process of creating varied roles easier?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The answer: custom roles on 100ms.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why are custom roles required?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As mentioned above, if you want to give users a nuanced, layered experience that matches their day-to-day life, two or three simple roles are not enough. Let’s take another example for this: a video consultation app for doctors and patients.&lt;/p&gt;

&lt;p&gt;In the real world, patients are greeted by a nurse/receptionist who takes their information, they wait in the waiting room, and when the doctor is ready, they are called into the consultation room. With roles limited to host and audience, you can’t do this virtually. You would have to create custom roles for “waiting room”, “nurse” and the like, which is usually time, effort and resource-intensive.&lt;/p&gt;

&lt;p&gt;However, using &lt;a href="https://www.100ms.live/"&gt;100ms&lt;/a&gt;, you can create custom roles with far greater ease. Owing to our built-in customizability and extensibility, you can actually build such a consultation app with zero lines of code. This is demonstrated step-by-step below.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Custom roles in action&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the aforementioned video consultation app, we can customize a user’s journey by modifying who they are publishing to and subscribing to, i.e. whose audio-video they can watch/listen to, and vice-versa.&lt;/p&gt;

&lt;p&gt;Here’s what the patient’s journey looks like: when a patient enters, they initially communicate with a nurse in a virtual “waiting room” where their information (name, gender, DOB, temperature, weight, symptoms) are noted by the nurse. After this, they wait until the doctor is able to communicate with them.&lt;/p&gt;

&lt;p&gt;In the language of roles, they will initially publish and subscribe to someone in the “nurse” role. Then, their role will be changed so that they publish and subscribe to someone in the “doctor” role.&lt;/p&gt;

&lt;p&gt;Using 100ms, devs can map out this exact journey without writing a single line of code—just a few clicks on the dashboard.&lt;/p&gt;

&lt;p&gt;To demonstrate roles in action, let’s put them to a test and visualize a user’s journey to the online “clinic”.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Using Roles to recreate a Patient’s Consultation Experience Online&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Go to the 100ms dashboard by creating a new account for free.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Create New. There are multiple templates to choose from, depending on your use case. For a simple video conferencing app, select the ‘video conferencing’ template. But we’ll go with something else for our app.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K9YvjvBY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7u76ydprsvn66njg2wr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K9YvjvBY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7u76ydprsvn66njg2wr.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the ‘Create Custom App’ option.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZmvXOGFu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfo7h4g9thudf00jzfyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZmvXOGFu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfo7h4g9thudf00jzfyn.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This will let us access the ‘Create Roles’ option so that we can customize roles for our use case.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gOmH89tx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tc8kfw5dn3b1kwdep77r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gOmH89tx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tc8kfw5dn3b1kwdep77r.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For a clinic app, we require 4 roles: consultation-admin, consultation-area, reception-admin, reception-area (details of each role explained below). Click ‘Add a Role’ and name them accordingly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8faBELrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7pdmwnqob5ux7hhoci0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8faBELrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7pdmwnqob5ux7hhoci0.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As soon as a new person enters the clinic, they’ll be assigned the “reception-area” role. When the doctor is ready, the role will be changed to “consultation-area”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VocfNpLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt6jbq6d4ew1cjyi27vo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VocfNpLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt6jbq6d4ew1cjyi27vo.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, we have created the four roles we require: consultation-admin, consultation-area, reception-admin, and reception-area.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZCPmUeNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hmysgmjzqm9sdioaib6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZCPmUeNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hmysgmjzqm9sdioaib6.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Handling Permissions for each Role&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;100ms enables users to quickly and easily modify the permissions of different roles and subscription strategies, right from the dashboard.&lt;/p&gt;

&lt;p&gt;In the example, here’s what each role entails:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;reception-area:&lt;/strong&gt; The role assigned to the patient when they first enter the online clinic. This role can publish and subscribe only to the “reception-admin” role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;reception-admin:&lt;/strong&gt; The role assigned to the “nurse” who greets the patient and takes their info. This role can publish and subscribe to the other three roles. They can also change a patient’s role from “reception-area” to “consultation-area”. If needed, they can remove the person in the “reception-area” role from the meeting room entirely.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;consultation-area:&lt;/strong&gt; The role assigned to the patient when the doctor is ready to see them. The nurse in the “reception-admin” role changes the patient’s role from “reception-area” to “consultation-area” when the doctor is ready.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;consultation-admin:&lt;/strong&gt; The role assigned to the doctor. This role can publish and subscribe to the “consultation-area” and “reception-admin” roles. If needed, they can expel the person in the “consultation-area” role completely.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
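&lt;p&gt;The four roles above can be summarised as plain data. This mirrors the article’s description; it is illustrative only, not the configuration the 100ms dashboard actually generates:&lt;/p&gt;

```python
# The four clinic roles, expressed as a plain data structure that mirrors
# the description above. Illustrative only; not 100ms's real config format.

clinic_roles = {
    "reception-area": {
        "subscribe_to": ["reception-admin"],
        "can_change_roles": False, "can_remove_peers": False,
    },
    "reception-admin": {
        "subscribe_to": ["reception-area", "consultation-area", "consultation-admin"],
        "can_change_roles": True, "can_remove_peers": True,
    },
    "consultation-area": {
        "subscribe_to": ["consultation-admin"],
        "can_change_roles": False, "can_remove_peers": False,
    },
    "consultation-admin": {
        "subscribe_to": ["consultation-area", "reception-admin"],
        "can_change_roles": False, "can_remove_peers": True,
    },
}

# The nurse "moving" a patient from the waiting room to the doctor is
# nothing more than a role change:
def change_role(peers, peer_id, new_role):
    peers[peer_id] = new_role
    return peers

peers = {"patient-1": "reception-area"}
print(change_role(peers, "patient-1", "consultation-area"))
```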

&lt;p&gt;On the 100ms dashboard, we can assign mute/unmute, screenshare, and publish/subscribe permissions to each of these roles with a couple of clicks. We can even give certain roles the ability to change other users’ roles or remove somebody from the meeting room entirely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qf7jXlMB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0w1w8ryq8r3tluogq11.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qf7jXlMB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0w1w8ryq8r3tluogq11.jpeg" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To go back to our example, we start by modifying the nurse role.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As mentioned, when a person walks into a clinic they will automatically be assigned the “reception-area” role. There, someone in the “reception-admin” role greets them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The reception-admin role should be subscribed to the person in the “reception-area” role, and also have permissions to modify the “reception-area” roles, or even remove them if necessary.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pe71DJ9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvj5o5jt53ig32ltbgck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pe71DJ9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvj5o5jt53ig32ltbgck.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A person in the “reception-area” role should be subscribed to the “reception-admin” role, but they will not have permission to modify their roles or add/remove them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sfGz8UR4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6ndzct15aklxstymrdj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sfGz8UR4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6ndzct15aklxstymrdj.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This will now serve as a waiting room/reception area where the nurse connects with the incoming patients.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, we assign permissions to the “consultation-admin” role. Since the “consultation-admin” can call in a patient, they need to have administrative permissions to modify user roles, mute/unmute them, share their own screen, and the like.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0PtOFxX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkjdqs5dg9c9o0s95i6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0PtOFxX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkjdqs5dg9c9o0s95i6v.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The “consultation-admin” will also be subscribed to the “consultation-area” role.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QG_xG_23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgrums1g9400pn4bback.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QG_xG_23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgrums1g9400pn4bback.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lastly, we modify the “consultation-area” role. They will be subscribed to the “consultation-admin” role to enable consultation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that done, we have successfully modified user permissions for our use case. Now, we implement the app using nothing but the power of roles and customization provided on the 100ms dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jGbvEwbB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ht8vx180jv7iht6dpfrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jGbvEwbB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ht8vx180jv7iht6dpfrv.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;And just a note, we have been able to do all this without writing a single line of code!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The last step now is to pick a domain. Let’s go ahead with “hospital.app.100ms.live” as the subdomain and click on ‘Set up app’.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;100ms enables you to have a completely personalized subdomain for your app—for free. You can easily host these powerful video templates on your own domain URL with a click.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JbVSAhEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hhh5sae5eze2v310vkgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JbVSAhEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hhh5sae5eze2v310vkgw.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The app is now ready to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Testing the App&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With that done, we are ready to test out the app.&lt;/p&gt;

&lt;p&gt;Here’s what our newly set up telehealth app looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/webB4efxxyg"&gt;https://youtu.be/webB4efxxyg&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TunobRvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t58w1u9wtqw718zl81xr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TunobRvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t58w1u9wtqw718zl81xr.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s it! 🚀&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have successfully implemented a clinic-like experience digitally using nothing but roles on the 100ms dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is just the start.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With custom roles at your fingertips, the applications are limitless. Here are a few quick examples of virtual scenarios you can easily build with roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A digital classroom where a teacher admits students when required. The same waiting-room roles depicted above can be used there.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Online performances and events. It would be a simple task to create roles for “backstage”, “stage”, and “audience”—also exemplified above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Online games: create roles for “dealer”, “player” and “spectator” for poker, for example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Virtual interviews&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Celebrity fanmeets … and so much more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Roles are the Silver Bullet in your App-Building Arsenal&lt;/strong&gt;&lt;br&gt;
As mentioned before, 100ms seeks to enable easier, more human communication by allowing customers to create interactive video apps that match our regular interactions as closely as possible. This is the whole point of the &lt;a href="https://www.100ms.live/marketplace"&gt;100ms Marketplace&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our customers don’t have to worry about how to set up permissions for users of their apps. They don’t have to work on the basics: coding permissions, publish, and subscribe strategies for specific scenarios. They only have to imagine and conceptualize how user journeys will work, and using roles, developers can set them up with a few clicks—no coding involved.&lt;/p&gt;

&lt;p&gt;In fact, our customers have already achieved this. Have a look at how &lt;a href="https://www.100ms.live/blog/mingout-100ms-reimagine-online-dating"&gt;Mingout, a dating app, used roles on 100ms to reimagine first dates online&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the developers out there, &lt;a href="https://www.youtube.com/watch?v=W-92AslN-EI&amp;amp;t=462s"&gt;here is a closer, more dev-centric dive into how roles work&lt;/a&gt;: a video on building a Clubhouse clone from scratch using React. It starts by examining the 100ms SDK and demonstrates how roles ease the process of app building.&lt;/p&gt;

&lt;p&gt;If you’re curious, try it out for yourself. &lt;a href="https://dashboard.100ms.live/register?__hstc=159648061.f079b4acf665d0fbf04f116fc64e1893.1655282117756.1659703352701.1660025193766.89&amp;amp;__hssc=159648061.2.1660025193766&amp;amp;__hsfp=1623975401"&gt;Get Started with 100ms for free&lt;/a&gt;, and play around with roles to bring an imagined app to life with just a few clicks!&lt;/p&gt;

</description>
      <category>videoconference</category>
      <category>developers</category>
      <category>livevideo</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Introduction to Low Latency Streaming with HLS</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Fri, 05 Aug 2022 10:28:08 +0000</pubDate>
      <link>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-27f1</link>
      <guid>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-27f1</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x72zb5o3yjpmxzf0k8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x72zb5o3yjpmxzf0k8i.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether it’s a World Cup match, the Super Bowl, or the French Open finals, watching it with your friends on a Saturday night is #goals. Sadly, not all of us can get tickets and travel across cities, countries, or continents to attend them. Thankfully, live streaming makes it possible to watch all the action, close to real-time.&lt;/p&gt;

&lt;p&gt;But how close to real-time are we talking?&lt;/p&gt;

&lt;p&gt;Video streaming is largely facilitated on the back of a video protocol called HLS (HTTP Live Streaming). While the origins and fundamentals of HLS are explained in another piece on our blog, the current piece will focus on how HLS resolved one of its greatest shortcomings: latency.&lt;/p&gt;

&lt;p&gt;To start with, let’s take a quick peek at how HLS works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Way of the HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will first try to understand how HLS works and makes live streaming possible. This is what the typical flow of an HLS streaming system looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The audio/video stream captured by input devices is encoded and ingested into a media server.&lt;/li&gt;
&lt;li&gt;The media server transcodes the stream into an HLS-compatible format with multiple ABR variants and also creates a playlist file to be used by the video players.&lt;/li&gt;
&lt;li&gt;Then, the media server serves the media and the playlist file to the clients, either directly or via CDNs by acting as an origin server.&lt;/li&gt;
&lt;li&gt;The players, on the client end, make use of the playlist file to navigate through the video segments. These segments are typically “slices” of the video being generated, with a definite duration (called segment size, usually 2 to 6 seconds).&lt;/li&gt;
&lt;li&gt;The playlist is refreshed based on segment size and players can select the segments specified in them, based on the order of playback and the video quality they require.&lt;/li&gt;
&lt;/ul&gt;
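&lt;p&gt;The player side of the steps above can be sketched in a few lines: given a media playlist, a client extracts the segment URIs in playback order and fetches them one by one. This is a minimal sketch with made-up segment names; real players such as HLS.js or AVPlayer also handle ABR variants, playlist refreshes, and decoding.&lt;/p&gt;

```python
# Minimal sketch of the player side of the HLS flow: parse a media
# playlist and list the segments in playback order.
SAMPLE_PLAYLIST = """\
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXTINF:6.0,
segment2.ts
"""

def parse_segments(playlist_text):
    """Return segment URIs from an HLS media playlist, in playback order."""
    return [line for line in playlist_text.splitlines()
            if line and not line.startswith("#")]

print(parse_segments(SAMPLE_PLAYLIST))
# prints ['segment0.ts', 'segment1.ts', 'segment2.ts']
# A live player would now fetch these URIs in order, re-requesting the
# playlist roughly once per target duration to discover new segments.
```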

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5fpbry0jf4oijrixnxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5fpbry0jf4oijrixnxf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though &lt;strong&gt;HLS&lt;/strong&gt; offers a reliable way of video streaming, its high latency levels may pose obstacles for many streamers and video distributors. According to the initial specification, a player should load media files in advance before playing them. This makes HLS an inherently higher-latency protocol, with a latency of about 30 to 60 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tuning HLS for Low Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Everyone was interested in implementing HLS but the high latency was a serious roadblock. So, devs and enthusiasts started to find workarounds to reduce latency and refine the protocol for effective usage. Some of these practices offered such positive results that they started becoming a silent standard along with the HLS specification. Two of these practices are listed below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reducing the default segment size&lt;/strong&gt;&lt;br&gt;
When Apple introduced HLS, the typical segment size was 10 seconds. Most HLS implementers found this too long, so Apple reduced the recommendation to 6 seconds. Overall latency can be reduced by shrinking both the segment size and the player’s buffer size.&lt;/p&gt;

&lt;p&gt;However, this approach has trade-offs, including increased overall bitrate and more buffering or jitter on devices with poor network conditions. The ideal segment size depends on the target audience and typically falls in the range of 2 to 4 seconds.&lt;/p&gt;
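&lt;p&gt;A back-of-the-envelope calculation shows why segment size dominates latency: per the HLS spec, players buffer roughly three segments before starting playback. The overhead figure below (encode plus CDN delay) is an illustrative assumption, not a measured value.&lt;/p&gt;

```python
# Rough glass-to-glass latency estimate for segmented HLS.
# buffered_segments=3 follows the spec's guidance to buffer about three
# segments; overhead_seconds is an assumed encode + CDN delay.
def estimated_latency(segment_seconds, buffered_segments=3, overhead_seconds=2.0):
    return buffered_segments * segment_seconds + overhead_seconds

for size in (10, 6, 4, 2):
    print(f"{size}s segments: about {estimated_latency(size):.0f}s latency")
# 10s segments: about 32s latency
# 6s segments: about 20s latency
# 4s segments: about 14s latency
# 2s segments: about 8s latency
```

&lt;p&gt;With the original 10-second segments the estimate is about half a minute, consistent with the 30 to 60 seconds quoted earlier once larger player buffers and delivery overheads are factored in.&lt;/p&gt;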

&lt;p&gt;&lt;strong&gt;Media Ingest with faster protocols&lt;/strong&gt;&lt;br&gt;
The main reason HLS is used for live streaming is the scalability, reliability and player compatibility it provides across all platforms, especially when compared to other protocols. This has made HLS irreplaceable for video delivery so far.&lt;/p&gt;

&lt;p&gt;But the first mile contribution (also known as ingest) from the HLS stack can be replaced with lower latency protocols to reduce overall latency.&lt;/p&gt;

&lt;p&gt;The HLS ingest is usually replaced by RTMP ingest, which enjoys wide support among encoders and services and has proved to be a cost-effective solution. The stream ingested with RTMP is then transcoded into HLS by a media server before the content is served. Even though there have been experiments with other protocols, such as WebRTC and SRT, for the ingest leg, RTMP remains the most popular option.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Evolution of HLS to LL-HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The latency in HLS started posing a significant hurdle, leading to less than stellar user experiences. This was becoming more frequent since HLS was being widely adopted around the world. Tuning HLS wasn’t enough and everyone was looking for better and more sustainable solutions.&lt;/p&gt;

&lt;p&gt;It was in 2016 that Twitter’s Periscope engineering team made some major changes to their implementation in order to achieve low latency with HLS. This proprietary version of HLS, often referred to as LHLS, offered latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;DASH, the main competitor to HLS, came up with a low latency solution based on chunked CMAF in 2017, following which a community-based low latency HLS solution (L-HLS) was drafted in 2018. This variant was heavily inspired by Periscope’s LHLS and leveraged Chunked Transfer Encoding (CTE) to reduce latency. It is often referred to as Community Low Latency HLS (CL-HLS).&lt;/p&gt;

&lt;p&gt;While this version of HLS was gaining popularity, Apple decided to release its own extension of the protocol, called Low Latency HLS (LL-HLS), in 2019. This is often referred to as Apple Low Latency HLS (ALHLS). It offered low latency comparable to CL-HLS and promised compatibility with Apple devices. Since then, LL-HLS has been merged into the HLS specification, and the two have technically become a single protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How LL-HLS reduces Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we’ll explore the changes LL-HLS brings to HLS, making low latency streaming possible. The protocol came with two main changes to the spec, responsible for its low latency nature. One is to divide segments into parts and deliver them as soon as they’re available. The other is to inform the player about the data to be loaded next before that data is even available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dividing Segments into Parts&lt;/strong&gt;&lt;br&gt;
The video segments are further divided into parts (similar to the chunks used in CMAF). These parts are just “smaller segments” with a definite duration, represented with the EXT-X-PART tag in the media playlist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wstuih0kjpthsf0jh8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wstuih0kjpthsf0jh8e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because parts are published while a segment is still being generated, players can fill up their buffers more efficiently. Reducing the buffer size on the player side with this approach results in reduced latency. The parts are then collectively replaced by their completed segment, which remains available for a longer period of time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preload Hints&lt;/strong&gt;&lt;br&gt;
When LL-HLS was first introduced, it had HTTP/2 push specified as a requirement on the server side for sending new data to clients. Many commercial CDN providers were not supporting this feature at the time, which resulted in a lot of confusion.&lt;/p&gt;

&lt;p&gt;This issue was addressed by Apple in a subsequent update, replacing the HTTP/2 push with preload hints. They decided to include support for preload hints by adding a new tag EXT-X-PRELOAD-HINT to the playlist, reducing overhead.&lt;/p&gt;

&lt;p&gt;With the help of a preload hint, a video player can anticipate the data to be loaded next and send a request to the URI from the hint to gain faster access to the next part. The server should block such a request and respond as soon as the data becomes available, thus reducing latency.&lt;/p&gt;
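&lt;p&gt;In practice, the spec pairs preload hints with blocking playlist reloads: the client appends the delivery directives _HLS_msn (media sequence number) and _HLS_part to its playlist request, and the server holds the request open until that part exists. The sketch below only builds such a URL; the host and numbers are placeholders.&lt;/p&gt;

```python
# Build a blocking playlist-reload URL using the LL-HLS delivery
# directives _HLS_msn and _HLS_part. The base URL is a placeholder.
from urllib.parse import urlencode

def blocking_playlist_url(base_url, next_msn, next_part):
    """URL asking the server to block until the given part is available."""
    query = urlencode({"_HLS_msn": next_msn, "_HLS_part": next_part})
    return f"{base_url}?{query}"

url = blocking_playlist_url("https://example.com/live/media.m3u8", 273, 2)
print(url)
# The server holds this request open until part 2 of segment 273 exists,
# then responds immediately, so the client never polls stale playlists.
```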

&lt;p&gt;&lt;strong&gt;A look at the LL-HLS Media Playlist&lt;/strong&gt;&lt;br&gt;
Now, let’s take a look at how these tags are specified in the media playlist file, using an example. We will assume the segment size to be 6 seconds and the part size to be 200 milliseconds. We will also assume that 2 segments (segment A and B) have been completely played, while the 3rd segment (segment C) is still being generated. This segment is being published as a list of parts in the order of playback because it has not yet been completed.&lt;/p&gt;

&lt;p&gt;The following is a sample media playlist (M3U8 file).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0vf9ngm1v0tu6st5p0x.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0vf9ngm1v0tu6st5p0x.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
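&lt;p&gt;For readers who can’t copy from the image, here is an illustrative playlist in the same shape: two finished 6-second segments, a third segment published as 200-millisecond parts, and a preload hint for the next part. Segment and part names are placeholders.&lt;/p&gt;

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:6
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=0.6
#EXT-X-PART-INF:PART-TARGET=0.2
#EXT-X-MEDIA-SEQUENCE:1
#EXTINF:6.0,
segmentA.mp4
#EXTINF:6.0,
segmentB.mp4
#EXT-X-PART:DURATION=0.2,INDEPENDENT=YES,URI="segmentC.part0.mp4"
#EXT-X-PART:DURATION=0.2,URI="segmentC.part1.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segmentC.part2.mp4"
```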

&lt;p&gt;Players that don’t yet support LL-HLS simply ignore tags like EXT-X-PART and EXT-X-PRELOAD-HINT, treating the playlist as traditional HLS and loading full segments at higher latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Low-Latency HLS on non-Apple devices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The new and improved HLS has a latency of about 3 seconds or less. The only reasonable competition for this protocol is LL-DASH. But Apple does not support DASH on all of its devices. This makes LL-HLS the only low latency live streaming protocol that has wide client-side support including Apple devices.&lt;/p&gt;

&lt;p&gt;One of the main advantages of LL-HLS is its backward compatibility with legacy HLS players: players that don’t support this variant can fall back to standard HLS and still work, at higher latency. However, because the protocol requires players to start loading unfinished media segments instead of waiting until they become fully available, the spec changes made it difficult for all players to adapt quickly.&lt;/p&gt;

&lt;p&gt;It took a while for most non-Apple devices to start supporting LL-HLS. Now, it is widely supported across almost all platforms in relatively newer player versions. Some players have planned support for the protocol since its inception; most implementations are newer and still improving their compatibility.&lt;/p&gt;

&lt;p&gt;Here are some popular players from different platforms that support LL-HLS in its entirety:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AVPlayer (iOS)&lt;/li&gt;
&lt;li&gt;Exoplayer (Android)&lt;/li&gt;
&lt;li&gt;THEOPlayer&lt;/li&gt;
&lt;li&gt;JWPlayer&lt;/li&gt;
&lt;li&gt;HLS.js&lt;/li&gt;
&lt;li&gt;VideoJS&lt;/li&gt;
&lt;li&gt;AgnoPlay&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Comparing LL-HLS, LL-DASH and WebRTC&lt;/strong&gt;&lt;br&gt;
Here, we compare three protocols, LL-HLS, LL-DASH, and WebRTC, on six parameters: compatibility, delivery method, support for ABR, security, latency, and best use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compatibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LL-HLS provides good support for all Apple devices and browsers. It has been gaining support for most non-Apple devices.&lt;/li&gt;
&lt;li&gt;LL-DASH supports most non-Apple devices and browsers but is not supported on any Apple device.&lt;/li&gt;
&lt;li&gt;WebRTC is supported across all popular browsers and platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Delivery Method&lt;/strong&gt;&lt;br&gt;
First, let’s go through a few relevant terms used with CMAF.&lt;/p&gt;

&lt;p&gt;Chunked Encoding (CE) is a technique used for making publishable “chunks”. When added together, these chunks create a video segment. Chunks have a set duration and are the smallest unit that can be published.&lt;/p&gt;

&lt;p&gt;Chunked Transfer Encoding (CTE) is a technique used to deliver the “chunks” as they are created in a sequential order. With CTE, one request for a segment is enough to receive all its chunks. The transmission ends once a zero-length chunk is sent. This method allows even small chunks to be used for transfer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LL-HLS uses Chunked Encoding to create “parts” or “chunks” of a segment. But instead of using Chunked Transfer Encoding, this protocol uses its own method of delivering chunks over TCP: the client has to make a request for every single part, instead of just requesting the whole segment and receiving it in parts.&lt;/li&gt;
&lt;li&gt;LL-DASH uses Chunked Encoding for creating chunks and Chunked Transfer Encoding for delivering them over TCP.&lt;/li&gt;
&lt;li&gt;WebRTC uses Real-time Transfer Protocol (RTP) for sending video and audio streams over UDP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Support for Adaptive Bitrate (ABR)&lt;/strong&gt;&lt;br&gt;
Adaptive Bitrate (ABR) is a technique for dynamically adjusting the compression level and video quality of a stream to match bandwidth availability. It heavily impacts the video streaming experience for the viewer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LL-HLS has support for ABR.&lt;/li&gt;
&lt;li&gt;LL-DASH has support for ABR.&lt;/li&gt;
&lt;li&gt;WebRTC doesn’t support ABR. But, a similar technique called Simulcast is used for dynamically adjusting video quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH support media encryption and benefit from security features such as token authentication and digital rights management (DRM).&lt;/p&gt;

&lt;p&gt;WebRTC supports end-to-end encryption of media for transfer, user, file, and round-trip authentication. This is often sufficient for DRM purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH have a latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;WebRTC, on the other hand, has sub-second latency of roughly 500 milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Use Case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both LL-HLS and LL-DASH are best suited for live streaming events that need to be delivered to millions of viewers. They are often used for streaming sporting events live.&lt;/p&gt;

&lt;p&gt;WebRTC is frequently used for solutions such as video conferencing that require minimal latency and are not expected to scale to a large audience.&lt;/p&gt;

&lt;p&gt;Now that HLS supports low latency streaming, it is all set to conquer the video streaming space, ready to serve millions of fans watching their favourite team play a crucial match without any issues. Whether you want to start live streaming yourself or build an app that facilitates live streaming, LL-HLS remains your best friend.&lt;/p&gt;

</description>
      <category>hls</category>
      <category>beginners</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
