<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 100ms Inc.</title>
    <description>The latest articles on DEV Community by 100ms Inc. (@100mslive).</description>
    <link>https://dev.to/100mslive</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4689%2Fec793b71-9748-419d-ba0c-547800583aae.png</url>
      <title>DEV Community: 100ms Inc.</title>
      <link>https://dev.to/100mslive</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/100mslive"/>
    <language>en</language>
    <item>
      <title>Engage Your Audience with Interactive Polls and Quizzes - A Step-by-Step Guide</title>
      <dc:creator>John Selvinraj</dc:creator>
      <pubDate>Mon, 16 Oct 2023 05:41:28 +0000</pubDate>
      <link>https://dev.to/100mslive/engage-your-audience-with-interactive-polls-and-quizzes-a-step-by-step-guide-231m</link>
      <guid>https://dev.to/100mslive/engage-your-audience-with-interactive-polls-and-quizzes-a-step-by-step-guide-231m</guid>
      <description>&lt;p&gt;One of the most requested features from our customers was the ability to create polls and quizzes. We are excited to announce that this feature is now available! &lt;/p&gt;

&lt;p&gt;Polls are a great way to get feedback from your audience, learn more about their interests, and engage them in a two-way conversation. Want to get feedback on the session? Create a poll. Let your audience rate the session as good or bad.&lt;/p&gt;

&lt;p&gt;Questions can be single-choice or multiple-choice. The feedback example above is a single-choice question, whereas market research on, say, which social media platforms everyone in the room uses would call for a multiple-choice question.&lt;/p&gt;

&lt;p&gt;Needless to say, polls can enhance audience engagement and add interactivity to virtual meetings, conferences, or live streams. They also enable you to collect well-organized data by posing direct questions.&lt;/p&gt;
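To make "well-organized data" concrete, here is a minimal sketch, in plain JavaScript and independent of the 100ms SDK, of tallying single-choice poll responses into per-option counts. The response shape here is illustrative, not the SDK's actual format.

```javascript
// Illustrative only: tally single-choice poll responses into per-option counts.
// `options` is the list of option labels; `responses` is one label per voter.
function tallyResponses(options, responses) {
  const counts = Object.fromEntries(options.map((o) => [o, 0]));
  for (const r of responses) {
    if (r in counts) counts[r] += 1; // ignore responses for unknown options
  }
  return counts;
}

console.log(tallyResponses(["Good", "Bad"], ["Good", "Good", "Bad"]));
// → { Good: 2, Bad: 1 }
```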

&lt;p&gt;Create polls in 100ms rooms in the following way:&lt;/p&gt;

  

&lt;p&gt;Once a poll is created and launched by a peer, other peers with permission to view the polls can interact with it. &lt;/p&gt;

  

&lt;p&gt;Polls can also be extended to create quizzes.&lt;/p&gt;

&lt;p&gt;Quizzes operate on the same principle as polls, with one twist.&lt;/p&gt;

&lt;p&gt;In the world of quizzes, questions come with designated correct and incorrect answers, adding an element of assessment and knowledge evaluation to the interactive experience. It can be used to craft educational assessments, test comprehension, and promote active learning through engagement.&lt;/p&gt;
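The assessment element boils down to comparing each answer against the designated correct option. A minimal sketch in plain JavaScript (the question and answer shapes are hypothetical, not 100ms SDK types):

```javascript
// Illustrative sketch: score a quiz by comparing each answer against the
// question's designated correct option. Shapes are hypothetical, not SDK types.
function scoreQuiz(questions, answers) {
  return questions.reduce(
    (score, q, i) => score + (answers[i] === q.correctOption ? 1 : 0),
    0
  );
}

const questions = [
  { text: "2 + 2 = ?", correctOption: "4" },
  { text: "Capital of France?", correctOption: "Paris" },
];
console.log(scoreQuiz(questions, ["4", "London"])); // → 1
```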

&lt;p&gt;Let’s get into building things with 100ms Polls and Quizzes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to enable polls and quizzes for your rooms on the 100ms dashboard?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Access the 100ms Dashboard&lt;/strong&gt;: Begin by visiting the &lt;a href="https://dashboard.100ms.live/dashboard"&gt;100ms dashboard&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Choose or Create a Template&lt;/strong&gt;: Select the template where you want to enable polls and quizzes. Alternatively, create a new template by clicking the 'Create Template' button.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Template Configuration&lt;/strong&gt;: Click on the 'Configure' button within your chosen template.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assign Poll Creation Privileges&lt;/strong&gt;: Select which role(s) should have the ability to create polls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Set Permissions&lt;/strong&gt;: In the 'Permissions' section, activate the following toggles:
&lt;ul&gt;
&lt;li&gt;'Create polls and quizzes'&lt;/li&gt;
&lt;li&gt;'Read polls and quizzes'&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exclusive Interaction&lt;/strong&gt;: If certain role(s) should only participate in polls without creating them, enable only the 'Read polls and quizzes' toggle for those roles.&lt;/li&gt;
&lt;/ol&gt;
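If you manage templates as code via the 100ms policy API rather than the dashboard, the two toggles correspond to boolean flags on a role's permissions object, roughly like the sketch below. The key names shown are assumptions based on the web SDK's poll permissions; check your own template JSON for the exact fields.

```json
{
  "permissions": {
    "pollRead": true,
    "pollWrite": true
  }
}
```

A role that should only answer polls, per step 6, would keep `"pollRead": true` and set `"pollWrite": false`.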

  

&lt;h2&gt;
  
  
  How to add polls to a React sample app?
&lt;/h2&gt;

&lt;p&gt;Let’s add polls to a React sample app to see how easy it is.&lt;/p&gt;

&lt;p&gt;Start by creating a new React project (I’ll use Vite) with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;npm&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="nx"&gt;vite&lt;/span&gt;&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;latest&lt;/span&gt; &lt;span class="nx"&gt;my&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;polls&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt; &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="nx"&gt;react&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a new project folder &lt;code&gt;my-polls-app&lt;/code&gt; in your chosen directory with a React app scaffolded inside it.&lt;/p&gt;

&lt;p&gt;Now, to quickly add the 100ms React SDK and enable live audio-video in our app, we will follow the React Quickstart Guide &lt;a href="https://www.100ms.live/docs/javascript/v2/quickstart/react-quickstart"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once done, we have a React project with 100ms audio-video calling set up.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The complete React Quickstart Guide code can be found &lt;a href="https://codesandbox.io/s/github/100mslive/100ms-examples/tree/main/web/react-quickstart"&gt;here&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Create a new folder in the &lt;code&gt;src&lt;/code&gt; directory called &lt;code&gt;Components&lt;/code&gt; and add two subfolders, &lt;code&gt;Poll&lt;/code&gt; and &lt;code&gt;UI&lt;/code&gt;, to it. These folders will hold the poll and UI components.&lt;/p&gt;

&lt;p&gt;We start by creating a &lt;code&gt;Modal.jsx&lt;/code&gt; file inside the &lt;code&gt;UI&lt;/code&gt; folder with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Fragment&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;ReactDOM&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-dom&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;classes&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./Modal.module.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Backdrop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;backdrop&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="na"&gt;onClick&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onClose&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ModalOverlay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;modal&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;classes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;portalElement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;overlays&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Modal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Fragment&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;ReactDOM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createPortal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Backdrop&lt;/span&gt; &lt;span class="na"&gt;onClose&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onClose&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;portalElement&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
      &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;ReactDOM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createPortal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ModalOverlay&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;children&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;ModalOverlay&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;,&lt;/span&gt;
        &lt;span class="nx"&gt;portalElement&lt;/span&gt;
      &lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Fragment&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;Modal&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Modal component is composed of the &lt;code&gt;Backdrop&lt;/code&gt; and the &lt;code&gt;ModalOverlay&lt;/code&gt;. The &lt;code&gt;ModalOverlay&lt;/code&gt; provides the styled surface on which the children passed in via props are laid out.&lt;/p&gt;

&lt;p&gt;You’ll notice we have imported a CSS file that doesn’t exist yet. Inside the same &lt;code&gt;UI&lt;/code&gt; folder, create a new file called &lt;code&gt;Modal.module.css&lt;/code&gt; and add the following to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nc"&gt;.backdrop&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;fixed&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100vh&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;z-index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0.75&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nc"&gt;.modal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;position&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;fixed&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;top&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20vh&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;90%&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;background-color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#546e7a&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;14px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;box-shadow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;2px&lt;/span&gt; &lt;span class="m"&gt;8px&lt;/span&gt; &lt;span class="n"&gt;rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0.25&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nl"&gt;z-index&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;animation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;slide-down&lt;/span&gt; &lt;span class="m"&gt;300ms&lt;/span&gt; &lt;span class="n"&gt;ease-out&lt;/span&gt; &lt;span class="n"&gt;forwards&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;@media&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;min-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;768px&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nc"&gt;.modal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;40rem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;left&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;calc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;50%&lt;/span&gt; &lt;span class="n"&gt;-&lt;/span&gt; &lt;span class="m"&gt;20rem&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;@keyframes&lt;/span&gt; &lt;span class="n"&gt;slide-down&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nt"&gt;from&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;opacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;translateY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;-3rem&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nt"&gt;to&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;opacity&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nl"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;translateY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, to complete our Modal, we also need to modify the &lt;code&gt;index.html&lt;/code&gt; in the root of the project directory to add another &lt;code&gt;div&lt;/code&gt; with &lt;code&gt;id="overlays"&lt;/code&gt; before the root &lt;code&gt;div&lt;/code&gt; as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&lt;/span&gt; &lt;span class="na"&gt;lang=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;charset=&lt;/span&gt;&lt;span class="s"&gt;"UTF-8"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;link&lt;/span&gt; &lt;span class="na"&gt;rel=&lt;/span&gt;&lt;span class="s"&gt;"icon"&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"image/svg+xml"&lt;/span&gt; &lt;span class="na"&gt;href=&lt;/span&gt;&lt;span class="s"&gt;"/vite.svg"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"viewport"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"width=device-width, initial-scale=1.0"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;100ms Polls Demo&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"overlays"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;id=&lt;/span&gt;&lt;span class="s"&gt;"root"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"module"&lt;/span&gt; &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"/src/main.jsx"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we will build the form used to create a poll. Inside the &lt;code&gt;Poll&lt;/code&gt; folder, create the form as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Modal&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.././UI/Modal&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Button&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@100mslive/roomkit-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useHMSActions&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@100mslive/react-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../../styles.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PollForm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;hmsActions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useHMSActions&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setInputs&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;({});&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Modal&lt;/span&gt; &lt;span class="na"&gt;onClose&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onClose&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt; &lt;span class="na"&gt;onSubmit&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleSubmit&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          Enter a Name for the Poll:
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"input-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"name"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleChange&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          Enter a question for the Poll:
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"input-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleChange&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          Enter the First Value:
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"input-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"first"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;first&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleChange&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          Enter the Second Value:
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"input-container"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"text"&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"second"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;second&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleChange&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Modal&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;PollForm&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll notice we ask the user to enter a name for the poll, a question, and two options to choose from. To track changes to these values and handle form submission, we add the following functions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleChange&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;setInputs&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;values&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;values&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleSubmit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;preventDefault&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;hmsActions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;interactivityCenter&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createPoll&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;poll&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;rolesThatCanViewResponses&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;host&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;})&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;handleCreate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleCreate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POLL CREATED with ${id}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;hmsActions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;interactivityCenter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addQuestionsToPoll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;single-choice&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;first&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;isCorrectAnswer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;inputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;second&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;isCorrectAnswer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="na"&gt;skippable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;hmsActions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;interactivityCenter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;startPoll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;handleChange&lt;/code&gt; function captures changes to the text input fields and updates the &lt;code&gt;inputs&lt;/code&gt; state accordingly. On form submission, &lt;code&gt;handleSubmit&lt;/code&gt; is called, which uses the &lt;code&gt;useHMSActions&lt;/code&gt; hook to create a new poll.&lt;/p&gt;
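&lt;p&gt;The state update inside &lt;code&gt;handleChange&lt;/code&gt; relies on a computed property name to merge the changed field into the existing state object. The pattern can be sketched in plain JavaScript (the &lt;code&gt;updateInputs&lt;/code&gt; helper below is illustrative, not part of the 100ms SDK):&lt;/p&gt;

```javascript
// Merge one changed form field into the existing inputs object,
// using a computed property name — the same pattern handleChange uses
// inside the setInputs updater.
const updateInputs = (values, name, value) => ({ ...values, [name]: value });

// Simulate the user typing into the "name" and "first" fields:
let inputs = {};
inputs = updateInputs(inputs, "name", "Session feedback");
inputs = updateInputs(inputs, "first", "Good");

console.log(inputs); // each update leaves the other fields untouched
```

&lt;p&gt;Because the updater spreads the previous state, every keystroke only overwrites the one field that changed.&lt;/p&gt;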

&lt;p&gt;Once the poll is created, &lt;code&gt;handleCreate&lt;/code&gt; runs to add questions to it with &lt;code&gt;addQuestionsToPoll&lt;/code&gt; and then start it with &lt;code&gt;startPoll&lt;/code&gt;, using the same &lt;code&gt;hmsActions&lt;/code&gt;.&lt;/p&gt;
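&lt;p&gt;The question payload passed to &lt;code&gt;addQuestionsToPoll&lt;/code&gt; has a small, predictable shape. A helper that builds it from the form inputs might look like this (&lt;code&gt;buildQuestion&lt;/code&gt; is a hypothetical name for illustration, not an SDK function):&lt;/p&gt;

```javascript
// Build the single-choice question object used above from the form inputs.
// For a plain poll (as opposed to a quiz), no option is marked correct.
const buildQuestion = (inputs) => ({
  text: inputs.text,
  type: "single-choice",
  options: [
    { text: inputs.first, isCorrectAnswer: false },
    { text: inputs.second, isCorrectAnswer: false },
  ],
  skippable: true,
});

const question = buildQuestion({
  text: "Rate the session",
  first: "Good",
  second: "Bad",
});
console.log(question.options.length); // 2
```

&lt;p&gt;For a quiz you would instead set &lt;code&gt;isCorrectAnswer: true&lt;/code&gt; on the right option.&lt;/p&gt;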

&lt;p&gt;The complete &lt;code&gt;PollForm.jsx&lt;/code&gt; file should then look like &lt;a href="https://github.com/adityathakurxd/live-audio-video/blob/main/src/Components/Poll/PollForm.jsx"&gt;this&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Next, we return to our &lt;code&gt;App.jsx&lt;/code&gt; file to display the &lt;code&gt;PollForm&lt;/code&gt; when the user clicks on a &lt;code&gt;Create Poll&lt;/code&gt; button.&lt;/p&gt;

&lt;p&gt;Start by importing the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Button&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@100mslive/roomkit-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will use &lt;code&gt;useState&lt;/code&gt; to track whether the &lt;code&gt;PollForm&lt;/code&gt; should be shown or hidden.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;pollFormIsShown&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setPollFormIsShownn&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;showPollFormHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;setPollFormIsShownn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;hidePollFormHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;setPollFormIsShownn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the &lt;code&gt;PollForm&lt;/code&gt; component and a button that calls &lt;code&gt;showPollFormHandler&lt;/code&gt; on click, as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;pollFormIsShown&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;PollForm&lt;/span&gt; &lt;span class="na"&gt;onClose&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;hidePollFormHandler&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;}&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Button&lt;/span&gt; &lt;span class="na"&gt;onClick&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;showPollFormHandler&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Create Poll&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, we should be able to toggle and view the &lt;code&gt;PollForm&lt;/code&gt; and dismiss it by clicking the backdrop area. We should also be able to create and start a poll with the name, question and choices entered by the user.&lt;/p&gt;

&lt;p&gt;Let us now work on showing the poll to the other users in our app.&lt;/p&gt;

&lt;p&gt;We can use the &lt;code&gt;useHMSNotifications&lt;/code&gt; hook to be notified when a new poll starts, and show a toast with &lt;code&gt;react-toastify&lt;/code&gt; when that happens.&lt;/p&gt;

&lt;p&gt;Add the package by running: &lt;code&gt;npm install --save react-toastify&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following imports to the &lt;code&gt;App.jsx&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;selectIsConnectedToRoom&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;selectLocalPeerID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;useHMSActions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;HMSNotificationTypes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;useHMSStore&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;useHMSNotifications&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@100mslive/react-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ToastContainer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;toast&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-toastify&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-toastify/dist/ReactToastify.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialise &lt;code&gt;notification&lt;/code&gt; and &lt;code&gt;localPeerID&lt;/code&gt; so we can use them as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;notification&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useHMSNotifications&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;localPeerID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useHMSStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;selectLocalPeerID&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using &lt;code&gt;useEffect&lt;/code&gt;, we check for any new notifications. Add the following code to &lt;code&gt;App.jsx&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setPollNotificationData&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="nx"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;switch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nx"&gt;HMSNotificationTypes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="na"&gt;POLL_STARTED&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;startedBy&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;localPeerID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;NOTIFICATION RECEIVED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="nx"&gt;setPollNotificationData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="nx"&gt;toast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`A new Poll is available: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;!`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nl"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;notification&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, add &lt;code&gt;&amp;lt;ToastContainer /&amp;gt;&lt;/code&gt; to the return of &lt;code&gt;App.jsx&lt;/code&gt; to show a toast whenever a remote user creates a new poll. Notice the check that prevents the toast from being shown to the local peer who started the poll.&lt;/p&gt;
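&lt;p&gt;The decision of whether to surface a toast for a given notification is a pure check, which makes it easy to reason about and test in isolation. A sketch of that logic (&lt;code&gt;shouldShowPollToast&lt;/code&gt; is an illustrative helper, not part of the SDK; in the app you would compare against &lt;code&gt;HMSNotificationTypes.POLL_STARTED&lt;/code&gt; rather than the raw string used here):&lt;/p&gt;

```javascript
// Return true only for poll-started notifications raised by some other peer —
// the poll's creator already knows about it, so we skip the toast for them.
const shouldShowPollToast = (notification, localPeerID) =>
  Boolean(
    notification &&
      notification.type === "POLL_STARTED" &&
      notification.data.startedBy !== localPeerID
  );

console.log(
  shouldShowPollToast(
    { type: "POLL_STARTED", data: { startedBy: "peer-2" } },
    "peer-1"
  )
); // true — a remote peer started the poll
console.log(
  shouldShowPollToast(
    { type: "POLL_STARTED", data: { startedBy: "peer-1" } },
    "peer-1"
  )
); // false — the local peer started it
```

&lt;p&gt;The &lt;code&gt;useEffect&lt;/code&gt; above applies the same short-circuit: it bails out on a null notification, switches on the type, and compares &lt;code&gt;startedBy&lt;/code&gt; against &lt;code&gt;localPeerID&lt;/code&gt;.&lt;/p&gt;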

&lt;p&gt;Now, we want users to be able to cast their vote on the created poll. To do that, we again start by showing a modal with the poll data for users to vote on. Add the following to &lt;code&gt;App.jsx&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;pollModalIsShown&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setPollModalIsShown&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;showPollModalHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;setPollModalIsShown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the return, add a &lt;code&gt;ViewPoll&lt;/code&gt; component as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;pollModalIsShown&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ViewPoll&lt;/span&gt; &lt;span class="na"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;)}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us now work on this component. Create a new file called &lt;code&gt;ViewPoll.jsx&lt;/code&gt; inside the &lt;code&gt;Poll&lt;/code&gt; folder in the &lt;code&gt;Components&lt;/code&gt; directory. First, create a basic form layout with radio buttons using the same &lt;code&gt;Modal&lt;/code&gt; component we created earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Modal&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../UI/Modal&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Button&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@100mslive/roomkit-react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useHMSActions&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@100mslive/react-sdk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;toast&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-toastify&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-toastify/dist/ReactToastify.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ViewPoll&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Modal&lt;/span&gt; &lt;span class="na"&gt;onClose&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onClose&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Poll: &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h1&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;h3&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

      &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt; &lt;span class="na"&gt;onSubmit&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleSubmit&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"radio"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"radio"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;checked&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;selectedOptionIndex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleChange&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
            &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt; &lt;span class="na"&gt;className&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"radio"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
            &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;input&lt;/span&gt;
              &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"radio"&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;checked&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;selectedOptionIndex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
              &lt;span class="na"&gt;onChange&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleChange&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
            &lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;
          &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;label&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;div&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nt"&gt;br&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Button&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"submit"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Submit&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nt"&gt;form&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Modal&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;ViewPoll&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, add functions to handle form changes and submission:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useHMSActions&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;selectedOptionIndex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setSelectedOptionIndex&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;handleChange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;setSelectedOptionIndex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleSubmit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;preventDefault&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;interactivityCenter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addResponsesToPoll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;questionIndex&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pollNotificationData&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;questions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;option&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;selectedOptionIndex&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nx"&gt;toast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Vote done!`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how we use the &lt;code&gt;useHMSActions&lt;/code&gt; hook to call &lt;code&gt;addResponsesToPoll&lt;/code&gt;, passing the poll ID obtained from the data received as props.&lt;/p&gt;
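The responses array built in `handleSubmit` can be factored into a small helper. A minimal sketch, assuming the same `pollNotificationData` shape used above (`buildPollResponse` is our own name, not part of the 100ms SDK):

```javascript
// Hypothetical helper (not part of the 100ms SDK): builds the responses
// array that handleSubmit passes to interactivityCenter.addResponsesToPoll.
function buildPollResponse(pollNotificationData, selectedOptionIndex) {
  const question = pollNotificationData.questions[0];
  return [
    {
      questionIndex: question.index, // which question is being answered
      option: Number(selectedOptionIndex), // radio input values arrive as strings
    },
  ];
}
```

`handleSubmit` then reduces to a single call: `await actions.interactivityCenter.addResponsesToPoll(props.pollNotificationData.id, buildPollResponse(props.pollNotificationData, selectedOptionIndex))`.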

&lt;p&gt;With this done, our application should now be ready to test!&lt;/p&gt;

  

&lt;blockquote&gt;
&lt;p&gt;The code for this project is available on GitHub &lt;a href="https://github.com/adityathakurxd/live-audio-video"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The React app that we’ve built here allows you to create a poll, add single-choice questions, and start it for remote users. Using the 100ms SDK, we can also create multiple-choice polls and quizzes, as mentioned earlier in the article.&lt;/p&gt;

&lt;p&gt;With the ability to curate customized polls, incorporate thought-provoking questions, and initiate real-time participation from remote users through the React app, a world of engaging possibilities opens up. The dynamic fusion of technology and user interaction empowers developers to not only enhance the virtual experience but also extract valuable data and insights.&lt;/p&gt;

&lt;p&gt;Ready to dive in? To grasp the full extent of what's achievable and walk through the practical steps, take a look at our comprehensive documentation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get started on the web &lt;a href="https://www.100ms.live/docs/javascript/v2/how-to-guides/build-interactive-features/polls"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Add Polls to your &lt;a href="https://www.100ms.live/docs/android/v2/how-to-guides/interact-with-room/room/polls"&gt;Android&lt;/a&gt; or &lt;a href="https://www.100ms.live/docs/ios/v2/how-to-guides/interact-with-room/room/polls"&gt;iOS&lt;/a&gt; app.&lt;/li&gt;
&lt;li&gt;Join our &lt;a href="https://discord.com/invite/kGdmszyzq2"&gt;Discord server&lt;/a&gt; if you have any questions or want to help us build it further.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Buying Guide: Agora, Twilio, Jitsi (JaaS), Zoom, 100ms</title>
      <dc:creator>Vrushti</dc:creator>
      <pubDate>Thu, 15 Sep 2022 12:48:26 +0000</pubDate>
      <link>https://dev.to/100mslive/buying-guide-agora-twilio-jitsi-jaas-zoom-100ms-3f1l</link>
      <guid>https://dev.to/100mslive/buying-guide-agora-twilio-jitsi-jaas-zoom-100ms-3f1l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWnG6p9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p0cgatbd5ysgdaq0ke4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWnG6p9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p0cgatbd5ysgdaq0ke4b.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video conferencing has been around forever. But what used to be the sporadic coast-to-coast team catch-up, or the predominantly inside sales call, became a household necessity with the 2020 Covid-19 pandemic. Everyday work. Family reunions. Even doctor consults, workout trainers, and online astrologers. Oh, and never forget the kids jumping in the background, drowning in the virtual class they were supposed to attend.&lt;/p&gt;

&lt;p&gt;That said, smart businesses have been quick to offer video-enabled communication services since, and the trend just seems to be getting started, whether it’s telehealth, test-prep, dating, or shopping. While businesses can obviously look at building this audio-video infra in-house, most find it time and resource-intensive.&lt;/p&gt;

&lt;p&gt;Of course, building a scalable video infrastructure from scratch is no mean feat, unless that’s the primary focus you want for your engineering team. Luckily there are at least a handful of Video SDK providers that offer best-in-class video infrastructure.&lt;/p&gt;

&lt;p&gt;But then, how do you decide which one works best for you?&lt;/p&gt;

&lt;p&gt;To answer that, we decided to put together this Buying Guide.&lt;/p&gt;

&lt;p&gt;First, we’ve handpicked the best of the best video SDKs based on customer reviews, product usage, and capabilities offered. And then, we battled them out on features &amp;amp; functionality, compliance &amp;amp; security, support, and pricing.&lt;/p&gt;

&lt;p&gt;From features, time to respond, implementation help, and total cost of ownership, you should find all the details you need to make an informed buying decision right here.&lt;/p&gt;

&lt;p&gt;The Buying Guide comprises four separate articles. Each article focuses on comparing vendors across a single, relevant parameter. All information for said comparison has been obtained from each vendor’s publicly available documentation.&lt;/p&gt;

&lt;p&gt;Here is a quick summary of each article. Click on the article link for a deep dive.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Functionality and Features&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Each audio-video SDK offers specific features and functions that enable its customers to meet their goals. We’ve listed out the basic offerings across Agora, Twilio, Jitsi (Jitsi as a Service), Zoom, and 100ms below.&lt;/p&gt;

&lt;p&gt;However, you can go deeper and explore how vendors provide specific features such as streaming out with RTMP and HLS, active speaker detection, chat, polls, whiteboard, hand raise, and more in our detailed article.&lt;/p&gt;

&lt;p&gt;Full article: &lt;a href="https://www.100ms.live/blog/buying-guide-features"&gt;Features &amp;amp; Functionality for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Users can build the video-calling feature using the SDK, while interactive live streaming can be built using Agora’s Live Streaming SDK. Call recording can be enabled using the dashboard and API.&lt;/p&gt;

&lt;p&gt;Additionally, a noise reduction feature can be built using an additional integration. Agora also has a virtual background extension that allows for background modification. Background blur can be implemented using the SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Twilio, users can build the video-calling feature using the SDK, while interactive live streaming can be built with Twilio Live. As for noise reduction, it can be built using the SDK. Call recording can be enabled using the dashboard and API. Background modifications can be built using the Twilio video processor SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Jitsi users can build video-calling, noise reduction, call recording, and background modification using the SDK. With regard to interactive live streaming, there is no explicit mention of it in the JaaS documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Zoom, we have examined the feature offerings of both the Zoom Video SDK as well as the Meeting SDK.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Video SDK&lt;/em&gt;: Users can build video-calling, noise reduction, and background modification features using the SDK. Call recording can be enabled using the dashboard and API. There is no explicit mention of interactive live streams in the Zoom documentation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Meeting SDK&lt;/em&gt;: Users can build video-calling, noise reduction, and background modification features using the SDK. Call recording can be enabled using the dashboard and API. For interactive live streaming, Zoom allows for the streaming of sessions with up to 10k participants.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;100ms enables users to build video-calling, interactive live-streaming (HLS + WebRTC in a single SDK), background modification and much more using the SDK. Noise reduction is currently available in Beta. 100ms also allows for instant streaming of video conferencing sessions with up to 10k participants.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Compliance and Security&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With data thefts and security breaches becoming more common with each passing day, it’s important for decision-makers to understand the security features each audio-video SDK offers. To help with that, we’ve listed the compliance certifications that each audio-video infra provider holds, below.&lt;/p&gt;

&lt;p&gt;However, for more details about security features such as access control, enterprise authentication, end-to-end encryption, privacy and encryption of recordings, and audit trails, please take a look at our detailed article.&lt;/p&gt;

&lt;p&gt;Full article: &lt;a href="https://www.100ms.live/blog/buying-guide-compliance-and-security"&gt;Compliance &amp;amp; Security for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GDPR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CCPA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;COPPA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ISO/IEC 27001&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SOC 2&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;GDPR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ISO 27001&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AICPA SOC 2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GDPR compliant for data processors&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SOC 2 Type II&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GDPR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HIPAA&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ISO/IEC 27001:2013&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;SOC 2 Type 1 &amp;amp; SOC 2 Type 2&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Support&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Vendor support is as important a factor as the features they provide. Integration and post-integration support extended by these providers will help you when something breaks on the backend, and you need expert support.&lt;/p&gt;

&lt;p&gt;While you can find a short summary explaining the support extended by Agora, Twilio, Jitsi, Zoom, and 100ms below, we’ve also created an elaborate comparison to give you a full overview in terms of Cost of Support, Integration/Account Management Support, Post-Integration Support, and Community Support.&lt;/p&gt;

&lt;p&gt;Full article: &lt;a href="https://www.100ms.live/blog/buying-guide-support"&gt;Support for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;br&gt;
Agora has different paid support plans - Standard, Premium, and Enterprise. Apart from these, the platform offers one free support plan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;br&gt;
Twilio has four different support plans — Developer (free), Production (paid), Business (paid), and Personalized (paid). Support pricing for the paid plans is based on the volume of usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;br&gt;
Jitsi’s open-source version is supported by a large community - the Jitsi community forum. Apart from this, Jitsi’s paid version, 8x8 Jitsi as a Service, includes dedicated support for strategic customers, as explained in their Global Premium Plus Support plan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;br&gt;
Zoom offers dedicated support for developers via Premier Developer Support, which provides prioritized, developer-specific resources. This guide does not explore Zoom support plans aimed at non-dev users and admins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;br&gt;
All support functions are available to paying customers at no additional cost. 100ms also offers testing support at no extra cost. This includes user testing, load testing, and network/device stress-testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pricing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Pricing is one of the most crucial deciding factors when it comes to purchasing an SDK. As with the other parameters, here’s a quick overview of what each vendor charges.&lt;/p&gt;

&lt;p&gt;However, details on each vendor’s pricing policies are beyond the scope of this piece. We’ve put together a detailed guide that explains the pricing models, pricing for recording, live streaming, add-ons, and more for each of these providers.&lt;/p&gt;

&lt;p&gt;Full Article: &lt;a href="https://www.100ms.live/blog/buying-guide-pricing"&gt;Pricing for Agora, Twilio, Jitsi (JaaS), Zoom &amp;amp; 100ms&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agora&lt;/strong&gt;&lt;br&gt;
Agora’s pricing is based on usage, which includes the number of minutes used, the number of users, and the unit price. However, Agora follows pricing on the basis of aggregate resolution in calls - this has been explained in detail in the main article.&lt;/p&gt;

&lt;p&gt;The unit price per 1,000 minutes is $0.99 (for audio) and $3.99 (for video HD). Pricing for Video Full HD, Video 2K, and Video 2K+ is explained in detail within the main article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twilio&lt;/strong&gt;&lt;br&gt;
Pricing for Twilio scales with participant minutes. We have examined the pricing of two Twilio video products - Twilio P2P &amp;amp; Twilio Video Groups.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Twilio P2P&lt;/em&gt;: Allows up to 3 participants and up to 10 audio-only participants. Priced at $0.0015 per participant per minute.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Twilio Video Groups&lt;/em&gt;: Allows users to create video apps for up to 50 participants. Priced at $0.004 per participant per minute, defined only by the minutes a user spends connected in a room.&lt;/p&gt;
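Since both products bill purely on participant-minutes, a room's cost is a straight multiplication. A quick back-of-the-envelope estimator as an illustration (our own helper, using only the per-minute prices quoted above):

```javascript
// Illustrative only: estimates the cost of a Twilio Video room from the
// per-participant-per-minute rates quoted above.
const TWILIO_RATES = {
  p2p: 0.0015,   // Twilio P2P: $ per participant per minute
  groups: 0.004, // Twilio Video Groups: $ per participant per minute
};

function twilioRoomCost(product, participants, minutes) {
  return participants * minutes * TWILIO_RATES[product];
}
```

For example, a 60-minute Video Groups call with 5 participants comes to 5 × 60 × $0.004 = $1.20.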

&lt;p&gt;&lt;strong&gt;Jitsi (Jitsi as a Service)&lt;/strong&gt;&lt;br&gt;
Jitsi’s pricing is based on a per-user model and charges for monthly active users (MAU). According to Jitsi, an MAU is a user who attends at least one meeting with at least one other user within a particular month. An MAU is also tracked on the basis of the device they log in from.&lt;/p&gt;

&lt;p&gt;JaaS offers various plans, and the pricing for each plan varies depending on the MAU count. The JaaS Dev plan includes up to 25 MAUs free, with only add-ons charged extra. Under the JaaS Basic plan, pricing is $99 per month for 300 MAUs. The pricing for JaaS Standard, JaaS Business, and plans with more than 3,000 MAUs is covered in the main article.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Zoom&lt;/strong&gt;&lt;br&gt;
Zoom offers two SDKs: a Video SDK (charged on the basis of usage) and a Meeting SDK (charged on a per-user basis).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Meeting SDK&lt;/em&gt;: The Meeting SDK offers four paid tiers: Basic, Pro, Business, and Enterprise. To use the Meeting SDK, only the host must purchase and hold a license. This license carries a specific limit on the number of participants supported in each tier.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Zoom Video SDK&lt;/em&gt;: Under the Zoom Video SDK, there are two pricing levels. With pay-as-you-go, you get 10,000 minutes per month, after which usage is priced at $0.0035 per minute. At the second level, you pay $1,000 per year with 30,000 minutes included per month. Beyond that limit, you pay $0.003 per minute.&lt;/p&gt;
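By this arithmetic, the two Video SDK levels can be compared directly for a given monthly volume. A sketch (our own helper name; rates and included minutes taken from the figures above):

```javascript
// Illustrative only: monthly cost of the two Zoom Video SDK pricing levels
// described above, for a given number of minutes used per month.
function zoomVideoSdkMonthlyCost(minutes) {
  // Pay-as-you-go: 10,000 minutes/month included, then $0.0035 per minute.
  const payAsYouGo = Math.max(0, minutes - 10000) * 0.0035;
  // Annual plan: $1,000/year (about $83.33/month), 30,000 minutes/month
  // included, then $0.003 per minute.
  const annual = 1000 / 12 + Math.max(0, minutes - 30000) * 0.003;
  return { payAsYouGo, annual };
}
```

Under these assumptions the two levels break even at roughly 57,000 minutes per month; below that pay-as-you-go is cheaper, above it the annual plan wins.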

&lt;p&gt;&lt;strong&gt;100ms&lt;/strong&gt;&lt;br&gt;
100ms offers a single SDK offering both video conferencing and live streaming capabilities with straightforward pricing for both use cases.&lt;/p&gt;

&lt;p&gt;100ms offers 10,000 free minutes for conferencing and another 10,000 free minutes for streaming for each business every month in addition to 1000 free encoding minutes.&lt;/p&gt;

&lt;p&gt;Beyond this, video conferencing is charged at $0.004 per participant per minute, while audio-only calls are charged at $0.001 per participant per minute. Live Streaming costs $0.004 per broadcaster and $0.0012 per viewer per minute, while additional encoding minutes are charged at $0.04.&lt;/p&gt;
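As an illustration of these rates (our own helper names; the streaming helper ignores the 10,000 free streaming minutes for simplicity):

```javascript
// Illustrative only: rough monthly 100ms cost from the published rates above.

// Conferencing: 10,000 free minutes, then $0.004 (video) or $0.001 (audio-only)
// per participant per minute.
function hundredMsConferencingCost(participantMinutes, audioOnly = false) {
  const billable = Math.max(0, participantMinutes - 10000);
  return billable * (audioOnly ? 0.001 : 0.004);
}

// Live streaming: $0.004 per broadcaster-minute plus $0.0012 per viewer-minute.
// (The 10,000 free streaming minutes are ignored here for simplicity.)
function hundredMsStreamingCost(broadcasterMinutes, viewerMinutes) {
  return broadcasterMinutes * 0.004 + viewerMinutes * 0.0012;
}
```

For example, a one-hour stream with 2 broadcasters and 500 viewers works out to 120 × $0.004 + 30,000 × $0.0012 = $36.48 before free minutes are applied.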

&lt;p&gt;While there are various parameters and aspects that PMs, CTOs, and decision-makers need to keep in mind, these are a few that we considered indispensable. We recommend that you have a look at the separate pieces focusing on each of these parameters. They offer research-based data from each vendor, as well as helpful links you can use to conduct in-depth research yourself.&lt;/p&gt;

&lt;p&gt;To know more about how 100ms can help fill in your video conferencing requirements, &lt;a href="https://meet.100ms.live/meetings/isha-deo/intro?__hstc=159648061.f079b4acf665d0fbf04f116fc64e1893.1655282117756.1663237861593.1663245279090.164&amp;amp;__hssc=159648061.1.1663245279090&amp;amp;__hsfp=69242381"&gt;book a call with us&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>video</category>
      <category>webdev</category>
      <category>help</category>
    </item>
    <item>
      <title>Introducing 100ms Starter Kits</title>
      <dc:creator>Vrushti</dc:creator>
      <pubDate>Tue, 06 Sep 2022 10:18:36 +0000</pubDate>
      <link>https://dev.to/100mslive/introducing-100ms-starter-kits-4pbg</link>
      <guid>https://dev.to/100mslive/introducing-100ms-starter-kits-4pbg</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zdkFIHqN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrkkcjmio8ylbygzlwi2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zdkFIHqN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrkkcjmio8ylbygzlwi2.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building live audio or video is hard.&lt;/p&gt;

&lt;p&gt;At 100ms, we’re working on simplifying that process because we believe that nearly all apps will have live audio/video in the future — a “live-first” digital world if you will.&lt;/p&gt;

&lt;p&gt;100ms is industry-agnostic, thus enabling product managers (PMs), developers, and engineers to shape and build real-time, life-like interactions the way they see fit for a multitude of functions. In fact, with the 100ms SDK, our customers have already built diverse use cases across industries such as dating, gaming, education, and the like — delivering millions of live audio/video minutes to their users.&lt;/p&gt;

&lt;p&gt;But, as modern users increasingly demand real-time interactive experiences online, the bar for delivering value has risen dramatically. To meet these demands, PMs and developers want to experience a feature before they build it out themselves. This is where the 100ms Starter Kits come in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4kXNQHUP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0pos7cmy92ugruqvg4y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4kXNQHUP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0pos7cmy92ugruqvg4y.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging Imagination and Experience&lt;/strong&gt;&lt;br&gt;
After launching our &lt;a href="https://www.100ms.live/marketplace/virtual-events-starter-kit"&gt;Virtual Event Starter Kit&lt;/a&gt; in partnership with &lt;a href="https://twitter.com/vercel/status/1499402328813850624"&gt;Vercel&lt;/a&gt;, we realized that our customers loved the fact that we were able to instantly deliver a working demo of a Virtual Event Use Case. Since the starter kit is open source, anyone could take the code to extend that experience.&lt;/p&gt;

&lt;p&gt;By enabling this, we realized that users could instantly connect their imagination to what the app would actually look like.&lt;/p&gt;

&lt;p&gt;In other words, we were able to bridge the gap between imagination and experience.&lt;/p&gt;

&lt;p&gt;This is what inspired us to build the &lt;a href="https://www.100ms.live/marketplace"&gt;100ms Starter Kits&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are these Starter Kits?&lt;/strong&gt;&lt;br&gt;
These starter kits are proof-of-concept versions of use cases constructed on real-life interactions built with the 100ms SDK. We developed them with the hope that you’ll use them to actualize a feature/app you’ve been thinking about for a while, and go “This is what I wanted to build!”&lt;/p&gt;

&lt;p&gt;For example, let’s say you wanted to simulate the action of tapping a colleague’s shoulder to discuss something — while in an online meeting. These starter kits enable you to experience that exact feature by providing a one-click demo. You can also choose to deploy them and start experimenting.&lt;/p&gt;

&lt;p&gt;These starter kits are a quick jumping-off point to demonstrate a working, proof-of-concept version of whatever you are imagining. Moreover, these kits also serve as the building blocks for implementing your own ideas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zY4r5CwW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgjzrutgpzzqljuxbeg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zY4r5CwW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgjzrutgpzzqljuxbeg9.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;What can you do with these Starter Kits?&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instant Demos:&lt;/strong&gt; Immediately experience our starter kits with the View Demo option in our &lt;a href="https://www.100ms.live/marketplace"&gt;Examples&lt;/a&gt; section. This allows you to assess the audio and video quality of our SDKs in the context of a real user interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VaWSwfRA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h481lfa0cb73u0kzohrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VaWSwfRA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h481lfa0cb73u0kzohrh.png" alt="Image description" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open-Source GitHub Repos:&lt;/strong&gt; You can download the source code of these starter kits and study it as a reference implementation. You can also experiment and build on these starter kits with your own ideas.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JlAYfntw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8t4bj0zqgrd2sd1w6fn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JlAYfntw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8t4bj0zqgrd2sd1w6fn5.png" alt="Image description" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy to Vercel:&lt;/strong&gt; Deploy these starter kits to your Vercel account and experiment with them in your own environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OAoCbhib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ic8hihoneob86h5m1qs7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OAoCbhib--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ic8hihoneob86h5m1qs7.png" alt="Image description" width="880" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Breaking down a Starter Kit&lt;/strong&gt;&lt;br&gt;
Our starter kits are open-sourced apps wrapped as frontend layers around our &lt;a href="https://www.100ms.live/blog/roles-on-100ms"&gt;template policy&lt;/a&gt; (business logic around roles &amp;amp; permissions) and the 100ms SDK.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XjKI8T3u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebostrn42nukh2thy2xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XjKI8T3u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ebostrn42nukh2thy2xj.png" alt="Image description" width="880" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To break this down further, let’s use an example.&lt;/p&gt;

&lt;p&gt;Let’s say you want to build a one-click engagement solution between coworkers — something as simple as tapping a colleague’s shoulder to talk to them, but in a virtual environment (like a Slack Huddle).&lt;/p&gt;

&lt;p&gt;Before this launch, you could have created an audio room template to implement this action. That would have given you an audio-first conversation, but not a holistic experience. Now, with the starter kit, you can build on this experience, add an interactive one-click button to the app, and get much closer to a product that replicates real-world, in-person communication.&lt;/p&gt;

&lt;p&gt;In the above example, the audio room is the template (use case), and the frontend UI is wrapped around it, allowing for the solution to show up as a one-click engagement. Combined with the 100ms SDK, the entire package comprises a single Starter Kit app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Five Starter Kits You Can Try Out&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As of now, we have rolled out five starter kits covering various use cases under 100ms Examples. Some of them were developed with the help of our amazing &lt;a href="https://discord.com/invite/kGdmszyzq2"&gt;community&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We look forward to adding more of these kits in the near future to enable more relatable and delightful quick-start experiences. But for now, these are our initial rollouts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Video Conference Starter Kit&lt;/strong&gt;&lt;br&gt;
Offer your customers engaging live conference experiences with excellent audio/video quality via 100ms’ Video Conferencing Starter Kit. This is a full-fledged, feature-rich starter kit for building any audio/video conferencing product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7apJedN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/upubenk4amkfgdrrlsor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7apJedN6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/upubenk4amkfgdrrlsor.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Virtual Event Starter Kit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Own your live event experience with this virtual events starter kit, featuring real-time audio-video interactions that you can configure on the go. With it, you can host a live event or workshop with 10,000 viewers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NY7KRY3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjbgh1408le9865ln30h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NY7KRY3s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjbgh1408le9865ln30h.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Slack Huddle Clone&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Help users get on impromptu, lightweight audio calls for a quick, real-time conversation with the 100ms Slack Huddle Clone kit. With this starter kit, you can build the “quick tap conversation” use case discussed above.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YLgFFBs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y096ysel01htblk4sav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YLgFFBs3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y096ysel01htblk4sav.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Discord Clone&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Offer your users an excellent audio conferencing/streaming experience with in-built advanced interactivity. Use our Discord Starter Kit to host or build a community discussion experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bV0BeRMF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijrke8qokvtpzc2a742l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bV0BeRMF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijrke8qokvtpzc2a742l.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Audio Room&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Build audio-first apps with this starter kit and engage your users with experiences like live audio calling, podcast streaming, Clubhouse-like audio rooms, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aj6TopTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oc1mk4d3m15n8dbz44jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aj6TopTr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oc1mk4d3m15n8dbz44jg.png" alt="Image description" width="880" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Closing Notes&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
With the launch of Starter Kits, we aim to unveil a new dimension of experimentation for our users. So head to the Examples section from the navbar and start building your own live experience. Don’t forget to share your apps with us, because we can’t wait to see what you build!&lt;/p&gt;

&lt;p&gt;If you are looking to contribute to these Starter Kits or partner with us, reach out to us in our &lt;a href="https://discord.com/invite/kGdmszyzq2"&gt;Discord&lt;/a&gt; community!&lt;/p&gt;

</description>
      <category>developer</category>
      <category>beginners</category>
      <category>webdev</category>
      <category>ios</category>
    </item>
    <item>
      <title>Server-side Considerations for your WebRTC Infrastructure</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Fri, 19 Aug 2022 06:04:00 +0000</pubDate>
      <link>https://dev.to/100mslive/server-side-considerations-for-your-webrtc-infrastructure-4in6</link>
      <guid>https://dev.to/100mslive/server-side-considerations-for-your-webrtc-infrastructure-4in6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ixaUb0CX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qik3wb5qg0h8909mjgex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ixaUb0CX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qik3wb5qg0h8909mjgex.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Internet users today expect flawless online experiences, be it browsing Instagram, streaming BTS, or debating anime fandoms. This expectation extends to online video communication.&lt;/p&gt;

&lt;p&gt;If they are to meet the expectations of contemporary users, video conferences must offer sub-second latency and high-quality audio/video transmission. Usually, developers choose WebRTC to build video experiences of this caliber.&lt;/p&gt;

&lt;p&gt;You might’ve read or heard about how WebRTC is a client-oriented protocol that usually doesn’t require any server to function. However, that’s not the whole story.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Server in WebRTC&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is true that some WebRTC calls are possible without any need for an external server. But, even in those cases, a signaling server is required to establish the connection.&lt;/p&gt;

&lt;p&gt;Most calls would simply fail on a direct connection, and even those that connect run into issues as more peers join, something we will discuss later in the article. To work around these problems and optimize call performance, it is recommended that you use additional servers in your WebRTC setup.&lt;/p&gt;

&lt;p&gt;This article will discuss a few elements on the server side you must consider when building a WebRTC solution. We will talk about the servers, multi-peer WebRTC architecture, and how it all works — so that you make the right architecture choices for your WebRTC application.&lt;/p&gt;

&lt;p&gt;The following is the list of servers explored in this piece. Some of these servers are mandatory while the presence of others will depend on the architecture you choose to work with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Signaling Server (Mandatory)&lt;/li&gt;
&lt;li&gt;STUN/TURN Servers (Mesh Architecture)&lt;/li&gt;
&lt;li&gt;WebRTC Media Servers:
&lt;ul&gt;
&lt;li&gt;MCU Server (Mixing Architecture)&lt;/li&gt;
&lt;li&gt;SFU Server (Routing Architecture)&lt;/li&gt;
&lt;li&gt;SFU Relay (Routing Architecture)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Signaling Server&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Signaling refers to the exchange of information between peers in a network. It is required to set up, control, and terminate a WebRTC call. WebRTC doesn’t specify a rigid way for signaling peers, which makes it possible for developers to manage signaling as they see fit. To implement this out-of-band signaling, a dedicated signaling server is often used.&lt;/p&gt;

&lt;p&gt;The signaling server is mainly used to initiate a call. Once that is done, WebRTC will take over. However, this does not mean that we won’t require the signaling server once the call has started.&lt;/p&gt;

&lt;p&gt;Even though most state changes like voice mute/unmute can be notified to the other peer(s) through WebRTC data channels, the signaling server has to be present throughout the call. It handles unusual scenarios like network disconnection, where the peer requires signaling to reconnect to the call.&lt;/p&gt;
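&lt;p&gt;Since WebRTC leaves the signaling transport up to you (WebSockets are a common choice), the core of a signaling server can be reduced to a routing decision: who should receive each message? The sketch below is purely illustrative; the message shapes and peer names are our own, not part of any standard.&lt;/p&gt;

```javascript
// Minimal signaling relay logic, independent of any transport.
// Message shapes ({ type, from, to }) are illustrative, not a standard.
function routeSignal(message, peers) {
  const { type, from, to } = message;
  switch (type) {
    case 'join':
    case 'leave':
      // Announce joins/leaves to everyone else in the room.
      return peers.filter((p) => p !== from);
    case 'offer':
    case 'answer':
    case 'ice-candidate':
      // SDP offers/answers and ICE candidates go to one target peer.
      return peers.includes(to) ? [to] : [];
    default:
      return [];
  }
}

const peers = ['alice', 'bob', 'carol'];
console.log(routeSignal({ type: 'offer', from: 'alice', to: 'bob' }, peers)); // delivered to 'bob' only
console.log(routeSignal({ type: 'join', from: 'dave' }, peers)); // fanned out to all three existing peers
```

&lt;p&gt;In a real deployment this function would sit behind, say, a WebSocket server that maps peer IDs to open sockets; keeping the routing logic transport-agnostic makes it easy to test.&lt;/p&gt;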

&lt;p&gt;Now, we will take a look at the multi-peer architecture in WebRTC, before moving on to the servers we might need once signaling is complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WebRTC multi-peer architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In WebRTC, there are multiple architectures that define how the peers are connected in a call. Generally, the server-side requirements depend on the architecture that you choose. Picking the right architecture for your use case helps identify the servers you will need.&lt;/p&gt;

&lt;p&gt;We will now take a look at the most popular WebRTC architectures:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Mesh architecture&lt;/li&gt;
&lt;li&gt;Mixing architecture&lt;/li&gt;
&lt;li&gt;Routing architecture&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below, we discuss these architectures along with the servers they need to function properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Mesh Architecture (STUN/TURN Servers)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this architecture, every peer is directly connected to every other peer in the call. For example, in a call with 4 peers, every peer has to send their video to 3 other peers and receive video from the same 3.&lt;/p&gt;

&lt;p&gt;This is generally suitable for WebRTC calls with a limited number of peers.&lt;/p&gt;
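&lt;p&gt;The reason mesh only suits small calls is that link counts grow quadratically with the number of peers. A quick sketch of the arithmetic:&lt;/p&gt;

```javascript
// Per-peer and total link counts in a full-mesh call of n peers.
function meshCost(n) {
  return {
    uplinksPerPeer: n - 1,               // each peer sends to everyone else
    downlinksPerPeer: n - 1,             // and receives from everyone else
    totalConnections: (n * (n - 1)) / 2, // unique peer-to-peer links
  };
}

console.log(meshCost(4));  // { uplinksPerPeer: 3, downlinksPerPeer: 3, totalConnections: 6 }
console.log(meshCost(10)); // 9 uplinks per peer and 45 links; this is why mesh stops scaling
```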

&lt;p&gt;&lt;strong&gt;NAT restrictions and Firewall:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A &lt;a href="https://askleo.com/how_does_nat_work/"&gt;NAT (Network Address Translation)&lt;/a&gt; router maps the private IP addresses in its network to a public IP address. When a peer is behind a NAT router, it only knows its private IP address (which is invalid outside its local network), so it cannot exchange its actual public IP address during the signaling phase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Additionally, when a peer is behind a &lt;a href="https://doc-kurento.readthedocs.io/en/6.14.0/knowledge/nat.html#symmetric-nat"&gt;Symmetric NAT&lt;/a&gt; router (a type of NAT), a direct connection becomes impossible, because Symmetric NAT assigns a different port mapping to every connection the peer makes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In some cases, the Firewall on a peer’s device might block a direct connection with another peer over the internet for security reasons.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a direct connection is not possible in this architecture, NAT traversal servers such as STUN and TURN can be used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Session Traversal Utility for NAT (STUN)&lt;/strong&gt;&lt;br&gt;
A STUN server is used to retrieve the public IP address of a device behind NAT. This allows the device to communicate after learning its address on the internet. This is enough for roughly 80% of connections to be successful, but it cannot be used for cases where the peers are behind a Symmetric NAT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traversal Using Relay NAT (TURN)&lt;/strong&gt;&lt;br&gt;
A TURN server is used to relay media between peers when a direct connection is not possible — often due to a Symmetric NAT in the network or a firewall blocking connections.&lt;/p&gt;

&lt;p&gt;The TURN server is also known as the “relay” server and costs more than STUN to maintain because it relays media throughout a WebRTC connection. Since the TURN server is an extension of STUN, its implementations include a STUN server built into it by default.&lt;/p&gt;
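&lt;p&gt;On the client side, STUN and TURN servers are handed to the browser’s RTCPeerConnection through its iceServers configuration. A minimal sketch: the STUN URL below is Google’s well-known public server, while the TURN URL and credentials are placeholders you would replace with your own deployment’s values.&lt;/p&gt;

```javascript
// ICE server configuration for RTCPeerConnection.
// stun.l.google.com is a well-known public STUN server; the TURN
// entry is a placeholder, substitute your own server and credentials.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:turn.example.com:3478',
      username: 'placeholder-user',
      credential: 'placeholder-pass',
    },
  ],
};

// In a browser you would then create the connection like this:
// const pc = new RTCPeerConnection(rtcConfig);
console.log(rtcConfig.iceServers.length); // 2
```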

&lt;blockquote&gt;
&lt;p&gt;For more details on STUN/TURN servers and how to use them in a simple WebRTC video app, have a look at Build your first WebRTC app with Python and React.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FoPyYA9Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zxr6e3jw4743uhxd19u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FoPyYA9Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zxr6e3jw4743uhxd19u.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages and Disadvantages of Mesh Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No need for a central media server as the connection is peer-to-peer. This reduces server costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Relatively simple to implement in WebRTC.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Each participant has to send media to every other peer, which requires N-1 uplinks &amp;amp; N-1 downlinks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not much control over the media quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Exploring the Mixing and Routing architectures requires some familiarity with the idea of WebRTC Media Servers. So let’s start with that.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WebRTC Media Servers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the Mesh architecture, bandwidth expenditure becomes quite high for peers when the number of people in the call exceeds 4. The resource consumption tends to skyrocket, overheating the peer devices to the point that they can malfunction or even crash.&lt;/p&gt;

&lt;p&gt;Therefore, for use cases with more than 4 people in a call, it is recommended that you choose an architecture based around a media server.&lt;/p&gt;

&lt;p&gt;WebRTC Media Servers are central servers that peers send their media to, and receive processed media from. They act as “multimedia middleware” and can be used to offer several benefits. But, trying to implement one from scratch isn’t exactly a walk in the park.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Even if we use a media server, we must sometimes ensure that it is reachable by peers over TCP, with the help of a TURN server.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A well-implemented media server is highly optimized for performance and can offer numerous capabilities outside its main requirement. Here are some useful features an ideal media server should have:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simulcast&lt;/strong&gt;&lt;br&gt;
Video is served to the peers at different bitrates based on their configuration or network conditions. The peers send their video at multiple resolutions and bitrates to the media server and it chooses which version to send to each peer.&lt;/p&gt;
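&lt;p&gt;The server’s side of simulcast boils down to layer selection: pick the highest-bitrate rendition a viewer’s connection can sustain. A hedged sketch of that choice; the layer names and bitrates below are illustrative, not 100ms or WebRTC constants.&lt;/p&gt;

```javascript
// Illustrative simulcast layers; real bitrates depend on encoder setup.
const layers = [
  { rid: 'low',  width: 320,  bitrateKbps: 150 },
  { rid: 'mid',  width: 640,  bitrateKbps: 500 },
  { rid: 'high', width: 1280, bitrateKbps: 1500 },
];

// Pick the best layer that fits the viewer's estimated bandwidth,
// falling back to the lowest layer if even that doesn't fit.
function selectLayer(estimatedKbps, available = layers) {
  const fitting = available.filter((l) => l.bitrateKbps <= estimatedKbps);
  return fitting.length ? fitting[fitting.length - 1] : available[0];
}

console.log(selectLayer(2000).rid); // 'high'
console.log(selectLayer(600).rid);  // 'mid'
console.log(selectLayer(50).rid);   // 'low' (fallback)
```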

&lt;p&gt;&lt;strong&gt;Recording&lt;/strong&gt;&lt;br&gt;
Call recording is made possible either by directly forwarding all incoming media from the server to storage or by connecting a custom peer to the server that receives all media streams and stores them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transcoding&lt;/strong&gt;&lt;br&gt;
Not all peers connected to the call might support the same audio/video codec. The media server should be able to resolve this issue by transcoding the audio/video to an appropriate codec supported by all, before sending the streams out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audio/Video Optimisations&lt;/strong&gt;&lt;br&gt;
Customized audio/video optimization should be possible. The server should be able to send only the media of active speakers to reduce bandwidth consumption, selectively mute audio from someone, or prioritize screen-share media over other videos.&lt;/p&gt;

&lt;p&gt;Here are some of the widely used media servers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- MCU (Multipoint Control Unit)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- SFU (Selective Forwarding Unit)&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;- SFU Relay (Distributed SFU)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s discuss the architectures (Mixing and Routing) that use these media servers to solve issues commonly faced in the Mesh architecture — high bandwidth expenditure and heavy resource consumption.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Mixing Architecture (using the MCU Server)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this setup, all peers send their media to a central media server. Then, the media server operates on the media gathered, packs it into a single stream, and sends it to all peers. Here, every peer sends a single media stream to and receives one media stream from the server. The media server used here is called Multipoint Control Unit (MCU).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multipoint Control Unit (MCU)&lt;/strong&gt;&lt;br&gt;
An MCU server receives media from all peers and reworks it, performing the following functions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Decoding:&lt;/strong&gt; Upon gathering the primary media streams from all peers, the MCU decodes them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Rescaling:&lt;/strong&gt; The decoded videos are rescaled based on the peer’s network conditions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Composing:&lt;/strong&gt; The rescaled videos are combined into a single video stream in a layout requested by that peer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Encoding:&lt;/strong&gt; Finally, the video stream is encoded for delivery to the peer.&lt;/p&gt;

&lt;p&gt;This process is carried out in parallel for every peer in the call. It lets peers send and receive media as a single stream without spending too much bandwidth.&lt;/p&gt;
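&lt;p&gt;The composing step, laying N decoded videos out in one output frame, typically reduces to choosing a tile grid. A minimal sketch of that layout math, using a simple near-square grid (one of many possible layouts an MCU could offer):&lt;/p&gt;

```javascript
// Choose a near-square grid for n video tiles, as an MCU's composer might.
function gridFor(n) {
  const cols = Math.ceil(Math.sqrt(n));
  const rows = Math.ceil(n / cols);
  return { rows, cols };
}

// Pixel rectangle for tile i inside an output frame of the given size.
function tileRect(i, n, frameW, frameH) {
  const { rows, cols } = gridFor(n);
  const w = Math.floor(frameW / cols);
  const h = Math.floor(frameH / rows);
  return { x: (i % cols) * w, y: Math.floor(i / cols) * h, w, h };
}

console.log(gridFor(4));                 // { rows: 2, cols: 2 }
console.log(tileRect(3, 4, 1280, 720));  // { x: 640, y: 360, w: 640, h: 360 }
```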

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--02mUL7pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fkcesf0qedgq0r47vgo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--02mUL7pd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fkcesf0qedgq0r47vgo.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages and Disadvantages of Mixing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The server sends a single media stream to the peer, which makes it possible for devices with lower processing power to participate in the call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Requires very little resource or bandwidth due to the peer having just a single uplink and downlink.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Server-side recording is possible.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The server requires high processing power and is generally costly to maintain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Peers may experience delays in receiving media packets, as they have to be processed before sending from the server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While this sounds like a great option, the number of peers in a call directly depends on the performance of the MCU in the Mixing architecture. In reality, it is hard to maintain a WebRTC call with more than 30 peers in Mixing architecture, without the MCU heavily draining server resources.&lt;/p&gt;

&lt;p&gt;Despite this, Mixing remained the most widely used WebRTC architecture until a few years ago. However, it has slowly been replaced by the Routing architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Routing Architecture (SFU Server)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this setup, all peers send their media to a central media server. The server forwards the media streams to all other peers separately, without operating on them in any way. Here, every peer sends a single media stream to the server and receives N-1 media streams (where N is the number of peers present in the call) from the server. The media server used here is called the Selective Forwarding Unit (SFU).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selective Forwarding Unit (SFU)&lt;/strong&gt;&lt;br&gt;
An SFU server receives media from all peers in a call. Then, all that media is routed “as is” to every other peer connected to the server. The peers can send more than one media stream to the server, making simulcast possible. The SFU can also be customized to automatically decide which media stream to send to a specific peer.&lt;/p&gt;
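&lt;p&gt;At its core, selective forwarding is a routing table: every incoming stream fans out to every peer except its sender. A hedged sketch of building that table (the peer and stream names are our own, for illustration):&lt;/p&gt;

```javascript
// Build an SFU forwarding table: stream -> list of receiving peers.
function buildForwardingTable(streamsByPeer) {
  const peers = Object.keys(streamsByPeer);
  const table = {};
  for (const sender of peers) {
    for (const stream of streamsByPeer[sender]) {
      // Forward "as is" to everyone except the peer that sent it.
      table[stream] = peers.filter((p) => p !== sender);
    }
  }
  return table;
}

const table = buildForwardingTable({
  alice: ['alice-cam'],
  bob: ['bob-cam', 'bob-screen'],
  carol: ['carol-cam'],
});
console.log(table['bob-screen']); // forwarded to alice and carol, never back to bob
```

&lt;p&gt;A production SFU layers simulcast selection, bandwidth estimation, and congestion control on top of this table, but the routing itself stays this simple.&lt;/p&gt;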

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W6xke-8E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fndctqxcjnavhhy6xvqf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W6xke-8E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fndctqxcjnavhhy6xvqf.jpg" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SFU Relay (Distributed SFU)&lt;/strong&gt;&lt;br&gt;
This is a fairly recent development in the domain. SFU Relay servers are simply SFU servers that can communicate with each other. One SFU server can relay media to another, creating a distributed SFU structure. This reduces the load on a single SFU and makes the whole network more scalable. In theory, any server can connect to the SFU relay via an API and receive the routed media.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages and Disadvantages of Routing&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Less demanding on server resources compared to options like MCU.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Works with asymmetric bandwidth (lower upload rate than download rate) for a peer, as there is only a single uplink with N-1 downlinks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simulcast is supported for different resolutions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Server-side recording is not directly possible, though media can be routed to a peer that records the streams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The peer device must be good enough to handle multiple downlinks, unlike in Mixing architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It requires complex design and implementation on the server side.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Closing Notes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To wrap up, let’s quickly summarise the architectures and corresponding use cases discussed above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Mesh Architecture with a STUN/TURN server is ideal for calls with 4 or fewer peers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Mixing Architecture with an MCU server is good for calls with more than 4 peers. It is widely used where support for legacy devices is a necessity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Routing Architecture with an SFU server is the modern approach to WebRTC video conferencing. As of now, this is the ideal approach to connecting peers in a call when their number exceeds the limits of the Mesh architecture.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is also possible to dynamically switch between different architectures based on the call size, so as to find a balance between app performance and server costs.&lt;/p&gt;
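&lt;p&gt;That switching decision can be as simple as a threshold on peer count. A sketch of the rules of thumb summarised above; the cut-offs are illustrative, not fixed standards.&lt;/p&gt;

```javascript
// Pick an architecture from the call size. The thresholds mirror the
// guidelines discussed in this article and are illustrative only.
function chooseArchitecture(peerCount, { legacyDevices = false } = {}) {
  if (peerCount <= 4) return 'mesh';        // few peers: P2P with STUN/TURN
  if (legacyDevices) return 'mixing (MCU)'; // one uplink/downlink per peer
  return 'routing (SFU)';                   // the modern default
}

console.log(chooseArchitecture(3));  // 'mesh'
console.log(chooseArchitecture(12)); // 'routing (SFU)'
console.log(chooseArchitecture(12, { legacyDevices: true })); // 'mixing (MCU)'
```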

&lt;p&gt;Choosing and implementing the appropriate WebRTC server and architecture is just one side of the coin. Much more is required to make your WebRTC service reliable — optimizing performance, reducing call failure rate, and handling edge cases.&lt;/p&gt;

&lt;p&gt;If you’re planning on writing your own media server, you should also be aware of some basic problems that often show up in the process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Managing peers with a bad network connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Helping peers that cannot support all the mixed codecs running in the call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handling peer reconnections as well as new peers joining in and existing peers leaving the call.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Algorithms to perform bandwidth estimation for a peer so that the server does not send more data than it can handle.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, even if you choose the right server-side architecture, you will still have a lot to deal with regarding the technicalities of WebRTC.&lt;/p&gt;

&lt;p&gt;If you don’t want to deal with the intricacies of WebRTC but still want to host calls with a gold-standard video solution (be it video conferencing or streaming), you have options like &lt;a href="https://www.100ms.live/"&gt;100ms&lt;/a&gt; to do the heavy lifting for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;WebRTC with 100ms&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;100ms’ live video SDKs allow you to add live video capabilities to your application with just a few lines of code. With multiple highly-relevant features and a predictable &lt;a href="https://www.100ms.live/pricing"&gt;pricing plan&lt;/a&gt;, you don’t have to worry about dealing with exorbitant server costs for your app.&lt;/p&gt;

&lt;p&gt;If your application requires high-quality video capabilities but you’re unsure about building it from scratch, or you just don’t want to deal with fine-tuning the nitty-gritty of WebRTC, 100ms is your best bet.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Further Reading&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/python-react-webrtc-app"&gt;Build your first WebRTC app with Python and React&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/google-classroom-clone-react-100ms"&gt;Building a Google classroom clone with React and 100ms SDK&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/building-slack-huddle-clone"&gt;Building Slack huddle clone&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.100ms.live/blog/video-chat-app-with-vuejs-and-golang"&gt;Building Video Chat App with VueJs and Golang&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webrtc</category>
      <category>devops</category>
      <category>developer</category>
    </item>
    <item>
      <title>A New Approach to Live Streaming</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 16 Aug 2022 10:41:28 +0000</pubDate>
      <link>https://dev.to/100mslive/a-new-approach-to-live-streaming-1dnd</link>
      <guid>https://dev.to/100mslive/a-new-approach-to-live-streaming-1dnd</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3HJH6pTb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ugo3e5zp88o6lbm9xa9p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3HJH6pTb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ugo3e5zp88o6lbm9xa9p.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The long arc of tech progress has shown that user behavior and technology evolve together: changes in user behavior inspire technology, and then technology drives more of that behavior. The world of live streaming is going through one such change, where new user behavior is driving us to reevaluate the live streaming tech stack.&lt;/p&gt;

&lt;p&gt;Since the first live stream in 1995, when the Yankees played the Mariners, live streaming has become an important medium for users on the Internet to learn, play, shop, and work. Who gets to stream and how they interact with their audience is changing rapidly, and this change is informing our approach to building infrastructure for live streaming.&lt;/p&gt;

&lt;h2&gt;
  
  
  The present-day live streaming tech stack
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rgOYOhZ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpcy0gmixz3lzwsizihi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rgOYOhZ5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpcy0gmixz3lzwsizihi.png" alt="Image description" width="880" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most live streaming apps today are built by combining &lt;a href="https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol"&gt;RTMP&lt;/a&gt; encoded media streams at the streamer’s end and &lt;a href="https://en.wikipedia.org/wiki/HTTP_Live_Streaming"&gt;HLS&lt;/a&gt; streams at the viewer’s end. An industry of media servers in the middle exists to transcode the input format into the output stream.&lt;/p&gt;

&lt;p&gt;RTMP is a mature protocol that was originally built to support Adobe Flash. Given its maturity, RTMP is widely supported by encoding software and hardware, which can ingest raw device streams and output RTMP streams. RTMP is also fast: it optimizes for reduced latency.&lt;/p&gt;

&lt;p&gt;RTMP used to work well on the viewer’s end too, given that it was the preferred streaming protocol for Adobe Flash. However, as Flash usage went down and HTML5 emerged, HLS became a better fit for viewers. HLS is built over HTTP and is widely supported across all mobile and desktop devices.&lt;/p&gt;

&lt;p&gt;The combination of RTMP and HLS has worked well given the asymmetry in live streaming personas: there are many more viewers who require frictionless viewing and there are only a few streamers who need to configure specialized encoding software (like OBS).&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;So what’s changing?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;User behavior around live streaming is changing in 3 big ways: democratization, interactivity, and creator collaboration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b0eiaRW8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbqis3kjvg2os70nmgfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b0eiaRW8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbqis3kjvg2os70nmgfd.png" alt="Image description" width="880" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Democratization&lt;/strong&gt;&lt;br&gt;
Live streaming has been democratized and is no longer limited to professional streamers using sophisticated equipment connected to reliable broadband. Everyone is now streaming live on Instagram and YouTube, often from mobile devices connected to unreliable networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interactivity&lt;/strong&gt;&lt;br&gt;
Live streams are no longer one-way broadcasts. Streamers and viewers are looking for ways to engage and interact with each other. Chat and emoji reactions running alongside live streams are now table-stakes.&lt;/p&gt;

&lt;p&gt;More recently, we have come across scenarios where viewers get “promoted” into becoming streamers. This enables new stream formats and increases the engagement between streamers and viewers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creator collaboration&lt;/strong&gt;&lt;br&gt;
Streamers are also experimenting with newer formats that involve collaborating with other streamers. As &lt;a href="https://ping.gg/"&gt;Ping Labs&lt;/a&gt; puts it, video calls have now become video content.&lt;/p&gt;

&lt;p&gt;The pandemic has accelerated these changes. Live streaming creation and viewership shot upwards, and that motivated more streams, more interactivity and more experimentation with stream formats.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What will happen to the live streaming tech?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mcQr3k6---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gut0egyefeu7xmb5oc99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mcQr3k6---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gut0egyefeu7xmb5oc99.png" alt="Image description" width="880" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Live streaming works well on the viewer’s end. &lt;a href="https://www.cloudflare.com/en-gb/learning/video/what-is-http-live-streaming/"&gt;HLS&lt;/a&gt;, and similar protocols like &lt;a href="https://www.cloudflare.com/learning/video/what-is-mpeg-dash/"&gt;MPEG-DASH&lt;/a&gt;, have democratized viewership by building on top of HTTP. Anyone with a web browser or a smartphone can view live streams today.&lt;/p&gt;

&lt;p&gt;It's time something similar happens on the streamer’s end. The solution to democratization, interactivity, and creator collaboration will be found in WebRTC becoming an alternative to RTMP in the live streaming tech stack.&lt;/p&gt;

&lt;p&gt;Given the times it was designed in, RTMP is unsuitable for streaming from mobile devices. It is built over TCP and assumes a fixed encoding bitrate. When the device runs into network disruptions, an RTMP encoder keeps producing output, which further chokes the network. WebRTC is a more modern protocol: it’s built over UDP and can adjust encoding bitrates based on network feedback.&lt;/p&gt;
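&lt;p&gt;The feedback-driven encoding described above can be sketched as a toy control loop. This is only an illustration of the idea, not WebRTC’s actual congestion controller; the loss thresholds and bitrate bounds below are made up for the example:&lt;/p&gt;

```python
# Toy sketch of feedback-driven encoder bitrate adaptation, the behavior
# that separates WebRTC from a fixed-bitrate RTMP encoder. Illustrative
# AIMD loop only, not WebRTC's real congestion control algorithm.

MIN_BITRATE_KBPS = 300
MAX_BITRATE_KBPS = 2500

def adapt_bitrate(current_kbps: int, packet_loss: float) -> int:
    """Return the next encoding bitrate given observed packet loss (0.0-1.0)."""
    if packet_loss > 0.10:
        # Heavy loss: back off multiplicatively so we stop choking the link.
        next_kbps = int(current_kbps * 0.7)
    elif packet_loss < 0.02:
        # Clean network: probe upward additively.
        next_kbps = current_kbps + 100
    else:
        next_kbps = current_kbps  # hold steady in the gray zone
    return max(MIN_BITRATE_KBPS, min(MAX_BITRATE_KBPS, next_kbps))
```

&lt;p&gt;A fixed-bitrate RTMP encoder has no equivalent of this loop: it keeps pushing the configured bitrate even as the network degrades, which is exactly the choking behavior described above.&lt;/p&gt;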

&lt;p&gt;WebRTC is also more widely available. Any modern web browser today can encode WebRTC streams without requiring any additional software. Native apps on iOS and Android also support WebRTC well.&lt;/p&gt;

&lt;p&gt;WebRTC is also built for interactivity, given that it was originally a solution to real-time video conferencing. Chat and other forms of interactivity are easy to achieve on top of WebRTC. It is also possible to invite HLS viewers as WebRTC participants, which makes it suitable for advanced interactivity scenarios, where the viewer is promoted into becoming a streamer.&lt;/p&gt;

&lt;p&gt;Given its roots in conferencing, WebRTC also supports creator collaboration out of the box. Streamers can join in from different device platforms given that WebRTC is everywhere.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The worlds of live video will merge&lt;/strong&gt;&lt;br&gt;
Given the evolution in user behavior, it is time these worlds began to merge. Streaming use-cases will leverage conferencing tech to introduce interactivity and other benefits. Conferencing will leverage streaming tech to scale video calls to many viewers in near real-time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get access&lt;/strong&gt;&lt;br&gt;
We are a team that has built live video products at companies like Disney+ and Facebook, and we are now applying that expertise to enable thousands of developers to build live video apps.&lt;/p&gt;

&lt;p&gt;We are excited by the creativity of our customers who are imagining new use-cases of live video every day. Developers and product managers are building experiences that mix the worlds of conferencing and streaming, and we are building infrastructure to enable them to do more with less.&lt;/p&gt;

&lt;p&gt;If this evolution in live video excites you and is relevant to your needs, try live streaming with 100ms. &lt;a href="https://www.100ms.live/"&gt;Sign up&lt;/a&gt; to get started and join our &lt;a href="https://100ms.live/discord"&gt;Discord community&lt;/a&gt; to connect with us. We look forward to seeing what you build.&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
    <item>
      <title>How 100ms tests for Network Reliability</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 14:37:00 +0000</pubDate>
      <link>https://dev.to/100mslive/how-100ms-tests-for-network-reliability-309i</link>
      <guid>https://dev.to/100mslive/how-100ms-tests-for-network-reliability-309i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lq8TsYUw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yg4v6q7zg95c4nj6brfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lq8TsYUw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yg4v6q7zg95c4nj6brfd.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From virtual classrooms to business meetings, shopping to dating apps, video is quickly becoming the de-facto communication mode online.&lt;/p&gt;

&lt;p&gt;Innovative developers and product thinkers are looking to create engaging live experiences in their applications. So naturally, it's critical that the audio-video SDK they build these experiences on top of provides a stable, extensible, and scalable bedrock.&lt;/p&gt;

&lt;p&gt;Among the many factors to consider before purchasing an audio/video SDK, network reliability stands out. After all, nobody enjoys delivering a twenty-minute monologue on a video call only to realize their network was down the entire time…&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Testing Network Reliability for Real-World Scenarios&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this article, we've downloaded, deployed, and tested the reliability of the &lt;a href="https://www.100ms.live/"&gt;100ms React SDK&lt;/a&gt;. To do so, we designed a series of tests that simulate common scenarios in real-life. Of course, since that's not fun enough, we decided to unleash our “full crazy” by battle testing each round against extreme conditions.&lt;/p&gt;

&lt;p&gt;The tests verify how the 100ms SDK fares across three parameters that define network reliability: low bandwidth, network blips &amp;amp; network switching.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Network Reliability Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the real world, individuals often have to deal with unstable or less-than-ideal network conditions. This happens when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;moving from one network area to another while traveling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;suddenly experiencing slow internet because of an expiring data pack&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;suddenly experiencing call disconnection for a few seconds due to issues in the larger infrastructure&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Network connectivity issues occur more often than we think. Video SDKs need to, at best, be resilient to these issues, and, at worst, provide developers with tools to deal with them gracefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment&lt;/strong&gt;&lt;br&gt;
100ms has a sample React app (100ms 2.0 Sample React App) meant to facilitate the testing of its SDK. We deployed it on &lt;a href="https://www.heroku.com/"&gt;Heroku&lt;/a&gt; and exposed it to a few commonly occurring end-user scenarios.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://github.com/100mslive/100ms-web"&gt;https://github.com/100mslive/100ms-web&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;We generated credentials from the 100ms console and then deployed this example React app on Heroku.&lt;/p&gt;

&lt;p&gt;The SDK was deployed and tested on the Chrome browser running on macOS Monterey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conditions and cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All these tests were 1:1 calls, performed with 2 people in the room. A few details about each test before we get into the results:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Low Bandwidth Test&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Network speed varies across devices. For instance, users operating on 4G mobile data often experience a volatile network, as it tends to vary in speed and stability. In this test, we checked how 100ms handles calls with varying connection speeds on low bandwidth.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Network Blip Test&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Network crises can happen in the middle of a call. In this test, we checked how 100ms handles the sudden loss of network connectivity followed by automatic reconnection.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Network Switching Test&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is common for users to switch between networks inadvertently. For example, they might be on a call while moving between state lines or from a city to the countryside, which may affect network strength.&lt;/p&gt;

&lt;p&gt;Network switching usually occurs when you move away from the range of one network to another or when you switch between your available networks for a higher speed. In this test, we checked how 100ms handles a network switch.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;1. Low Bandwidth Handling/Management Test&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Audio/Video applications need to handle usage across varying network bandwidths. In this section, we monitor how 100ms handles calls for users with low bandwidth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Methodology for the Low Bandwidth Test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We used Network Link Conditioner to emulate different network conditions. We set the ideal resolution to 640x360, and tested the app on 4 different configurations: 300 Kbps, 500 Kbps, 800 Kbps, and 1 Mbps, switching from one to another in the middle of a call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test Results&lt;/strong&gt;&lt;br&gt;
The 100ms SDK handles the drop in bandwidth by prioritizing audio/video upload for other peers instead of audio/video download.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If the network is adequate (800 Kbps), the video of active or recent speakers continues to be visible. The audio remains perfectly functional.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the network is poor, only peer audio is functional while their video degrades.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the device facing poor network conditions, the video is somewhat degraded but not entirely non-functional. At lower bandwidths (500 Kbps and 300 Kbps), audio quality remains functional for all other peers in the meeting and only sees a drop for the attendee experiencing bandwidth constraints.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wrMUfJFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7295o98et182cw7rnhbw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wrMUfJFK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7295o98et182cw7rnhbw.png" alt="Image description" width="519" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/1q3Q1g-Ibkc"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;2. Network Blip Test&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we check how 100ms handles call connectivity when a user’s network connection gets switched off, or drops for several seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Methodology for the Network Blip Test&lt;/strong&gt;&lt;br&gt;
First, we check the call by switching off the internet connection for 10 seconds. This is done by turning off the connected Wi-Fi network from the menu bar and then turning it back on.&lt;/p&gt;

&lt;p&gt;Then, we iteratively repeat the same test for 20, 30, 45, and 60 seconds. While doing so, we observe the state of the call connection and how the app behaves during disconnection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test Results&lt;/strong&gt;&lt;br&gt;
The 100ms SDK reconnects every time the internet is disabled for 10, 20, or 30 seconds. When it is switched off for 45 or 60 seconds, the app tries to reconnect for 35s before disconnecting entirely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XT3SzeVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoc16312y4yrpet3px2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XT3SzeVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoc16312y4yrpet3px2y.png" alt="Image description" width="515" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/4fgVMhAcQLw"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;
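&lt;p&gt;The reconnect-then-give-up behavior observed above can be sketched as a deadline-based retry loop. The 2-second retry interval and 35-second deadline here are illustrative, not the SDK’s actual internals:&lt;/p&gt;

```python
# Sketch of a deadline-based reconnection policy: keep retrying until a
# deadline elapses, then give up. Interval and deadline values are
# illustrative only. clock/sleep are injectable so the loop is testable.
import time

def reconnect(try_connect, deadline_s=35.0, interval_s=2.0,
              clock=time.monotonic, sleep=time.sleep):
    """Retry try_connect() until it succeeds or deadline_s elapses."""
    start = clock()
    while clock() - start < deadline_s:
        if try_connect():
            return True   # network came back within the window
        sleep(interval_s)
    return False          # give up: disconnect entirely
```

&lt;p&gt;A blip shorter than the deadline (the 10–30 second cases above) resolves inside the loop; a 45–60 second outage exhausts it, matching the observed disconnect.&lt;/p&gt;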
&lt;h2&gt;
  
  
  &lt;strong&gt;3. Network Switching Test&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Apps are often exposed to different network conditions in the real world. In this case, we’ve tested how the 100ms SDK reacts when the app moves from one network strength to another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Methodology for the Network Switching Test&lt;/strong&gt;&lt;br&gt;
This test checks how 100ms handles the connection when switching from one network to another. We tested the app on three networks: 2.4 GHz and 5 GHz Wi-Fi from the same router, and a mobile hotspot.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To start the call, we connected to the 2.4 GHz Wi-Fi network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, we switched from the 2.4 GHz to the 5 GHz Wi-Fi.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, we switched back to the 2.4 GHz Wi-Fi.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then, we repeated the same process, switching between the 2.4 GHz Wi-Fi and the mobile hotspot.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We waited for the call to reconnect during every network switch and monitored the time (in seconds) it took for the reconnection to occur.&lt;/p&gt;



&lt;p&gt;Some of the flawed behavior in the ‘2.4 GHz Wi-Fi to Hotspot’ test section might be due to the unstable 4G network connection we experienced while testing.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Test Results&lt;/strong&gt;&lt;br&gt;
The 100ms SDK manages to reconnect after every network switch. Sometimes the video reconnects after the audio. The average reconnection time when switching within the same network is 9.1s for audio and 10s for video. The time for reconnection between 2 different networks is 19.2s for audio and 13.8s for video.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6k9v1rRf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hui78ksn66rc7upd4fo5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6k9v1rRf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hui78ksn66rc7upd4fo5.png" alt="Image description" width="516" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/0Dz8mRmhR5U"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Closing Notes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Given the centrality of reliability when it comes to choosing an audio-video SDK, we decided to lay all our cards on the table and reveal exactly how we fare in diverse network, bandwidth, and end-user circumstances. Across all tests, 100ms fared well under regular usage conditions. In some cases, like bandwidth drops, the SDK allows for graceful handling of degradation issues.&lt;/p&gt;

&lt;p&gt;Of course, as an SDK provider, we pride ourselves on making 100ms ever more bullet-proof, so we can’t wait to solve elegantly for all these conditions and meet you again with even more aggressive scenarios.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>react</category>
      <category>testing</category>
    </item>
    <item>
      <title>HLS 101: What it is, How it works &amp; When to use it</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 13:06:18 +0000</pubDate>
      <link>https://dev.to/100mslive/hls-101-what-it-is-how-it-works-when-to-use-it-4o1g</link>
      <guid>https://dev.to/100mslive/hls-101-what-it-is-how-it-works-when-to-use-it-4o1g</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IJ79JkZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3eg1hps3d0ddvbk6f0ud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IJ79JkZ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3eg1hps3d0ddvbk6f0ud.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you consume online content (and if you are alive in 2022, you probably do), chances are that you’ve watched quite a few live streams. Be it online classes, sporting events, fitness lessons, or celebrity interactions, live streaming has quickly become the go-to source of learning and entertainment.&lt;/p&gt;

&lt;p&gt;Live stream viewers comprised over a third of all internet users in March and April 2020, with 1 in 10 people in the US and UK streaming live content of their own. By 2022, streaming video is expected to account for almost 82% of all internet use.&lt;/p&gt;

&lt;p&gt;A vast majority of live streaming applications are built on a protocol called HTTP Live Streaming, or HLS. In fact, if you’ve ever watched an Instagram live stream or tuned into the Super Bowl on the NBC Sports App, chances are, you’ve been touched by the magical hands of HLS.&lt;/p&gt;

&lt;p&gt;So if you are looking to build that kind of sophisticated live streaming experience inside your app, this article should give you a comprehensive understanding of the HLS protocol and everything in it.&lt;/p&gt;

&lt;p&gt;Read on to learn the basics of HLS, what it is, how it works, and why it matters for live streamers, broadcasters, and app developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is HLS?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;HLS stands for HTTP Live Streaming. It is a media streaming protocol designed to deliver audio-visual content to viewers over the internet. It facilitates content transportation from media servers to viewer screens — mobile, desktop, tablets, smart TVs, etc.&lt;/p&gt;

&lt;p&gt;Created by Apple, HLS is widely used for distributing live and on-demand media files. For anyone who wants to adaptively stream to Apple devices, HLS is the only option. In fact, if you have an App Store app that offers video content longer than 10 minutes or larger than 5 MB, HLS is mandatory. You also have to provide at least one stream that is 64 Kbps or lower.&lt;/p&gt;

&lt;p&gt;Bear in mind, however, that even though HLS was developed by Apple, it is now the most preferred protocol for distributing video content across platforms, devices, and browsers. HLS enjoys broad support among most streaming and distribution platforms.&lt;/p&gt;

&lt;p&gt;HLS allows you to distribute content and ensure excellent viewing experiences across devices, playback platforms, and network conditions. It is the ideal protocol for streaming video to large audiences scattered across geographies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A little history&lt;/strong&gt;&lt;br&gt;
HLS was originally created by Apple to stream to iOS devices, Apple TV, and Macs running OS X Snow Leopard and later.&lt;/p&gt;

&lt;p&gt;In the early days of video streaming, the Real-Time Messaging Protocol (RTMP) was the de-facto standard for streaming video over the internet. However, with the emergence of HTML5 players that supported only HTTP-based protocols, RTMP became inadequate for streaming.&lt;/p&gt;

&lt;p&gt;With the rising dominance of mobile and IoT in the last decade, RTMP took a hit due to its inability to support native playback on these platforms. The Flash Player had to give ground to HTML5, which resulted in a decline in Flash support across clients. This further contributed to RTMP’s unsuitability for modern video streaming.&lt;/p&gt;

&lt;p&gt;Read More: &lt;a href="https://www.100ms.live/blog/rtmp-vs-webrtc-vs-hls-live-streaming-protocols"&gt;RTMP vs WebRTC vs HLS: Battle of the Live Video Streaming Protocols&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 2009, Apple developed HLS, designed to focus on the quality and reliability of video delivery. It was an ideal solution for streaming video to devices with HTML5 players. Its rising popularity also had much to do with its unique features, listed below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Adaptive Bitrate Streaming&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Embedded closed captions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fast forward and rewind&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Timed metadata&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dynamic Ad insertion&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Digital Rights Management (DRM)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How does HLS work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;HLS has become the default way to play video on demand. Here’s how it works: HLS takes one big video and breaks it into smaller segments (video files) of a fixed target duration, in line with Apple’s recommendations.&lt;/p&gt;

&lt;p&gt;Here’s an example:&lt;/p&gt;

&lt;p&gt;Let’s say there is a one-hour-long video, which has been broken into 10-second segments. You end up with 360 segments. Each segment is a video file ending with .ts. For the most part, they are numbered sequentially, so you end up with a directory that looks as seen below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XJNjk3Jd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n94gbqqsk7p23oatey0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XJNjk3Jd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n94gbqqsk7p23oatey0s.png" alt="Image description" width="707" height="137"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The video player downloads and plays each segment as the user is streaming the video. The size of the segments can be configured to be as low as a couple of seconds. This makes it possible to minimize latency for live buffering use cases. The video player also keeps a cache of these segments in case it loses network connection at some point.&lt;/p&gt;
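&lt;p&gt;The segmenting arithmetic above, and the index the player downloads, can be sketched in a few lines. The filenames and the 10-second duration are illustrative:&lt;/p&gt;

```python
# Sketch: a 1-hour video cut into 10-second segments yields 360 .ts files,
# listed in a media playlist (index file) that the player downloads.
# Filenames are illustrative.

VIDEO_SECONDS = 3600
SEGMENT_SECONDS = 10

num_segments = VIDEO_SECONDS // SEGMENT_SECONDS  # 360 .ts files

playlist = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    f"#EXT-X-TARGETDURATION:{SEGMENT_SECONDS}",
    "#EXT-X-MEDIA-SEQUENCE:0",
]
for i in range(num_segments):
    playlist.append(f"#EXTINF:{SEGMENT_SECONDS}.0,")   # duration of this segment
    playlist.append(f"segment{i:03d}.ts")              # sequentially numbered file
playlist.append("#EXT-X-ENDLIST")                      # on-demand: list is complete
```

&lt;p&gt;The player fetches this playlist first, then requests segment files one by one as playback progresses.&lt;/p&gt;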

&lt;p&gt;HLS also allows you to create each video segment at different resolutions/bitrates. Take the example above; in this case, HLS lets you create:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D6KFa8jS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p29d0j8t9mxzlnd84gbg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D6KFa8jS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p29d0j8t9mxzlnd84gbg.png" alt="Image description" width="707" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s what the directory looks like now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZB2akarV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzc2162yw6u0hl5ohanx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZB2akarV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qzc2162yw6u0hl5ohanx.png" alt="Image description" width="700" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once these segments are created at different bitrates, the video player can actually choose which segments to download and play, depending on the network strength and bandwidth available. That means if you are watching the stream at lower bandwidth, the player picks and plays video segments at 360p. If you have a stronger internet connection, you get the segments at 1080p.&lt;/p&gt;

&lt;p&gt;In the real world, that means the video doesn’t get stuck; it just plays at different quality levels. This is called Adaptive Bitrate Streaming (ABR).&lt;/p&gt;
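&lt;p&gt;The multi-bitrate setup that makes ABR possible can be sketched as a master playlist pointing at the same content encoded at several renditions. The bandwidths, resolutions, and URIs below are illustrative:&lt;/p&gt;

```python
# Sketch: build an HLS master playlist advertising the same content at
# several bitrates, which is what lets the player switch quality under
# ABR. All variant values below are illustrative.

VARIANTS = [
    (800_000, "640x360", "360p/playlist.m3u8"),
    (1_400_000, "1280x720", "720p/playlist.m3u8"),
    (2_800_000, "1920x1080", "1080p/playlist.m3u8"),
]

def master_playlist(variants):
    """Emit one #EXT-X-STREAM-INF entry per rendition."""
    lines = ["#EXTM3U"]
    for bandwidth, resolution, uri in variants:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(uri)
    return "\n".join(lines)
```

&lt;p&gt;The player reads this playlist once, then keeps picking whichever rendition its measured bandwidth can sustain, segment by segment.&lt;/p&gt;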

&lt;p&gt;&lt;strong&gt;What Adaptive Bitrate Streaming means in the real world&lt;/strong&gt;&lt;br&gt;
Imagine you’re streaming the Super Bowl live on your phone (because you just had to drive out of town that day). Just as the Rams are racing towards their winning touchdown, you hit a spot of questionable network in the Nevada desert.&lt;/p&gt;

&lt;p&gt;You’d think that means the livestream would basically stop working because your network strength has dropped. But, thanks to ABR, that wouldn’t be the case.&lt;/p&gt;

&lt;p&gt;Instead of ceasing to work, the stream would simply adjust itself to the current network. Let’s say you were watching the stream at 720p. Now, you’d get the same stream at 240p. That means, even though there is a drop in video quality, you would still be able to see Cooper Kupp score his MVP-winning touchdown. HLS would enable this automatically, simply by adjusting to a lower-quality broadcast to match your network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HLS Streaming Components&lt;/strong&gt;&lt;br&gt;
Three major components facilitate an HLS stream:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Media Server,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Content Delivery Network, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Client-side Video Player&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x-DfAa2R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jc0wnt7qomjg5a3qxkro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x-DfAa2R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jc0wnt7qomjg5a3qxkro.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HLS Server (Media Server)&lt;/strong&gt;&lt;br&gt;
Once audio/video has been captured by input devices like cameras and microphones, it is encoded into a format that video players can translate and utilize: H.264 for video and AAC or MP3 for audio.&lt;/p&gt;

&lt;p&gt;The video is then sent to the HLS server (sometimes called the HLS streaming server) for processing. The server performs all the functions we’ve mentioned — segmenting video files, adapting segments for different bitrates, and packaging files into a certain sequence. It also creates index files that carry data about the segments and their playback sequence. This is information the video player will need to play the video content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Content Delivery Network (CDN)&lt;/strong&gt;&lt;br&gt;
With the volume of video content to store, queue, and process, a single video server responding to requests from multiple devices would likely come under immense stress, slow down, and possibly crash. Content Delivery Networks (CDNs) prevent this.&lt;/p&gt;

&lt;p&gt;A CDN is a network of interconnected servers placed across the world. The main criterion for distributing cached content (video segments in this case) is the server’s proximity to the end-user. Here’s how it works:&lt;/p&gt;

&lt;p&gt;A viewer presses the play button, and their device requests the content. The request is routed to the closest server in the CDN. If this is the first time that particular video segment has been requested, the CDN will push the request to the origin server where the original segments are stored. The origin server responds by sending the requested file to the CDN server.&lt;/p&gt;

&lt;p&gt;Now, the CDN server will not only send the requested file to the viewer but also cache a copy of it locally. When other viewers (or even the same one) request the same video, the request no longer goes to the origin server. The cached files are sent from the local CDN server.&lt;/p&gt;

&lt;p&gt;CDN servers are spread across the globe. This means requests for content do not have to travel across countries and continents to the origin server every time someone wants to watch a show.&lt;/p&gt;
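
&lt;p&gt;The edge-caching behavior described above can be sketched in a few lines of Python. This is a toy model; real CDNs also handle cache expiry, purging, and tiered caches:&lt;/p&gt;

```python
# Minimal sketch of CDN edge caching: serve a segment from the local
# cache when possible, and go to the origin only on a cache miss.
class EdgeServer:
    def __init__(self, origin):
        self.origin = origin          # maps segment name to bytes
        self.cache = {}

    def get(self, segment_name):
        if segment_name in self.cache:
            return self.cache[segment_name], "cache hit"
        data = self.origin[segment_name]   # round trip to the origin
        self.cache[segment_name] = data    # keep a local copy
        return data, "cache miss"

edge = EdgeServer(origin={"seg1.ts": b"video-bytes"})
print(edge.get("seg1.ts")[1])   # first request: "cache miss"
print(edge.get("seg1.ts")[1])   # repeat request: "cache hit"
```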

&lt;p&gt;&lt;strong&gt;HTML5 Player&lt;/strong&gt;&lt;br&gt;
To view the video files, end-users need an HTML5 player on a compatible device. Ever since Adobe Flash passed into the tech graveyard, HLS has become the default delivery protocol. Getting a compatible player won’t be a challenge since most browsers and devices support HLS by default.&lt;/p&gt;

&lt;p&gt;However, HLS does provide advanced features which some players may not support. For example, certain video players may not support captions, DRM, ad injection, thumbnail previews, and the like. If these features are important to you, make sure whichever player you choose supports them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Resolving Latency Issues in HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Until 2016, Apple recommended 10-second segments, and the spec had players load three video segments before playback could start. With 10-second segments, content distributors suffered roughly 30 seconds of latency before playback could begin. Apple did eventually cut the recommended duration to 6 seconds, but that still left streamers and broadcasters with noticeable latency. Ever since, reducing segment size has been a popular way to drive down latency. By ‘tuning’ HLS with shorter chunks, you can accelerate download times, which speeds up the whole pipeline.&lt;/p&gt;
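
&lt;p&gt;As a back-of-the-envelope sketch (assuming the classic three-segment startup buffer), the effect of segment duration on startup latency is easy to see:&lt;/p&gt;

```python
# Rough startup latency for traditional HLS: the player buffers three
# segments before playback begins, so latency scales with segment size.
def startup_latency(segment_seconds, segments_buffered=3):
    return segment_seconds * segments_buffered

for seg in (10, 6, 2):
    print(f"{seg}s segments: about {startup_latency(seg)}s before playback")
```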

&lt;p&gt;In 2019, Apple released its own extension of HLS called Low Latency HLS (LL-HLS), often referred to as Apple Low Latency HLS (ALHLS). The new standard not only came with significantly lower latency but was also compatible with Apple devices. Naturally, this made LL-HLS a massive success, and it has been widely adopted across platforms and devices.&lt;/p&gt;

&lt;p&gt;LL-HLS comes with two major changes to its spec which are largely responsible for reducing latency. One is to divide the segments into parts and deliver them as soon as they’re available. The other is to ensure that the player has data about the upcoming segments even before they are loaded.&lt;/p&gt;

&lt;p&gt;A detailed breakdown of LL-HLS is beyond the scope of this article. However, you can find a structured deep-dive into the protocol in this Introduction to Low Latency Streaming with HLS.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;When to use HLS Streaming&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When delivering high-resolution video files larger than 3MB: Without HLS, viewing such content usually leads to sub-par user experiences, especially when the user is on an average internet or mobile connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When broadcasting live video from one to millions to reach the broadest audience possible: Not only is HLS supported by most browsers and operating systems, but it also offers ABR, which allows content to be viewed at different network speeds (Cellular, 3G, 4G, LTE, WIFI Low, WIFI High).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When reducing overall costs: HLS reduces CDN costs by delivering the video at optimal bitrate to viewers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When using advanced features in your stream: With HLS, you can leverage ad insertion, DRM, closed captions, adaptive bitrate, and much more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you expect your audience to use Apple devices: HLS enjoys support across devices, and Apple devices favor HLS over MPEG-DASH and other alternatives. Additionally, apps on the App Store that deliver videos longer than 10 minutes are required to use HLS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When you are concerned with security: HLS powers video on demand with encryption (DRM), which helps reduce and avoid piracy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Major Advantages of HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Transcoding for Adaptive Bitrate Streaming&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’ve already explained what ABR is in a previous section. Transcoding here refers to altering video content from one bitrate to another. In the above example, video segments are converted to 1080p, 720p, and 360p from a single, high-resolution stream.&lt;/p&gt;

&lt;p&gt;In the HLS workflow, the video travels from the origin server to a streaming server with a transcoder. The transcoder creates multiple versions of each video segment at different bitrates. The video player picks which version works best with the end-user’s internet and delivers the video accordingly.&lt;/p&gt;
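
&lt;p&gt;The player-side decision can be sketched as follows. This is a simplified model with illustrative rendition labels and bitrates; real players also factor in buffer health and screen size:&lt;/p&gt;

```python
# Simplified player-side ABR decision: choose the highest-bitrate
# rendition that fits within the measured bandwidth. The labels and
# bitrates below are illustrative, not taken from any spec.
RENDITIONS = [          # (label, bitrate in kbps), lowest first
    ("360p", 800),
    ("720p", 2800),
    ("1080p", 5000),
]

def pick_rendition(bandwidth_kbps):
    chosen = RENDITIONS[0]          # always fall back to the lowest
    for label, bitrate in RENDITIONS:
        # min(bitrate, bandwidth) == bitrate means the bitrate fits
        if min(bitrate, bandwidth_kbps) == bitrate:
            chosen = (label, bitrate)
    return chosen

print(pick_rendition(3500))   # enough bandwidth for 720p, not 1080p
print(pick_rendition(500))    # falls back to the lowest rendition
```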

&lt;p&gt;&lt;strong&gt;Delivery and Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With HLS, it is much easier to broadcast live video from one to millions. This is because most browsers and OSes support HLS.&lt;/p&gt;

&lt;p&gt;Since HLS can use web servers in a CDN to push media, the digital load is distributed among HTTP server networks. This makes it easy to cache audio-video chunks, which can be delivered to viewers across all locations. As long as they are close to a web server, they can receive video content.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Does 100ms support HLS for live streaming?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;100ms supports live streaming output via HLS. However, we use WebRTC as input for the stream, unlike services that offer the infrastructure for live streaming alone, which generally use RTMP.&lt;/p&gt;

&lt;p&gt;The 100ms live streaming stack combines streaming with conferencing, enabling our customers to build more engaging live streams that support multiple broadcasters, interactivity between broadcaster and viewer, and easy streaming from mobile devices. Since the entire audio/video SDK is packaged into one product, broadcasters can freely toggle between HLS streams and WebRTC, thus allowing two-way interaction while live streaming.&lt;/p&gt;

</description>
      <category>hls</category>
      <category>livestreaming</category>
      <category>developers</category>
      <category>videostreaming</category>
    </item>
    <item>
      <title>Introduction to Low Latency Streaming with HLS</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 12:19:01 +0000</pubDate>
      <link>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-odh</link>
      <guid>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-odh</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O9MZ4813--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/382c2zhmwbnoiw09ahvm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O9MZ4813--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/382c2zhmwbnoiw09ahvm.png" alt="Image description" width="880" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether it’s a World Cup match, the Super Bowl, or the French Open finals, watching it with your friends on a Saturday night is #goals. Sadly, not all of us can get tickets and travel across cities, countries, or continents to attend them. Thankfully, live streaming makes it possible to watch all the action, close to real-time.&lt;/p&gt;

&lt;p&gt;But, the only question is “how close to real-time are we talking?”&lt;/p&gt;

&lt;p&gt;Video streaming is largely facilitated on the back of a video protocol called HLS (HTTP Live Streaming). While the origins and fundamentals of HLS are explained in another piece on our blog, the current piece will focus on how HLS resolved one of its greatest shortcomings: latency.&lt;/p&gt;

&lt;p&gt;To start with, let’s take a quick peek at how HLS works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Way of the HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will first try to understand how HLS works and makes live streaming possible. This is what the typical flow of an HLS streaming system looks like:&lt;/p&gt;

&lt;p&gt;The audio/video stream captured by input devices is encoded and ingested into a media server.&lt;/p&gt;

&lt;p&gt;The media server transcodes the stream into an HLS-compatible format with multiple ABR variants and also creates a playlist file to be used by the video players.&lt;/p&gt;

&lt;p&gt;Then, the media server serves the media and the playlist file to the clients, either directly or via CDNs by acting as an origin server.&lt;/p&gt;

&lt;p&gt;The players, on the client end, make use of the playlist file to navigate through the video segments. These segments are typically “slices” of the video being generated, with a definite duration (called segment size, usually 2 to 6 seconds).&lt;/p&gt;

&lt;p&gt;The playlist is refreshed based on segment size and players can select the segments specified in them, based on the order of playback and the video quality they require.&lt;/p&gt;
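
&lt;p&gt;A minimal sketch of how a player might read such a playlist, assuming a bare-bones media playlist (real playlists carry many more tags):&lt;/p&gt;

```python
# Sketch of how a player reads an HLS media playlist: each EXTINF line
# gives a segment's duration, and the following line gives its URI.
PLAYLIST = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
segment0.ts
#EXTINF:6.0,
segment1.ts
#EXT-X-ENDLIST"""

def parse_segments(text):
    segments, duration = [], None
    for line in text.splitlines():
        if line.startswith("#EXTINF:"):
            duration = float(line.split(":")[1].rstrip(","))
        elif duration is not None and not line.startswith("#"):
            segments.append((line, duration))
            duration = None
    return segments

print(parse_segments(PLAYLIST))
```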

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FcVNzuUN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dui54vj45au9uxy2t50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FcVNzuUN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5dui54vj45au9uxy2t50.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though HLS offers a reliable way of streaming video, its high latency levels may pose obstacles for many streamers and video distributors. According to the initial specification, a player should load the media files in advance before playing them. This makes HLS an inherently higher-latency protocol, with a latency of about 30 to 60 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tuning HLS for Low Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Everyone was interested in implementing HLS, but the high latency was a serious roadblock. So, devs and enthusiasts started to find workarounds to reduce latency and refine the protocol for effective usage. Some of these practices offered such positive results that they became a de facto standard alongside the HLS specification. Two of these practices are listed below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reducing the default segment size&lt;/strong&gt;&lt;br&gt;
When Apple introduced HLS, the typical segment size was 10 seconds. Most HLS implementers found this too long, which is why Apple reduced the recommendation to 6 seconds. The overall latency can be reduced further by shrinking both the segment size and the buffer size of the player.&lt;/p&gt;

&lt;p&gt;However, this carries its own issues, including a higher overall bitrate and buffering or jitter for devices with inferior network conditions. The ideal segment size should be decided based on the target audience and typically falls in the range of 2 to 4 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Media Ingest with faster protocols&lt;/strong&gt;&lt;br&gt;
The main reason HLS is used for live streaming is the scalability, reliability and player compatibility it provides across all platforms, especially when compared to other protocols. This has made HLS irreplaceable for video delivery so far.&lt;/p&gt;

&lt;p&gt;But the first mile contribution (also known as ingest) from the HLS stack can be replaced with lower latency protocols to reduce overall latency.&lt;/p&gt;

&lt;p&gt;The HLS ingest is usually replaced by RTMP ingest, which enjoys wide support for encoders/services and has proved to be a cost-effective solution. The stream ingested with RTMP is then transcoded to support HLS with the help of a media server before serving the content. Even though there have been experiments with other protocols such as WebRTC, SRT for the ingest part, RTMP remains the most popular option.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Evolution of HLS to LL-HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The latency in HLS started posing a significant hurdle, leading to less than stellar user experiences. This was becoming more frequent since HLS was being widely adopted around the world. Tuning HLS wasn’t enough and everyone was looking for better and more sustainable solutions.&lt;/p&gt;

&lt;p&gt;It was in 2016 that Twitter’s Periscope engineering team made some major changes to their implementation in order to achieve low latency with HLS. This proprietary version of HLS, often referred to as LHLS, offered latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;DASH, the main competitor to HLS, came up with a low latency solution based on chunked CMAF in 2017, following which a community-based low latency HLS solution (L-HLS) was drafted in 2018. This variant was heavily inspired by Periscope’s LHLS and leveraged Chunked Transfer Encoding (CTE) to reduce latency. It is often referred to as Community Low Latency HLS (CL-HLS).&lt;/p&gt;

&lt;p&gt;While this version of HLS was gaining popularity, Apple decided to release their own extension of the protocol called Low Latency HLS (LL-HLS) in 2019. This is often referred to as Apple Low Latency HLS (ALHLS). This version of HLS offered low latency comparable to the CL-HLS and promised compatibility with Apple devices. Since then, LL-HLS has been merged into the HLS specification and has technically become a single protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How LL-HLS reduces Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we’ll explore the changes LL-HLS brings to HLS to make low latency streaming possible. The protocol came with two main changes to its spec, which are responsible for its low latency. One is to divide the segments into parts and deliver them as soon as they’re available. The other is to inform the player about the data to be loaded next before that data is even available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dividing Segments into Parts&lt;/strong&gt;&lt;br&gt;
The video segments are further divided into parts (similar to the chunks used in CMAF). These parts are just “smaller segments” with a definite duration, represented with the EXT-X-PART tag in the media playlist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ov0Hi5-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll2cr5r381bmg3ekqvqd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ov0Hi5-s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ll2cr5r381bmg3ekqvqd.png" alt="Image description" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because parts are published while a segment is still being generated, players can fill up their buffers more efficiently. Reducing the player-side buffer size with this approach results in reduced latency. Once a segment is complete, its parts are collectively replaced by the full segment, which remains available for a longer period of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Preload Hints&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When LL-HLS was first introduced, it had HTTP/2 push specified as a requirement on the server side for sending new data to clients. Many commercial CDN providers were not supporting this feature at the time, which resulted in a lot of confusion.&lt;/p&gt;

&lt;p&gt;This issue was addressed by Apple in a subsequent update, replacing the HTTP/2 push with preload hints. They decided to include support for preload hints by adding a new tag EXT-X-PRELOAD-HINT to the playlist, reducing overhead.&lt;/p&gt;

&lt;p&gt;With the help of a preload hint, a video player can anticipate the data to be loaded next and send a request to the URI in the hint to gain faster access to the next part. The server holds requests for the hinted data and responds as soon as the data becomes available, thus reducing latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A look at the LL-HLS Media Playlist&lt;/strong&gt;&lt;br&gt;
Now, let’s take a look at how these tags are specified in the media playlist file, using an example. We will assume the segment size to be 6 seconds and the part size to be 200 milliseconds. We will also assume that 2 segments (segment A and B) have been completely played, while the 3rd segment (segment C) is still being generated. This segment is being published as a list of parts in the order of playback because it has not yet been completed.&lt;/p&gt;

&lt;p&gt;The following is a sample media playlist (M3U8 file).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kwjoXPJX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81dhz1ap95iudc2kjyd0.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kwjoXPJX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/81dhz1ap95iudc2kjyd0.PNG" alt="Image description" width="707" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Players that don’t support LL-HLS yet tend to ignore tags like EXT-X-PART and EXT-X-PRELOAD-HINT, enabling them to treat the playlist as traditional HLS and load whole segments at higher latency.&lt;/p&gt;
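
&lt;p&gt;The fallback behavior can be sketched with a hypothetical playlist fragment (the tag values below are illustrative, not taken from a real stream):&lt;/p&gt;

```python
# Hypothetical LL-HLS playlist fragment. Per the HLS spec, a legacy
# player skips the part-level tags it does not recognize, falling
# back to whole segments at higher latency.
LL_PLAYLIST = [
    "#EXTINF:6.0,",
    "segmentB.mp4",
    '#EXT-X-PART:DURATION=0.200,URI="segC.part0.mp4"',
    '#EXT-X-PART:DURATION=0.200,URI="segC.part1.mp4"',
    '#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segC.part2.mp4"',
]

LEGACY_KNOWN_TAGS = ("#EXTINF",)

def visible_to_legacy_player(lines):
    kept = []
    for line in lines:
        if line.startswith("#") and not line.startswith(LEGACY_KNOWN_TAGS):
            continue          # unknown tag: ignored per the HLS spec
        kept.append(line)
    return kept

print(visible_to_legacy_player(LL_PLAYLIST))
```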

&lt;h2&gt;
  
  
&lt;strong&gt;Low-Latency HLS on non-Apple devices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The new and improved HLS has a latency of about 3 seconds or less. The only reasonable competition for this protocol is LL-DASH, but Apple does not natively support DASH on its devices. This makes LL-HLS the only low latency live streaming protocol with wide client-side support, including Apple devices.&lt;/p&gt;

&lt;p&gt;One of the main advantages of using LL-HLS is its backward compatibility with legacy HLS players: players that don’t support this variant may fall back to standard HLS and still work, albeit with higher latency. However, because the protocol requires players to start loading unfinished media segments instead of waiting until they become fully available, the spec changes made it difficult for all players to adapt quickly.&lt;/p&gt;

&lt;p&gt;It took a while for most non-Apple devices to start supporting LL-HLS. Now, it is widely supported across almost all platforms in relatively new versions of players. Some have been working on support since the protocol’s inception, while most implementations are newer and still improving their compatibility.&lt;/p&gt;

&lt;p&gt;Here are some popular players from different platforms that support LL-HLS in its entirety:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AVPlayer (iOS)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exoplayer (Android)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;THEOPlayer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JWPlayer&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;HLS.js&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;VideoJS&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AgnoPlay&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Comparing LL-HLS, LL-DASH and WebRTC&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here, we compare three protocols, LL-HLS, LL-DASH and WebRTC, on six parameters: compatibility, delivery method, support for ABR, security, latency, and best use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compatibility&lt;/strong&gt;&lt;br&gt;
LL-HLS provides good support for all Apple devices and browsers. It has been gaining support for most non-Apple devices.&lt;br&gt;
LL-DASH supports most non-Apple devices and browsers but is not supported on any Apple device.&lt;br&gt;
WebRTC is supported across all popular browsers and platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delivery Method&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, let’s go through a few relevant terms used with CMAF.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chunked Encoding (CE)&lt;/strong&gt; is a technique used for making publishable “chunks”. When added together, these chunks create a video segment. Chunks have a set duration and are the smallest unit that can be published.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chunked Transfer Encoding (CTE)&lt;/strong&gt; is a technique used to deliver the “chunks” as they are created in a sequential order. With CTE, one request for a segment is enough to receive all its chunks. The transmission ends once a zero-length chunk is sent. This method allows even small chunks to be used for transfer.&lt;/p&gt;
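
&lt;p&gt;For illustration, the framing CTE relies on (the chunked transfer coding from HTTP/1.1) can be sketched like this:&lt;/p&gt;

```python
# Sketch of HTTP/1.1 chunked transfer framing: each chunk is prefixed
# with its size in hex, and a zero-length chunk ends the transmission.
def frame_chunk(payload):
    return f"{len(payload):x}\r\n".encode() + payload + b"\r\n"

def frame_stream(chunks):
    body = b"".join(frame_chunk(c) for c in chunks)
    return body + b"0\r\n\r\n"   # zero-length chunk terminates

frames = frame_stream([b"part-one", b"part-two"])
print(frames)
```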

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;LL-HLS uses Chunked Encoding to create “parts” or “chunks” of a segment. But, instead of using Chunked Transfer Encoding, this protocol uses its own method of delivering chunks over TCP. The client has to make a request for every single part, instead of just requesting the whole segment and receiving it in parts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LL-DASH uses Chunked Encoding for creating chunks and Chunked Transfer Encoding for delivering them over TCP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WebRTC uses Real-time Transfer Protocol (RTP) for sending video and audio streams over UDP.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Support for Adaptive Bitrate (ABR)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Adaptive Bitrate (ABR) is a technique for dynamically adjusting the compression level and video quality of a stream to match bandwidth availability. It heavily impacts the video streaming experience for the viewer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;LL-HLS has support for ABR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;LL-DASH has support for ABR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;WebRTC doesn’t support ABR. But, a similar technique called Simulcast is used for dynamically adjusting video quality.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH support media encryption and benefit from security features such as token authentication and digital rights management (DRM).&lt;/p&gt;

&lt;p&gt;WebRTC supports end-to-end encryption of media in transit, along with user, file, and round-trip authentication. This is often sufficient for DRM purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH have a latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;WebRTC, on the other hand, has a sub-second latency of ~500 milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Use Case&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH are best suited for live streaming events that need to be delivered to millions of viewers. They are often used for streaming sporting events live.&lt;/p&gt;

&lt;p&gt;WebRTC is very frequently used for solutions such as video conferencing that require minimal latency and are not expected to scale to large audiences.&lt;/p&gt;

&lt;p&gt;Now that HLS supports low latency streaming, it is all set to conquer the video streaming space, ready to serve millions of fans watching their favourite team play a crucial match without any issues. Whether you want to start live streaming yourself or build an app that facilitates live streaming, LL-HLS remains your best friend.&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>videostreaming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Roles on 100ms: Mapping real-world interactions to live video with a few clicks</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Tue, 09 Aug 2022 07:02:00 +0000</pubDate>
      <link>https://dev.to/100mslive/roles-on-100ms-mapping-real-world-interactions-to-live-video-with-a-few-clicks-461o</link>
      <guid>https://dev.to/100mslive/roles-on-100ms-mapping-real-world-interactions-to-live-video-with-a-few-clicks-461o</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AeW9YNva--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zd686pxudggywl8xj1zd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AeW9YNva--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zd686pxudggywl8xj1zd.jpg" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this world's stage, we all have a role to play. Turns out, the same is true for the world of live, interactive video applications.&lt;/p&gt;

&lt;p&gt;If you're building a live video app, a meeting room will contain people who will need to perform different functions. For example, in a virtual classroom, the teacher is able to display their video and audio, as well as share their screen. Depending on the app, they can also display a student's screen to the rest of the class, allow the student to address the class, and more.&lt;/p&gt;

&lt;p&gt;All this while, the student may only be able to watch and listen to the teacher's video. They may only be able to share their screen or speak when the teacher allows them to, reducing interruptions or chaos in an ongoing class - especially if the class is large.&lt;/p&gt;

&lt;p&gt;However, building these permissions (what participants can and cannot do within meetings) is difficult because they usually require coding and implementation effort to set up. The more nuanced and varied the permissions in an app, the more effort devs have to expend to ensure that the final app offers the exact features required by end-users.&lt;/p&gt;

&lt;p&gt;At 100ms, we call these permissions "roles". The teacher's role (in the above example) is to share audio/video, share their screen, allow students to ask questions or address the class, and more. The student's is to view the teacher's video, listen to their audio and perhaps share their screen, ask questions, and speak to the class - when the teacher enables them to.&lt;/p&gt;

&lt;p&gt;This article will delve into what these "roles" are, how they make your life easier, and how they are an improvement upon the industry standard "publish/subscribe" logic.&lt;/p&gt;

&lt;p&gt;But, let's start with the obvious question.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is a role?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before answering this question, we want to lay out 100ms' mission: bring the world closer by enabling real, life-like, live conversations virtually. We want our customers to be able to offer their users an online interactive experience that is as close to the real-world as possible.&lt;/p&gt;

&lt;p&gt;This is where roles come in. They allow the easy recreation of real-life interactions on video, as this article will demonstrate with an example. In 100ms terminology, &lt;a href="https://www.100ms.live/docs/flutter/v2/foundation/templates-and-roles"&gt;a role is a collection of permissions&lt;/a&gt; that allow users to perform certain tasks while being part of the meeting room. Essentially, the role determines whether a user in the room has publish/subscribe permissions. It determines whether they can speak, mute other users, be muted, share their screen, etc.&lt;/p&gt;

&lt;p&gt;Before moving on, let us understand roles better by diving into their component features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Publishing rights:&lt;/strong&gt; The term "publish" here refers to a user's ability to share audio, video and if needed, their screen, when in a video call. The user's particular role decides whether they can share their audio/video/screen to the room.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subscribing rights:&lt;/strong&gt; The term "subscribe" here refers to a user's ability to view and listen to the video and audio being shared ("published") by others in the room. Depending on the use-case, they may only be able to subscribe to one person's (host) audio-video or to multiple users' streams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Permissions and Power:&lt;/strong&gt; Developers and specific users of a video app should be able to configure and manipulate roles (others' and their own). This would let them perform actions such as muting/unmuting others in the room, letting them share their screen or expelling them from the meeting altogether.&lt;/p&gt;
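
&lt;p&gt;As a rough sketch, a role can be modeled as a bundle of these three kinds of rights. The names below are illustrative and not the actual 100ms role schema:&lt;/p&gt;

```python
# Hypothetical model of a role as a collection of permissions.
# Field names and role names are illustrative only.
TEACHER = {
    "name": "teacher",
    "publish": {"audio", "video", "screen"},
    "subscribe_to": {"student"},
    "permissions": {"mute_others", "remove_participant"},
}
STUDENT = {
    "name": "student",
    "publish": set(),                 # may be granted later by the teacher
    "subscribe_to": {"teacher"},
    "permissions": set(),
}

def can_publish(role, track):
    return track in role["publish"]

print(can_publish(TEACHER, "screen"))   # True
print(can_publish(STUDENT, "screen"))   # False
```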

&lt;h2&gt;
  
  
  &lt;strong&gt;How Roles Work: An Example&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let’s take a closer look at roles through a simple example: a virtual events app for online concerts. The real-world experience this app is trying to replicate is denoted by the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jgvAEF8d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sv3sysu4od09mq7anfv1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jgvAEF8d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sv3sysu4od09mq7anfv1.jpeg" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Built with 100ms, this hypothetical app has three roles in place: “stage” (where the artist performs), “audience” (where the viewers watch the performance online) and “backstage” (where the person/people handling tech/logistics keep the show going as expected).&lt;/p&gt;

&lt;p&gt;And, these roles will have the following permissions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PeQimMAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i8ci8fv645srvoaz9t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PeQimMAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i8ci8fv645srvoaz9t7.png" alt="Image description" width="880" height="605"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Those in the “stage” role can sing (publish their audio-video) and invite audience members onto the stage to interact with them. Those in the “audience” role can only view the artist’s stream, unless they have been invited to the “stage”. Those in the “backstage” role can interact with others in the same role, kick out audience members if required, change lights in the artist’s video and, of course, view the artist on “stage”.&lt;/p&gt;

&lt;p&gt;A real-world app would have more roles and each role would have more permissions. But this is what roles fundamentally do.&lt;/p&gt;

&lt;p&gt;Now, be it Meet, Twitch, or your cousin's school app, the concept of roles exists within all video SDKs. For example, these are the default roles we see in a typical webinar setup: a host (sometimes more than one, depending on the app) and multiple participants. The host publishes their audio-video streams; the participants subscribe to the host, and sometimes to other participants. The host can share their screen, and so can the participants, but only if the host allows it.&lt;/p&gt;

&lt;p&gt;However, most real-world scenarios need more than two roles to recreate real-world experiences. If you wanted more roles with varied permissions, you’d have to code them from scratch.&lt;/p&gt;

&lt;p&gt;So, what’s the solution? How do we make the process of creating varied roles easier?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The answer: custom roles on 100ms.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why are custom roles required?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As mentioned above, if you want to give users a nuanced, layered experience that matches their day-to-day life, two or three simple roles are not enough. Let’s take another example for this: a video consultation app for doctors and patients.&lt;/p&gt;

&lt;p&gt;In the real world, patients are greeted by a nurse/receptionist who takes their information, they wait in the waiting room, and when the doctor is ready, they are called into the consultation room. With roles limited to host and audience, you can’t do this virtually. You would have to create custom roles for “waiting room”, “nurse” and the like, which usually takes significant time, effort, and resources.&lt;/p&gt;

&lt;p&gt;However, using &lt;a href="https://www.100ms.live/"&gt;100ms&lt;/a&gt;, you can create custom roles with far greater ease. Owing to our built-in customizability and extensibility, you can actually build such a consultation app with zero lines of code. This is demonstrated step-by-step below.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Custom roles in action&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the aforementioned video consultation app, we can customize a user’s journey by modifying who they are publishing to and subscribing to, i.e. whose audio-video they can watch/listen to, and vice-versa.&lt;/p&gt;

&lt;p&gt;Here’s what the patient’s journey looks like: when a patient enters, they initially communicate with a nurse in a virtual “waiting room”, where their information (name, gender, DOB, temperature, weight, symptoms) is noted by the nurse. After this, they wait until the doctor is able to communicate with them.&lt;/p&gt;

&lt;p&gt;In the language of roles, they will initially publish and subscribe to someone in the “nurse” role. Then, their role will be changed so that they publish and subscribe to someone in the “doctor” role.&lt;/p&gt;

&lt;p&gt;Using 100ms, devs can map out this exact journey without writing a single line of code—just a few clicks on the dashboard.&lt;/p&gt;

&lt;p&gt;To demonstrate roles in action, let’s put them to a test and visualize a user’s journey to the online “clinic”.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Using Roles to recreate a Patient’s Consultation Experience Online&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a free account and go to the 100ms dashboard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on Create New. There are multiple templates to choose from, depending on your use case. For a simple video conferencing app, select the ‘video conferencing’ template. But we’ll go with something else for our app.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K9YvjvBY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7u76ydprsvn66njg2wr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K9YvjvBY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7u76ydprsvn66njg2wr.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the ‘Create Custom App’ option.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZmvXOGFu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfo7h4g9thudf00jzfyn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZmvXOGFu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rfo7h4g9thudf00jzfyn.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This will let us access the ‘Create Roles’ option so that we can customize roles for our use case.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gOmH89tx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tc8kfw5dn3b1kwdep77r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gOmH89tx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tc8kfw5dn3b1kwdep77r.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For a clinic app, we require 4 roles: consultation-admin, consultation-area, reception-admin, reception-area (details of each role explained below). Click ‘Add a Role’ and name them accordingly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8faBELrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7pdmwnqob5ux7hhoci0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8faBELrF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7pdmwnqob5ux7hhoci0.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As soon as a new person enters the clinic, they’ll be assigned the “reception-area” role. When the doctor is ready, the role will be changed to “consultation-area”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VocfNpLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt6jbq6d4ew1cjyi27vo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VocfNpLF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pt6jbq6d4ew1cjyi27vo.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, we have created the four roles we require: consultation-admin, consultation-area, reception-admin, and reception-area.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZCPmUeNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hmysgmjzqm9sdioaib6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZCPmUeNg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hmysgmjzqm9sdioaib6.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Handling Permissions for each Role&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;100ms enables users to quickly and easily modify the permissions of different roles and subscription strategies, right from the dashboard.&lt;/p&gt;

&lt;p&gt;In the example, here’s what each role entails:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;reception-area:&lt;/strong&gt; The role assigned to the patient when they first enter the online clinic. This role can publish and subscribe only to the “reception-admin” role.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;reception-admin:&lt;/strong&gt; The role assigned to the “nurse” who greets the patient and takes their info. This role can publish and subscribe to the other three roles. They can also change a patient’s role from “reception-area” to “consultation-area”. If needed, they can remove the person in the “reception-area” role from the meeting room entirely.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;consultation-area:&lt;/strong&gt; The role assigned to the patient when the doctor is ready to see them. The nurse in the “reception-admin” role changes the patient’s role from “reception-area” to “consultation-area” when the doctor is ready.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;consultation-admin:&lt;/strong&gt; The role assigned to the doctor. This role can publish and subscribe to the “consultation-area” and “reception-admin” roles. If needed, they can expel the person in the “consultation-area” role completely.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
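&lt;p&gt;Conceptually, a role is just a named bundle of publish/subscribe targets and permissions. As an illustration only (a toy model, not the 100ms API), the four roles above can be sketched like this:&lt;/p&gt;

```python
# Toy model of roles as named bundles of publish/subscribe permissions.
# This mirrors the four roles configured on the dashboard; it is NOT 100ms code.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    publish_to: set = field(default_factory=set)    # roles that receive this role's audio/video
    subscribe_to: set = field(default_factory=set)  # roles whose audio/video this role receives
    can_change_role: bool = False
    can_remove_peer: bool = False

reception_area = Role("reception-area",
                      publish_to={"reception-admin"},
                      subscribe_to={"reception-admin"})
reception_admin = Role("reception-admin",
                       publish_to={"reception-area", "consultation-area", "consultation-admin"},
                       subscribe_to={"reception-area", "consultation-area", "consultation-admin"},
                       can_change_role=True, can_remove_peer=True)
consultation_area = Role("consultation-area",
                         publish_to={"consultation-admin"},
                         subscribe_to={"consultation-admin"})
consultation_admin = Role("consultation-admin",
                          publish_to={"consultation-area", "reception-admin"},
                          subscribe_to={"consultation-area", "reception-admin"},
                          can_remove_peer=True)

def can_see(viewer: Role, publisher: Role) -> bool:
    """True when the publisher streams to the viewer and the viewer subscribes to it."""
    return viewer.name in publisher.publish_to and publisher.name in viewer.subscribe_to

# A patient in the waiting room sees the nurse, but not the doctor:
assert can_see(reception_area, reception_admin)
assert not can_see(reception_area, consultation_admin)
```

&lt;p&gt;On 100ms, this entire matrix is configured with clicks on the dashboard instead of in code.&lt;/p&gt;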

&lt;p&gt;On the 100ms dashboard, we can assign mute/unmute, screenshare, and publish/subscribe permissions to each of these roles with a couple of clicks. We can even give certain roles the ability to change other roles, or even remove somebody from the meeting room entirely.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qf7jXlMB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0w1w8ryq8r3tluogq11.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qf7jXlMB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0w1w8ryq8r3tluogq11.jpeg" alt="Image description" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To go back to our example, we start by modifying the nurse role.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As mentioned, when a person walks into a clinic they will automatically be assigned the “reception-area” role. There, someone in the “reception-admin” role greets them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The reception-admin role should be subscribed to the person in the “reception-area” role, and also have permissions to modify the “reception-area” roles, or even remove them if necessary.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pe71DJ9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvj5o5jt53ig32ltbgck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pe71DJ9T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yvj5o5jt53ig32ltbgck.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A person in the “reception-area” role should be subscribed to the “reception-admin” role, but they will not have permission to modify their roles or add/remove them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sfGz8UR4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6ndzct15aklxstymrdj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sfGz8UR4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6ndzct15aklxstymrdj.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This will now serve as a waiting room/reception area where the nurse connects with the incoming patients.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, we assign permissions to the “consultation-admin” role. Since the “consultation-admin” can call in a patient, they need to have administrative permissions to modify user roles, mute/unmute them, share their own screen, and the like.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0PtOFxX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkjdqs5dg9c9o0s95i6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y0PtOFxX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wkjdqs5dg9c9o0s95i6v.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The “consultation-admin” will also be subscribed to the “consultation-area” role.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QG_xG_23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgrums1g9400pn4bback.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QG_xG_23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgrums1g9400pn4bback.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lastly, we modify the “consultation-area” role. They will be subscribed to the “consultation-admin” role to enable consultation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that done, we have successfully modified user permissions for our use case. Now, we implement the app using nothing but the power of roles and customization provided on the 100ms dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jGbvEwbB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ht8vx180jv7iht6dpfrv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jGbvEwbB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ht8vx180jv7iht6dpfrv.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Note that we have been able to do all this without writing a single line of code!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The last step now is to pick a domain. Let’s go ahead with “hospital.app.100ms.live” as the subdomain and click on ‘Set up app’.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;100ms enables you to have a completely personalized subdomain for your app—for free. You can easily host these powerful video templates on your own domain URL with a click.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JbVSAhEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hhh5sae5eze2v310vkgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JbVSAhEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hhh5sae5eze2v310vkgw.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The app is now ready to use.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Testing the App&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With that done, we are ready to test out the app.&lt;/p&gt;

&lt;p&gt;Here’s what our newly set up telehealth app looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/webB4efxxyg"&gt;https://youtu.be/webB4efxxyg&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TunobRvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t58w1u9wtqw718zl81xr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TunobRvX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t58w1u9wtqw718zl81xr.png" alt="Image description" width="880" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s it! 🚀&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have successfully implemented a clinic-like experience digitally using nothing but roles on the 100ms dashboard.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is just the start.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With custom roles at your fingertips, the applications are limitless. Here are a few quick examples of virtual scenarios you can easily build with roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A digital classroom where a teacher admits students when required. The same waiting-room roles depicted above can be used here.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Online performances and events. It would be a simple task to create roles for “backstage”, “stage”, and “audience”—also exemplified above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Online games: create roles for “dealer”, “player” and “spectator” for poker, for example.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Virtual interviews&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Celebrity fanmeets … and so much more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Roles are the Silver Bullet in your App-Building Arsenal&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As mentioned before, 100ms seeks to enable easier, more human communication by allowing customers to create interactive video apps that match our regular interactions as closely as possible. This is the whole point of the &lt;a href="https://www.100ms.live/marketplace"&gt;100ms Marketplace&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Our customers don’t have to worry about how to set up permissions for users of their apps. They don’t have to work on the basics: coding permissions and publish/subscribe strategies for specific scenarios. They only have to imagine and conceptualize how user journeys will work, and using roles, developers can set them up with a few clicks—no coding involved.&lt;/p&gt;

&lt;p&gt;In fact, our customers have already achieved this. Have a look at how &lt;a href="https://www.100ms.live/blog/mingout-100ms-reimagine-online-dating"&gt;Mingout, a dating app, used roles on 100ms to reimagine first dates online&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the developers out there, &lt;a href="https://www.youtube.com/watch?v=W-92AslN-EI&amp;amp;t=462s"&gt;here is a closer, more dev-centric dive into how roles work&lt;/a&gt;: a video on building a Clubhouse clone from scratch using React. It starts by examining the 100ms SDK and demonstrates how roles ease the process of app building.&lt;/p&gt;

&lt;p&gt;If you’re curious, try it out for yourself. &lt;a href="https://dashboard.100ms.live/register?__hstc=159648061.f079b4acf665d0fbf04f116fc64e1893.1655282117756.1659703352701.1660025193766.89&amp;amp;__hssc=159648061.2.1660025193766&amp;amp;__hsfp=1623975401"&gt;Get Started with 100ms for free&lt;/a&gt;, and play around with roles to bring an imagined app to life with just a few clicks!&lt;/p&gt;

</description>
      <category>videoconference</category>
      <category>developers</category>
      <category>livevideo</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Introduction to Low Latency Streaming with HLS</title>
      <dc:creator>Vrushti </dc:creator>
      <pubDate>Fri, 05 Aug 2022 10:28:08 +0000</pubDate>
      <link>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-27f1</link>
      <guid>https://dev.to/100mslive/introduction-to-low-latency-streaming-with-hls-27f1</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x72zb5o3yjpmxzf0k8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x72zb5o3yjpmxzf0k8i.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whether it’s a World Cup match, the Super Bowl, or the French Open finals, watching it with your friends on a Saturday night is #goals. Sadly, not all of us can get tickets and travel across cities, countries, or continents to attend them. Thankfully, live streaming makes it possible to watch all the action, close to real-time.&lt;/p&gt;

&lt;p&gt;But, the only question is “how close to real-time are we talking?”&lt;/p&gt;

&lt;p&gt;Video streaming is largely facilitated on the back of a video protocol called HLS (HTTP Live Streaming). While the origins and fundamentals of HLS are explained in another piece on our blog, the current piece will focus on how HLS resolved one of its greatest shortcomings: latency.&lt;/p&gt;

&lt;p&gt;To start with, let’s take a quick peek at how HLS works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Way of the HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We will first try to understand how HLS works and makes live streaming possible. This is what the typical flow of an HLS streaming system looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The audio/video stream captured by input devices is encoded and ingested into a media server.&lt;/li&gt;
&lt;li&gt;The media server transcodes the stream into an HLS-compatible format with multiple ABR variants and also creates a playlist file to be used by the video players.&lt;/li&gt;
&lt;li&gt;Then, the media server serves the media and the playlist file to the clients, either directly or via CDNs by acting as an origin server.&lt;/li&gt;
&lt;li&gt;The players, on the client end, make use of the playlist file to navigate through the video segments. These segments are typically “slices” of the video being generated, with a definite duration (called segment size, usually 2 to 6 seconds).&lt;/li&gt;
&lt;li&gt;The playlist is refreshed based on segment size and players can select the segments specified in them, based on the order of playback and the video quality they require.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5fpbry0jf4oijrixnxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5fpbry0jf4oijrixnxf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even though &lt;strong&gt;HLS&lt;/strong&gt; offers a reliable way of video streaming, its high latency levels may pose obstacles and issues for many streamers or video distributors. According to the initial specification, a player should load the media files in advance before playing them. This makes HLS an inherently higher-latency protocol, with a latency of about 30 to 60 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Tuning HLS for Low Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Everyone was interested in implementing HLS, but the high latency was a serious roadblock. So, devs and enthusiasts started to find workarounds to reduce latency and refine the protocol for effective usage. Some of these practices offered such positive results that they became a de facto standard alongside the HLS specification. Two of these practices are listed below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reducing the default segment size&lt;/strong&gt;&lt;br&gt;
When Apple introduced HLS, the typical segment size was 10 seconds. Most HLS implementers found it too long, so Apple reduced the recommendation to 6 seconds. The overall latency can be reduced by shrinking both the segment size and the buffer size of the player.&lt;/p&gt;

&lt;p&gt;However, this approach has trade-offs, including an increased overall bitrate, and buffering or jitter for viewers on poor network connections. The ideal segment size should be decided based on the target audience and typically falls in the range of 2 to 4 seconds.&lt;/p&gt;
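&lt;p&gt;As back-of-envelope arithmetic, assuming (as many players do) a startup buffer of roughly three segments plus a couple of seconds of encoding and delivery overhead, the effect of segment size on latency is easy to see:&lt;/p&gt;

```python
# Back-of-envelope estimate of HLS glass-to-glass latency.
# Assumptions (ours, not from the HLS spec): the player buffers about
# three segments before playback, and encoding/packaging/CDN delivery
# add roughly two seconds of overhead.
def estimated_latency(segment_size_s: float,
                      buffered_segments: int = 3,
                      pipeline_overhead_s: float = 2.0) -> float:
    return segment_size_s * buffered_segments + pipeline_overhead_s

print(estimated_latency(6))  # 6 s segments -> 20.0 s
print(estimated_latency(2))  # 2 s segments -> 8.0 s
```

&lt;p&gt;Halving the segment size roughly halves the buffered latency, which is why tuning stops paying off once segment overhead and network jitter start to dominate.&lt;/p&gt;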

&lt;p&gt;&lt;strong&gt;Media Ingest with faster protocols&lt;/strong&gt;&lt;br&gt;
The main reason HLS is used for live streaming is the scalability, reliability and player compatibility it provides across all platforms, especially when compared to other protocols. This has made HLS irreplaceable for video delivery so far.&lt;/p&gt;

&lt;p&gt;But the first mile contribution (also known as ingest) from the HLS stack can be replaced with lower latency protocols to reduce overall latency.&lt;/p&gt;

&lt;p&gt;The HLS ingest is usually replaced by RTMP ingest, which enjoys wide encoder and service support and has proved to be a cost-effective solution. The stream ingested with RTMP is then transcoded to HLS by a media server before the content is served. Even though there have been experiments with other protocols, such as WebRTC and SRT, for the ingest leg, RTMP remains the most popular option.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Evolution of HLS to LL-HLS&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The latency in HLS started posing a significant hurdle, leading to less than stellar user experiences. This was becoming more frequent since HLS was being widely adopted around the world. Tuning HLS wasn’t enough and everyone was looking for better and more sustainable solutions.&lt;/p&gt;

&lt;p&gt;It was in 2016 that Twitter’s Periscope engineering team made some major changes to their implementation in order to achieve low latency with HLS. This proprietary version of HLS, often referred to as LHLS, offered latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;DASH, the main competitor to HLS, came up with a low latency solution based on chunked CMAF in 2017, following which a community-based low latency HLS solution (L-HLS) was drafted in 2018. This variant was heavily inspired by Periscope’s LHLS and leveraged Chunked Transfer Encoding (CTE) to reduce latency. It is often referred to as Community Low Latency HLS (CL-HLS).&lt;/p&gt;

&lt;p&gt;While this version of HLS was gaining popularity, Apple decided to release their own extension of the protocol called Low Latency HLS (LL-HLS) in 2019. This is often referred to as Apple Low Latency HLS (ALHLS). This version of HLS offered low latency comparable to the CL-HLS and promised compatibility with Apple devices. Since then, LL-HLS has been merged into the HLS specification and has technically become a single protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How LL-HLS reduces Latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we’ll explore the changes LL-HLS brings to HLS, making low latency streaming possible. The protocol came with two main spec changes responsible for its low latency nature. One is to divide segments into parts and deliver them as soon as they’re available. The other is to inform the player about the data to be loaded next, before said data is even available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dividing Segments into Parts&lt;/strong&gt;&lt;br&gt;
The video segments are further divided into parts (similar to chunks used in CMAF). These parts are just “smaller segments” with a definite duration - represented with EXT-X-PART tag in the media playlist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wstuih0kjpthsf0jh8e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wstuih0kjpthsf0jh8e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because parts are published while the segment is still being generated, players can fill up their buffers more efficiently. Reducing the buffer size on the player side using this approach results in reduced latency. The parts are then collectively replaced with their respective segments upon completion, and the segments remain available for a longer period of time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preload Hints&lt;/strong&gt;&lt;br&gt;
When LL-HLS was first introduced, it had HTTP/2 push specified as a requirement on the server side for sending new data to clients. Many commercial CDN providers were not supporting this feature at the time, which resulted in a lot of confusion.&lt;/p&gt;

&lt;p&gt;This issue was addressed by Apple in a subsequent update, replacing the HTTP/2 push with preload hints. They decided to include support for preload hints by adding a new tag EXT-X-PRELOAD-HINT to the playlist, reducing overhead.&lt;/p&gt;

&lt;p&gt;With the help of a preload hint, a video player can anticipate the data to be loaded next and send a request to the URI in the hint to gain faster access to the next part. The server should block such a request and respond as soon as the data becomes available, thus reducing latency.&lt;/p&gt;
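&lt;p&gt;This “blocking” behaviour can be modelled in a few lines. Here is a minimal sketch using a toy in-memory store; none of these names come from the HLS spec, and a real origin would do this inside its HTTP request handlers:&lt;/p&gt;

```python
# Toy model of LL-HLS preload-hint handling on the server side: a request
# for a hinted part blocks until that part is published, then returns at
# once. In-memory sketch only, not a real HTTP server.
import threading

class PartStore:
    def __init__(self) -> None:
        self._parts: dict[str, bytes] = {}
        self._cond = threading.Condition()

    def publish(self, uri: str, data: bytes) -> None:
        """Called by the packager when a new part is complete."""
        with self._cond:
            self._parts[uri] = data
            self._cond.notify_all()  # wake any blocked requests

    def get(self, uri: str, timeout: float = 5.0) -> bytes:
        """Block the client's request until the hinted part exists."""
        with self._cond:
            self._cond.wait_for(lambda: uri in self._parts, timeout)
            return self._parts[uri]

store = PartStore()
result: list[bytes] = []

# The player requests the hinted part before it exists...
t = threading.Thread(target=lambda: result.append(store.get("segmentC.part4")))
t.start()
# ...and the request completes the moment the packager publishes it.
store.publish("segmentC.part4", b"part-4-bytes")
t.join()
assert result == [b"part-4-bytes"]
```

&lt;p&gt;The point of blocking rather than returning 404 is that the client never has to poll: the response arrives the instant the part does.&lt;/p&gt;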

&lt;p&gt;&lt;strong&gt;A look at the LL-HLS Media Playlist&lt;/strong&gt;&lt;br&gt;
Now, let’s take a look at how these tags are specified in the media playlist file, using an example. We will assume the segment size to be 6 seconds and the part size to be 200 milliseconds. We will also assume that 2 segments (segment A and B) have been completely played, while the 3rd segment (segment C) is still being generated. This segment is being published as a list of parts in the order of playback because it has not yet been completed.&lt;/p&gt;

&lt;p&gt;The following is a sample media playlist (M3U8 file).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0vf9ngm1v0tu6st5p0x.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0vf9ngm1v0tu6st5p0x.PNG" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Players that don’t support LL-HLS yet tend to ignore tags like EXT-X-PART and EXT-X-PRELOAD-HINT, enabling them to treat the playlist as traditional HLS and load full segments at a higher latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Low-Latency HLS on non-Apple devices&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The new and improved HLS has a latency of about 3 seconds or less. The only reasonable competition for this protocol is LL-DASH. But Apple does not support DASH on all of its devices. This makes LL-HLS the only low latency live streaming protocol that has wide client-side support including Apple devices.&lt;/p&gt;

&lt;p&gt;One of the main advantages of using LL-HLS is its backward compatibility with legacy HLS players. Players that don’t support this variant may fall back to standard HLS and still work, at a higher latency. Since the protocol requires players to start loading unfinished media segments instead of waiting until they are fully available, the spec changes made it difficult for all players to adopt quickly.&lt;/p&gt;

&lt;p&gt;It took a while for most non-Apple devices to start supporting LL-HLS. Now, it is widely supported across almost all platforms with relatively recent player versions. While a few players have been planning support since the protocol’s inception, most implementations are newer and are still improving their compatibility.&lt;/p&gt;

&lt;p&gt;Here are some popular players from different platforms that support LL-HLS in its entirety:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AVPlayer (iOS)&lt;/li&gt;
&lt;li&gt;ExoPlayer (Android)&lt;/li&gt;
&lt;li&gt;THEOplayer&lt;/li&gt;
&lt;li&gt;JW Player&lt;/li&gt;
&lt;li&gt;hls.js&lt;/li&gt;
&lt;li&gt;Video.js&lt;/li&gt;
&lt;li&gt;AgnoPlay&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Comparing LL-HLS, LL-DASH and WebRTC&lt;/strong&gt;&lt;br&gt;
Here, we compare the three protocols LL-HLS, LL-DASH, and WebRTC on six parameters: compatibility, delivery method, support for ABR, security, latency, and best use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compatibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LL-HLS provides good support for all Apple devices and browsers. It has been gaining support for most non-Apple devices.&lt;/li&gt;
&lt;li&gt;LL-DASH supports most non-Apple devices and browsers but is not supported on any Apple device.&lt;/li&gt;
&lt;li&gt;WebRTC is supported across all popular browsers and platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Delivery Method&lt;/strong&gt;&lt;br&gt;
First, let’s go through a few relevant terms used with CMAF (Common Media Application Format).&lt;/p&gt;

&lt;p&gt;Chunked Encoding (CE) is a technique used for making publishable “chunks”. When added together, these chunks create a video segment. Chunks have a set duration and are the smallest unit that can be published.&lt;/p&gt;

&lt;p&gt;Chunked Transfer Encoding (CTE) is a technique used to deliver the “chunks” as they are created in a sequential order. With CTE, one request for a segment is enough to receive all its chunks. The transmission ends once a zero-length chunk is sent. This method allows even small chunks to be used for transfer.&lt;/p&gt;
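
&lt;p&gt;The framing described above can be sketched in a few lines of Python. This is an illustrative sketch, not a spec-complete implementation (trailers and chunk extensions are omitted): each chunk is framed by its size in hex, followed by the payload, and a zero-length chunk terminates the transmission:&lt;/p&gt;

```python
def frame_chunks(chunks):
    """Frame byte chunks the way HTTP Chunked Transfer Encoding does:
    a hex size line, the payload, then CRLF; a zero-length chunk ends it."""
    for chunk in chunks:
        yield b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    yield b"0\r\n\r\n"  # zero-length chunk signals end of transmission

# A segment produced as three small chunks can be sent as each one is encoded,
# without waiting for the full segment to exist:
frames = list(frame_chunks([b"moof+mdat-1", b"moof+mdat-2", b"moof+mdat-3"]))
```

&lt;p&gt;Because each frame can be written to the socket the moment the encoder emits it, the client starts receiving a segment while it is still being produced.&lt;/p&gt;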

&lt;ul&gt;
&lt;li&gt;LL-HLS uses Chunked Encoding to create “parts” or “chunks” of a segment. But instead of using Chunked Transfer Encoding, the protocol uses its own method of delivering chunks over TCP: the client has to make a request for every single part, instead of requesting the whole segment and receiving it in parts.&lt;/li&gt;
&lt;li&gt;LL-DASH uses Chunked Encoding for creating chunks and Chunked Transfer Encoding for delivering them over TCP.&lt;/li&gt;
&lt;li&gt;WebRTC uses the Real-time Transport Protocol (RTP) for sending video and audio streams over UDP.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Support for Adaptive Bitrate (ABR)&lt;/strong&gt;&lt;br&gt;
Adaptive Bitrate (ABR) is a technique for dynamically adjusting the compression level and video quality of a stream to match available bandwidth. It heavily impacts the viewer’s streaming experience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LL-HLS has support for ABR.&lt;/li&gt;
&lt;li&gt;LL-DASH has support for ABR.&lt;/li&gt;
&lt;li&gt;WebRTC doesn’t support ABR, but a similar technique called simulcast is used for dynamically adjusting video quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH support media encryption and benefit from security features such as token authentication and digital rights management (DRM).&lt;/p&gt;
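
&lt;p&gt;As a sketch of what this looks like in an HLS playlist (the URI and IV below are hypothetical), the playlist declares AES-128 encryption and points at a key endpoint, which the server can gate behind a token:&lt;/p&gt;

```
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/k42?token=abc123",IV=0x00000000000000000000000000000042
```

&lt;p&gt;Only clients that present a valid token can fetch the key and decrypt the segments.&lt;/p&gt;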

&lt;p&gt;WebRTC supports end-to-end encryption of media in transit, along with user, file, and round-trip authentication. This is often sufficient in place of DRM.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;&lt;br&gt;
Both LL-HLS and LL-DASH have a latency of 2 to 5 seconds.&lt;/p&gt;

&lt;p&gt;WebRTC, on the other hand, has a sub-second latency of roughly 500 milliseconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Use Case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Both LL-HLS and LL-DASH are best suited for live streaming events that need to be delivered to millions of viewers. They are often used for streaming sporting events live.&lt;/p&gt;

&lt;p&gt;WebRTC is most frequently used for solutions such as video conferencing that require minimal latency and are not expected to scale to a large number of participants.&lt;/p&gt;

&lt;p&gt;Now that HLS supports low-latency streaming, it is all set to conquer the video streaming space, ready to serve millions of fans watching their favourite team play a crucial match without any issues. Whether you want to start live streaming yourself or build an app that facilitates live streaming, LL-HLS remains your best friend.&lt;/p&gt;

</description>
      <category>hls</category>
      <category>beginners</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Agora vs Twilio vs Jitsi vs Zoom vs 100ms</title>
      <dc:creator>Pro100ms</dc:creator>
      <pubDate>Thu, 05 May 2022 15:52:27 +0000</pubDate>
      <link>https://dev.to/100mslive/agora-vs-twilio-vs-jitsi-vs-zoom-vs-100ms-3kai</link>
      <guid>https://dev.to/100mslive/agora-vs-twilio-vs-jitsi-vs-zoom-vs-100ms-3kai</guid>
      <description>&lt;p&gt;Unsurprisingly, the Covid-19 pandemic revealed exactly how useful audio-video conferencing software could be. Having to shelter in place and work remotely, people worldwide had to manage everything from doctor’s appointments, fitness classes, and dating to Friday night drinks and live concerts on video apps.&lt;/p&gt;

&lt;p&gt;Naturally, video conferencing software has seen massive growth in usage during the pandemic. There was a &lt;a href="https://www.bugsnag.com/covid-19-app-usage-error-data-report"&gt;627% increase in downloads of video chat and online conference apps&lt;/a&gt; in North America. A &lt;a href="https://www.bugsnag.com/covid-19-app-usage-error-data-report"&gt;121% increase in daily active users&lt;/a&gt; was also observed. The explosion of Zoom and Google Meet is common knowledge by now. Even &lt;strong&gt;Skype for Business, GoToMeeting, and JoinMe&lt;/strong&gt; apps saw &lt;a href="https://www.bugsnag.com/covid-19-app-usage-error-data-report"&gt;downloads increase by 66%, 85%, and 43% in March&lt;/a&gt; 2020.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;App Categories with Significant Growth&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As it stands, users expect certain apps to have in-built audio-video communication capabilities. The numbers attest to this. Video conferencing &lt;a href="https://digitalintheround.com/video-conferencing-statistics/"&gt;improves communication for 99% of people&lt;/a&gt;. Video meetings &lt;a href="https://digitalintheround.com/video-conferencing-statistics/"&gt;boost productivity by 50%&lt;/a&gt;. By every metric, video conferencing capabilities improve user experience.&lt;/p&gt;

&lt;p&gt;However, embedding audio-video communication into an app from scratch requires time, effort, and investment. Instead, it's much easier to implement audio-video using an SDK designed for that purpose.&lt;/p&gt;

&lt;p&gt;Since there are multiple products and SDKs aimed at helping devs build comprehensive video conferencing into their software, choosing the right platform is more complicated than it seems.&lt;/p&gt;

&lt;p&gt;To help devs and product managers make a more informed decision, this article will break down the main features and limitations of five tools that provide the infrastructure for embedding in-app audio-video functionality: &lt;strong&gt;Agora, Twilio, Jitsi, Zoom,&lt;/strong&gt; and &lt;strong&gt;100ms&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This piece will compare tools on six parameters: &lt;strong&gt;ease of integration, error handling, scalability, cost of support, plugins for easy feature development, and pricing&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agora: Real-Time Engagement for Apps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Agora? What are its main offerings?
&lt;/h3&gt;

&lt;p&gt;Agora is an API-first SaaS company that started by providing real-time audio and video broadcast APIs. Now, it has expanded to a platform that allows customers to create rich, in-app audio-video features, such as real-time recording and messaging, embedded video, and video chat as well as interactive live video streaming.&lt;/p&gt;

&lt;p&gt;Main features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-platform SDKs that are highly customizable.&lt;/li&gt;
&lt;li&gt;Using low code UIKit libraries, users can embed a real-time video UI with a few lines of code.&lt;/li&gt;
&lt;li&gt;Agora uses its own Software Defined Real-time Network (SD-RTN™), a real-time transmission network. Unlike a traditional carrier network, the SD-RTN™ is not limited by device type, phone numbers, or a network provider’s coverage radius.&lt;/li&gt;
&lt;li&gt;Multiple extensions (interactive whiteboard, cloud recording, Agora analytics) enable the easy addition of new and useful features to an app.&lt;/li&gt;
&lt;li&gt;Offers official SDKs for React Native, Electron, Unity, Cocos, and Flutter.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Agora’s Key Functionality Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Ease of integration:&lt;/strong&gt; Agora offers both pre-built and custom SDKs. The pre-built version can be installed and activated in a few lines of code but is neither customizable nor extensible. It comes with two pre-defined permissions (roles) for peers within a call: host and participant.&lt;/p&gt;

&lt;p&gt;Developers have to handle low-level publish-subscribe abstractions. This adds overhead in handling network exceptions, bandwidth management, and writing role-based infrastructure. Developers will also have to manually configure the permissions for different roles in the call (teacher vs. student, instructor vs. learner, etc.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Error Handling:&lt;/strong&gt; Agora does not support in-built disconnection handling and edge cases on devices like app background handling, switching of microphones, etc. Devs must write extra code to set it up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scalability:&lt;/strong&gt; Within a single call, Agora allows a maximum of 17 hosts and a total of 10,000 participants, including the hosts. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Cost of Support:&lt;/strong&gt; Agora’s free Starter plan offers Tickets/Email support, Online Documentation, and KB Access. Services like code review, guaranteed response times, live developer consultation, and training must be paid for (the lowest price being $1200/month and the highest being $4900/month).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Plugins for easy feature development&lt;/strong&gt;: Agora offers multiple plugins for the easier development of feature-rich apps. Users can access &lt;a href="https://www.agora.io/en/agora-extensions-marketplace/"&gt;Agora’s plugin marketplace&lt;/a&gt; for a large number of integrations with various functions. However, adding all plugins requires extra coding effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Pricing:&lt;/strong&gt; Starts at $4, but Agora’s pricing policy is quite layered and will require close examination before purchase.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agora’s Limitations
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Pricing for Agora &lt;a href="https://www.100ms.live/hms-vs-agora"&gt;goes up exponentially as the number of video participants increases&lt;/a&gt;. Anytime the download quality on a call exceeds 720p (which can happen by screen-sharing between two peers), charges switch to HD prices. Agora’s pricing plans can also be complicated for users.&lt;/li&gt;
&lt;li&gt;You cannot have more than 17 hosts in a meeting.&lt;/li&gt;
&lt;li&gt;Integrating the SDK is quite complicated and time-consuming.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Twilio: Meaningful In-App Customer Engagement
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Twilio? What are its main offerings?
&lt;/h3&gt;

&lt;p&gt;Twilio originally offered an API for automating traditional phone calls and SMS text messages. Today, it provides programmable APIs to help developers build business communication (in-app and otherwise) across the customer journey. They allow devs to integrate audio and video interactions into multiple platforms.&lt;/p&gt;

&lt;p&gt;Once the Twilio API has been integrated, interactions can take the form of SMS, WhatsApp, Voice, Video, email, and even IoT.&lt;/p&gt;

&lt;p&gt;Main Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Communications APIs to implement messaging, voice chats, and video conversations within the software or outside its UI (SMS, WhatsApp messages, etc.)&lt;/li&gt;
&lt;li&gt;Programmable connectivity features (Chat, Voice API, Video) for generating virtual phone numbers, initiating SIP trunking, and messaging.&lt;/li&gt;
&lt;li&gt;Use case-based APIs that allow the abstraction required for tasks related to authentication, message control, and call routing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Twilio’s Key Functionality Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Ease of integration:&lt;/strong&gt; Twilio provides web, iOS, and Android SDKs. The SDK does not support other frameworks like Flutter and React Native. When using multiple audio and video inputs, devs must manually configure them, for which they have to write extra code.&lt;/p&gt;

&lt;p&gt;Much like Agora, devs have to handle low-level publish-subscribe abstractions to set up Twilio integrations. They must also manually define permissions for all actors within a call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Error Handling&lt;/strong&gt;: Manual configuration is required to build bandwidth management. Twilio offers extensive call insights to track and analyze errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scalability&lt;/strong&gt;: Twilio supports a maximum of 50 hosts within a call and a maximum of 50 participants, including hosts. You can switch to Twilio Live for HLS streaming and accommodate unlimited participants in a call. But, separate SDK integration is required to stream video via HLS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Cost of Support:&lt;/strong&gt; Twilio’s free support plan offers API status notifications and email support during business hours. Users have to pay for more services like 24/7 live chat support, support escalation line, quarterly status review, and guaranteed response times. The price depends on the support plan, usually a percentage of the monthly plan or a certain minimum amount (lowest being $250/month and highest being $5000/month).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Plugins for easy feature development:&lt;/strong&gt; No plugins are available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Pricing:&lt;/strong&gt; Starts at $4, and has a relatively simple pricing policy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Twilio’s Limitations
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;SDKs are available only for web, iOS, and Android.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is not possible to stream video via RTMP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Devs must manually compose all recordings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Number of participants is limited to 50.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Jitsi: Open-Source Video Integration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Jitsi? What are its main offerings?
&lt;/h3&gt;

&lt;p&gt;Jitsi is a collection of open-source projects designed to help users build and implement secure video conferencing options. Among its offerings, &lt;strong&gt;Jitsi Meet&lt;/strong&gt; is best known for providing video conferencing services. Jitsi also comes with &lt;strong&gt;meet.jit.si&lt;/strong&gt; which hosts a free-for-use Jitsi Meet instance and the &lt;strong&gt;Jitsi Videobridge&lt;/strong&gt; which powers and sustains Jitsi’s multi-peer video features.&lt;/p&gt;

&lt;p&gt;Main Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jitsi is an open-source solution with impressive community support.&lt;/li&gt;
&lt;li&gt;Setup is relatively easy and comes with one-click installation.&lt;/li&gt;
&lt;li&gt;The process to set up video/audio calls and multi-user meeting rooms is quite user-friendly.&lt;/li&gt;
&lt;li&gt;Jitsi uses industry-standard physical, administrative, and technical shields to safeguard the confidentiality of users' personal data.&lt;/li&gt;
&lt;li&gt;Jitsi users have a decent choice of service providers across multiple geographies to choose from, in order to host the application locally.&lt;/li&gt;
&lt;li&gt;Jitsi supports all available clients (Windows, Linux, Mac, iOS, Android).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Jitsi’s Key Functionality Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Ease of integration&lt;/strong&gt;: Offers both pre-built and custom SDKs. Like Twilio, there are no predefined roles. Devs have to manually configure permissions for peers within a call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Error Handling:&lt;/strong&gt; Manual configuration is required to build bandwidth and connection management into Jitsi's low-level API. Jitsi also has SDKs for less customizable versions with some in-built connection management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scalability:&lt;/strong&gt; With the open-source SDK, Jitsi supports a maximum of 100 hosts in a call, and a maximum of 100 participants, including the hosts. The paid Jitsi SDK from 8x8 supports a maximum of 500 participants (including hosts) in a call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Cost of Support:&lt;/strong&gt; Users can ask questions to the Jitsi community. For paid support, they will have to approach &lt;a href="https://jaas.8x8.vc/#/"&gt;8x8&lt;/a&gt;, the company that acquired Jitsi in 2018.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Plugins for easy feature development:&lt;/strong&gt; Since Jitsi is fully DIY, plugins may not be available for most features. However, there are some open-source plugins devs can use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Pricing:&lt;/strong&gt; Largely free of charge for the open-source SDK. However, some costs will be involved for the deployment infrastructure. Users can expect that to be 40-50% of the cost of other paid providers.&lt;/p&gt;

&lt;p&gt;For the paid SDK from 8x8, prices are based on the number of monthly active users. For example, JaaS (Jitsi as a Service) Dev, supporting 25 monthly active users, is free; JaaS Basics, supporting 300 users, is $99/month; JaaS Standard, supporting 1500 users, is $499/month; and so on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Jitsi’s Limitations
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Given its open-source nature, video apps with Jitsi have to be built from scratch. Therefore, it takes a significant amount of time to build a stable product.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Zoom: Quick and Easy Video Conferencing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Zoom? What are its main offerings?
&lt;/h3&gt;

&lt;p&gt;Zoom is a cloud-based video conferencing app that offers SDKs for devs to implement audio-video communications and relevant capabilities within new or existing apps. Zoom SDKs are split into the Meeting SDK and the Video SDK.&lt;/p&gt;

&lt;p&gt;Zoom SDKs allow for the setup and integration of numerous features such as live chat, webinars, screen sharing, changing background, and multiple collaborative functions. Zoom Meeting SDKs are a solid option for anyone looking to include a slew of video communication features into their software ecosystem.&lt;/p&gt;

&lt;p&gt;Main features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zoom users can simply import libraries and packages for quick implementation of the Zoom meeting platform into applications.&lt;/li&gt;
&lt;li&gt;Zoom supports seven major languages and provides open translation extensibility which opens any app to international growth and improved user experience.&lt;/li&gt;
&lt;li&gt;The Zoom Video SDK comes with fully customizable UI features that developers can expand and modify according to the requirements of their app.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Zoom’s Key Functionality Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Ease of integration:&lt;/strong&gt; Zoom offers two predefined roles: host and participants. Permissions for these roles cannot be modified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Error Handling:&lt;/strong&gt; Zoom SDKs come with in-built error handling and bandwidth management, as built into their consumer offering. Developers have to handle only minimal reconnection/bandwidth management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scalability:&lt;/strong&gt; Zoom supports a maximum of 300 hosts in a call and a maximum of 1000 participants including the hosts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Cost of Support:&lt;/strong&gt; Zoom offers &lt;a href="https://explore.zoom.us/docs/en-us/support-plans.html"&gt;three customer support plans&lt;/a&gt;: Access, Premier, and Premier+. Pricing for these plans has to be obtained by contacting Zoom.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Plugins for easy feature development:&lt;/strong&gt; Currently, Zoom does not have a plugin marketplace for devs to use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Pricing:&lt;/strong&gt; Base pricing starts at $3.99. Zoom has a fairly straightforward, uncomplicated pricing policy.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Zoom’s Limitations&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Zoom is best suited for basic use cases, as it only allows the use of predetermined roles: host and participant. For use cases that require modified permissions for peers, Zoom may pose difficulties.&lt;/li&gt;
&lt;li&gt;The SDK’s footprint size is inordinately large.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  100ms: Pre-built, low-code audio-video templates
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is 100ms? What are its main offerings?
&lt;/h3&gt;

&lt;p&gt;100ms provides live audio-video SDKs that enable devs to add powerful, extensible, scalable, and resilient audio-video features into their apps with half a dozen lines of code. It abstracts the business logic of the conference room into &lt;a href="https://www.100ms.live/docs/javascript/v2/foundation/templates-and-roles"&gt;Templates and Roles&lt;/a&gt;. The client-side SDKs handle all edge cases within the SDK rather than leaving them to the application side.&lt;/p&gt;

&lt;p&gt;The solution has been built by the team that powered live video infrastructure for some of the world’s largest live events, running billions of minutes a day, at &lt;strong&gt;Facebook&lt;/strong&gt; and &lt;strong&gt;Disney+Hotstar&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Main Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The 100ms SDK provides pre-built templates to build virtual events, audio rooms, classrooms, and a wide range of use-cases with a few lines of code.&lt;/li&gt;
&lt;li&gt;The SDK is comprehensive, meaning that any piece of code which multiple application developers have to write repeatedly exists in the infrastructure layer.&lt;/li&gt;
&lt;li&gt;By virtue of room templates, the business logic remains on the server-side rather than being baked into client apps.&lt;/li&gt;
&lt;li&gt;The SDK is fully customizable and designed to be modified as required by any app’s UI.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  100ms’ Key Functionality Metrics
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Ease of integration:&lt;/strong&gt; 100ms SDKs are fully customizable. The publish-subscribe logic has been abstracted into the concept of &lt;a href="https://www.100ms.live/docs/javascript/v2/foundation/templates-and-roles"&gt;roles&lt;/a&gt;. Pre-defined roles are available, and users can customize roles as required with zero coding, right on the 100ms dashboard.&lt;/p&gt;

&lt;p&gt;Devs can use roles to build complex video applications with minimal coding. All the code resides on the SDK side. Write a few lines of code on the application side and go live.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Error Handling:&lt;/strong&gt; Pre-built disconnection handling is available in 100ms. Devs also benefit from edge case handling like granular device capture errors, in-built network degradation handling, automatic lowest latency server choice, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scalability:&lt;/strong&gt; 100ms supports up to 10,000 participants and up to 100 hosts on regular video calls. It provides a single switch between WebRTC and HLS, which means that video calls can be streamed to millions via HLS with one click.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Cost of Support&lt;/strong&gt;: No extra cost is charged for support. Customers can access support via private Slack channels as well as Discord.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Plugins for easy feature development:&lt;/strong&gt; With the 100ms SDK, the most relevant features like whiteboard, hand-raising, media player, chat, and screen share are provided out of the box. Customers also get RTMP Streaming, HLS, and recording capabilities out of the box. Therefore, too many plugins may not be necessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Pricing:&lt;/strong&gt; Base pricing starts at $4. Simple, easy-to-understand pricing policy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;100ms’ Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;No simulcast available.&lt;/li&gt;
&lt;li&gt;No hosting servers in ANZ and South America.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>webrtc</category>
      <category>cpaas</category>
      <category>videoconferencing</category>
      <category>100ms</category>
    </item>
    <item>
      <title>A Comprehensive Guide to Flutter-WebRTC</title>
      <dc:creator>Nilay Jayswal</dc:creator>
      <pubDate>Mon, 21 Mar 2022 12:57:35 +0000</pubDate>
      <link>https://dev.to/100mslive/a-comprehensive-guide-to-flutter-webrtc-2od7</link>
      <guid>https://dev.to/100mslive/a-comprehensive-guide-to-flutter-webrtc-2od7</guid>
      <description>&lt;p&gt;So, you want to establish real-time audio and video in your Flutter app. This is a common requirement for app developers, especially post-pandemic, since everyone wants to interact online almost as easily as they do in real life. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This content was originally published &lt;a href="https://www.100ms.live/blog/flutter-webrtc"&gt;here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the most effective ways to go about this is to use WebRTC. &lt;/p&gt;

&lt;p&gt;This article will demonstrate how to use WebRTC and implement real-time audio-video communication in a Flutter app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This piece assumes that you are already familiar with app development in Flutter and will focus on WebRTC integration with a Flutter app. &lt;/p&gt;

&lt;h2&gt;
  
  
  How to build a Flutter WebRTC app
&lt;/h2&gt;

&lt;p&gt;We’ll start by building a new Flutter project and then go deeper into how WebRTC works. But first, let’s take a moment to answer the question: “&lt;strong&gt;What is WebRTC?&lt;/strong&gt;”&lt;/p&gt;

&lt;p&gt;WebRTC is an open specification that enables real-time, audio-video communication between websites and devices. It comprises networking, audio, and video components standardized by the &lt;a href="https://www.ietf.org/"&gt;Internet Engineering Task Force&lt;/a&gt; and the &lt;a href="https://www.w3.org/"&gt;World Wide Web Consortium&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Simply put, WebRTC is an open-source project that allows direct P2P communication without installing additional programs or plugins. It can be used on all modern browsers to establish peer-to-peer communication and can also be embedded into native applications using available libraries.&lt;/p&gt;

&lt;p&gt;We’ll discuss WebRTC in more detail later in the article. For now, let’s start building a Flutter WebRTC app. &lt;/p&gt;

&lt;p&gt;First of all, let’s create a new Flutter project&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flutter create webrtc_flutter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ll refactor some code and get rid of the comments. &lt;/p&gt;

&lt;p&gt;The first step is to add the Flutter WebRTC plugin based on Google WebRTC. Using this, we can develop the app in Flutter for mobile, desktop, and the web.&lt;/p&gt;

&lt;p&gt;Run the following command in the terminal to add &lt;code&gt;flutter_webrtc&lt;/code&gt; as a dependency in your &lt;code&gt;pubspec.yaml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flutter pub add flutter_webrtc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Access Camera and Microphone on iOS
&lt;/h2&gt;

&lt;p&gt;Add the following permission entry to your &lt;code&gt;Info.plist&lt;/code&gt; file, located in &lt;code&gt;&amp;lt;project root&amp;gt;/ios/Runner/Info.plist&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;key&amp;gt;&lt;/span&gt;NSCameraUsageDescription&lt;span class="nt"&gt;&amp;lt;/key&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;string&amp;gt;&lt;/span&gt;$(PRODUCT_NAME) Camera Usage!&lt;span class="nt"&gt;&amp;lt;/string&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;key&amp;gt;&lt;/span&gt;NSMicrophoneUsageDescription&lt;span class="nt"&gt;&amp;lt;/key&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;string&amp;gt;&lt;/span&gt;$(PRODUCT_NAME) Microphone Usage!&lt;span class="nt"&gt;&amp;lt;/string&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This entry allows your app to access the device’s camera and microphone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Android Manifest File Changes
&lt;/h2&gt;

&lt;p&gt;To enable user permissions on Android, add the following to the &lt;strong&gt;Android Manifest&lt;/strong&gt; file, located in &lt;code&gt;&amp;lt;project root&amp;gt;/android/app/src/main/AndroidManifest.xml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;uses-feature&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.hardware.camera"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;uses-feature&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.hardware.camera.autofocus"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;uses-permission&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.permission.CAMERA"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;uses-permission&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.permission.RECORD_AUDIO"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;uses-permission&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.permission.ACCESS_NETWORK_STATE"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;uses-permission&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.permission.CHANGE_NETWORK_STATE"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;uses-permission&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.permission.MODIFY_AUDIO_SETTINGS"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following if you wish to use a Bluetooth device:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;uses-permission&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.permission.BLUETOOTH"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;uses-permission&lt;/span&gt; &lt;span class="na"&gt;android:name=&lt;/span&gt;&lt;span class="s"&gt;"android.permission.BLUETOOTH_ADMIN"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  build.gradle Changes
&lt;/h2&gt;

&lt;p&gt;Currently, the official WebRTC jar uses static methods in the &lt;code&gt;EglBase&lt;/code&gt; interface, so you will need to set your build settings to Java 8. To do so, add the following to the app-level &lt;code&gt;build.gradle&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;android {
    //...
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the same &lt;code&gt;build.gradle&lt;/code&gt;, you may also need to increase the &lt;code&gt;minSdkVersion&lt;/code&gt; in &lt;code&gt;defaultConfig&lt;/code&gt; to 23 (the default Flutter template currently sets it to 16).&lt;/p&gt;

&lt;p&gt;Our initial objective is to show the local user's video in the app. We’ll start from there and go on to connect to a remote user and establish a P2P connection using WebRTC. &lt;/p&gt;

&lt;p&gt;Let’s start by writing the Dart code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rendering a Local User
&lt;/h2&gt;

&lt;p&gt;Inside the stateful &lt;code&gt;MyHomePage()&lt;/code&gt; widget, we’ll initialize a &lt;code&gt;_localVideoRenderer&lt;/code&gt; for this purpose. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;RTCVideoRenderer&lt;/code&gt; lets us play video frames obtained from the WebRTC video track. Depending on the video track source, it can play videos from a local peer or a remote one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;_localVideoRenderer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;RTCVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="n"&gt;initRenderers&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_localVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;initialize&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;initRenderers&lt;/code&gt; function should now be called in the &lt;code&gt;initState()&lt;/code&gt; of the stateful widget.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="nd"&gt;@override&lt;/span&gt;
  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="n"&gt;initState&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;initRenderers&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;initState&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The renderer also needs to be disposed of. Disposing of it stops the video and releases the resources associated with it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="nd"&gt;@override&lt;/span&gt;
  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="n"&gt;dispose&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_localVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;dispose&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;dispose&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to get the user media as a stream. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;getUserMedia()&lt;/code&gt; function will prompt the user to allow an input device to be used. The selected device will generate a MediaStream with the requested media types: an audio track from a source such as a microphone, and a video track from a camera, recording device, etc. &lt;/p&gt;

&lt;p&gt;Next, we create a new function named &lt;code&gt;_getUserMedia()&lt;/code&gt; and call it in &lt;code&gt;initState()&lt;/code&gt; as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;_getUserMedia&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kt"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;dynamic&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;mediaConstraints&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s"&gt;'audio'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="s"&gt;'video'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;'facingMode'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'user'&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;};&lt;/span&gt;

    &lt;span class="n"&gt;MediaStream&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;navigator&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;mediaDevices&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getUserMedia&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mediaConstraints&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;_localVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;srcObject&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="nd"&gt;@override&lt;/span&gt;
  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="n"&gt;initState&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;initRenderers&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;_getUserMedia&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;initState&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last step is to complete our initial objective: use this local renderer in the UI to display the user’s video.&lt;/p&gt;

&lt;p&gt;Let us modify the build method of the stateful widget as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="nd"&gt;@override&lt;/span&gt;
  &lt;span class="n"&gt;Widget&lt;/span&gt; &lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BuildContext&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Scaffold&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;appBar:&lt;/span&gt; &lt;span class="n"&gt;AppBar&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;title:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;widget&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
      &lt;span class="o"&gt;),&lt;/span&gt;
      &lt;span class="nl"&gt;body:&lt;/span&gt; &lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
          &lt;span class="n"&gt;Positioned&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
              &lt;span class="nl"&gt;top:&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
              &lt;span class="nl"&gt;right:&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
              &lt;span class="nl"&gt;left:&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
              &lt;span class="nl"&gt;bottom:&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
              &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;RTCVideoView&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_localVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
        &lt;span class="o"&gt;],&lt;/span&gt;
      &lt;span class="o"&gt;),&lt;/span&gt;
    &lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that done, we can now run the app to check if the local video is being rendered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Flutter WebRTC demo
&lt;/h2&gt;

&lt;p&gt;When we first run the app, it will ask for camera and microphone permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ewtiwi0F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Flutter_Demo_Google_Chrome_3_14_2022_4_20_26_PM_892d6824f1/Flutter_Demo_Google_Chrome_3_14_2022_4_20_26_PM_892d6824f1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ewtiwi0F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Flutter_Demo_Google_Chrome_3_14_2022_4_20_26_PM_892d6824f1/Flutter_Demo_Google_Chrome_3_14_2022_4_20_26_PM_892d6824f1.png" alt="User permission popup" width="418" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once loading is complete, the following screen should show up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V_Luqbl0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Flutter_Demo_Google_Chrome_3_15_2022_8_03_08_AM_06517ceccc/Flutter_Demo_Google_Chrome_3_15_2022_8_03_08_AM_06517ceccc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V_Luqbl0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Flutter_Demo_Google_Chrome_3_15_2022_8_03_08_AM_06517ceccc/Flutter_Demo_Google_Chrome_3_15_2022_8_03_08_AM_06517ceccc.png" alt="Local video using WebRTC" width="880" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And now, you have successfully rendered the local video of the user on-screen. The next step is to render a remote user. But first, we need to understand connection and communication using WebRTC.&lt;/p&gt;

&lt;h2&gt;
  
  
  How WebRTC works
&lt;/h2&gt;

&lt;p&gt;WebRTC allows peer-to-peer communication over websites, apps, and devices, even though a peer may initially have no idea where the other peers are or how to connect or communicate with them.&lt;/p&gt;

&lt;p&gt;To establish a connection, WebRTC needs the clients to exchange metadata in order to coordinate communication, a process called signaling. Signaling is also required to bypass firewalls and work with network address translators (NATs).&lt;/p&gt;

&lt;p&gt;Let’s dive a little further into the process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NAT (Network Address Translation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All devices have a unique IP address. WebRTC uses this unique address to connect two peers. &lt;/p&gt;

&lt;p&gt;Due to the large number of connected devices in the modern world, the pool of unallocated &lt;strong&gt;IPv4&lt;/strong&gt; addresses has been depleting. This led to the deployment of a successor protocol, &lt;strong&gt;IPv6&lt;/strong&gt;. In the meantime, the move from classful network addressing to &lt;strong&gt;Classless Inter-Domain Routing&lt;/strong&gt; (CIDR) substantially delayed the exhaustion of &lt;strong&gt;IPv4&lt;/strong&gt; addresses.&lt;/p&gt;

&lt;p&gt;NAT, or Network Address Translation, also helps here. Instead of allocating a public address to every network device, it lets Internet service providers and enterprises hide entire private address spaces behind a single publicly routable IPv4 address on the Internet-facing interface of the main router.&lt;/p&gt;

&lt;p&gt;Because of NAT, a device effectively has two IP addresses. One is public, associated with the router, and visible from the outside. The other is a private address, visible only to devices connected to the same router.&lt;/p&gt;

&lt;p&gt;NAT mapping is what makes the connectivity of WebRTC possible. WebRTC leverages it to allow two peers in completely different subnets to communicate. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interactive Connectivity Establishment (ICE)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In practice, it is not possible to connect two peers with just their IP addresses. To establish a connection, we first need to use that unique address to locate the peer on the internet. We may also need a relay server for sharing media (if the router does not allow a direct P2P connection), and finally, we have to get through the firewall.&lt;/p&gt;

&lt;p&gt;A framework called &lt;strong&gt;Interactive Connectivity Establishment (ICE)&lt;/strong&gt; is used to solve this.&lt;/p&gt;

&lt;p&gt;ICE is used to find the optimum path for connecting the peers. It examines the different ways to do so and chooses the best one.&lt;/p&gt;

&lt;h2&gt;
  
  
  About the Flutter WebRTC Server
&lt;/h2&gt;

&lt;p&gt;The ICE protocol tries to connect using the host address obtained from a device’s OS and network card. If that fails (which it will for devices behind NATs), ICE tries to get an external address using a &lt;strong&gt;STUN server&lt;/strong&gt;. If that fails, it uses a &lt;strong&gt;TURN relay server&lt;/strong&gt; to route traffic. Fundamentally, a TURN server is a STUN server with additional built-in relaying functionality.&lt;/p&gt;

&lt;p&gt;But, let’s look a bit further into them. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;STUN (Session Traversal Utilities for NAT)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An endpoint under a NAT with a local address is not reachable for other endpoints outside the local network. Hence a connection cannot be established. When this occurs, the endpoint can request a public IP address from a STUN server. Other endpoints can use this publicly reachable IP to establish a connection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TURN (Traversal Using Relays around NAT)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the name suggests, a TURN server is used as a relay, or intermediate server, to exchange data. An endpoint behind a symmetric NAT that contacts such a server on the public internet to establish a connection with another endpoint is called a TURN client. &lt;/p&gt;

&lt;p&gt;A disadvantage of using a TURN server is that it is required throughout the whole session, unlike a STUN server, which is no longer needed once the connection is established. Therefore, ICE uses STUN by default.&lt;/p&gt;

&lt;p&gt;In other words, a STUN server is used to get an external network address. TURN servers are used to relay traffic if a direct (peer-to-peer) connection fails.&lt;/p&gt;
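&lt;p&gt;To make this concrete, here is a sketch of what an ICE server configuration with a TURN fallback could look like in flutter_webrtc. The TURN URL and credentials below are placeholders, not a real server; substitute your own deployment (for example, a coturn instance).&lt;/p&gt;

```dart
// A sketch of an ICE server configuration that adds a TURN fallback.
// The TURN entry uses placeholder values; replace them with your own
// server and credentials in a real app.
final configuration = {
  'iceServers': [
    // Public STUN server used for address discovery.
    {'url': 'stun:stun.l.google.com:19302'},
    // Hypothetical TURN relay, used only if a direct path fails.
    {
      'url': 'turn:turn.example.com:3478',
      'username': 'user',
      'credential': 'secret',
    },
  ]
};
```

&lt;p&gt;With both entries present, ICE will try the STUN-derived candidates first and only fall back to relaying media through the TURN server when a direct connection cannot be established.&lt;/p&gt;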

&lt;h2&gt;
  
  
  Connecting to a Remote User
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eyxwKToO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Untitled_design_7_9433a895b3/Untitled_design_7_9433a895b3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eyxwKToO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Untitled_design_7_9433a895b3/Untitled_design_7_9433a895b3.png" alt="Untitled design (7).png" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To connect to a remote user, we’d have to make an offer from a local to a remote user and receive an answer to establish a connection.&lt;/p&gt;

&lt;p&gt;First, we ascertain media conditions, such as resolution and codec capabilities, locally. The metadata obtained will be used for the offer-and-answer mechanism. We’ll also get potential network addresses, known as candidates, for the app’s host.&lt;/p&gt;

&lt;p&gt;This local data then needs to be shared with the remote user. Following this, the steps below are executed to initiate a connection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The local user creates an offer, which is essentially an SDP session description.&lt;/li&gt;
&lt;li&gt;This offer is stringified and sent over to a remote user.&lt;/li&gt;
&lt;li&gt;The remote user sets its remote description to the obtained offer and sends back an answer.&lt;/li&gt;
&lt;li&gt;The answer is used to set the remote description of the local user.&lt;/li&gt;
&lt;li&gt;Connection is established between the local and the remote user.&lt;/li&gt;
&lt;/ul&gt;
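&lt;p&gt;The steps above can be sketched with the flutter_webrtc API roughly as follows. The &lt;code&gt;sendToRemote&lt;/code&gt; and &lt;code&gt;sendToLocal&lt;/code&gt; helpers are hypothetical stand-ins for whatever signaling channel you use to move the SDP strings between the two peers.&lt;/p&gt;

```dart
import 'package:flutter_webrtc/flutter_webrtc.dart';

// Hypothetical signaling helpers: in a real app these would be, say,
// websocket or HTTP calls to your signaling server.
void sendToRemote(String? sdp) => print(sdp);
void sendToLocal(String? sdp) => print(sdp);

// Caller side: create the offer and set it as the local description.
Future makeOffer(RTCPeerConnection pc) async {
  final offer = await pc.createOffer({});
  await pc.setLocalDescription(offer);
  sendToRemote(offer.sdp); // the stringified SDP travels over signaling
}

// Callee side: apply the received offer, then create and send back an answer.
Future makeAnswer(RTCPeerConnection pc, String offerSdp) async {
  await pc.setRemoteDescription(RTCSessionDescription(offerSdp, 'offer'));
  final answer = await pc.createAnswer({});
  await pc.setLocalDescription(answer);
  sendToLocal(answer.sdp);
}

// Caller side again: apply the answer; ICE can now connect the peers.
Future applyAnswer(RTCPeerConnection pc, String answerSdp) async {
  await pc.setRemoteDescription(RTCSessionDescription(answerSdp, 'answer'));
}
```

&lt;p&gt;This is only a sketch of the exchange, assuming an &lt;code&gt;RTCPeerConnection&lt;/code&gt; already exists on each side; the rest of the tutorial builds the same flow step by step.&lt;/p&gt;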

&lt;p&gt;Networking information also needs to be exchanged between the two parties. The ICE framework is used to find network interfaces and ports.&lt;/p&gt;

&lt;p&gt;As we go along with the tutorial, we’ll understand this part better.&lt;/p&gt;

&lt;p&gt;Let’s start the process by modifying the code in the build method of the stateful widget to create a UI for the remote user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="nd"&gt;@override&lt;/span&gt;
  &lt;span class="n"&gt;Widget&lt;/span&gt; &lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BuildContext&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Scaffold&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;appBar:&lt;/span&gt; &lt;span class="n"&gt;AppBar&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
          &lt;span class="nl"&gt;title:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;widget&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
        &lt;span class="o"&gt;),&lt;/span&gt;
        &lt;span class="nl"&gt;body:&lt;/span&gt; &lt;span class="n"&gt;Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
          &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
            &lt;span class="n"&gt;videoRenderers&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
          &lt;span class="o"&gt;],&lt;/span&gt;
        &lt;span class="o"&gt;));&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;videoRenderers()&lt;/code&gt; function renders the media of both the local and the remote user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;SizedBox&lt;/span&gt; &lt;span class="nf"&gt;videoRenderers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;SizedBox&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;height:&lt;/span&gt; &lt;span class="mi"&gt;210&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
        &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
          &lt;span class="n"&gt;Flexible&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Container&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
              &lt;span class="nl"&gt;key:&lt;/span&gt; &lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'local'&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
              &lt;span class="nl"&gt;margin:&lt;/span&gt; &lt;span class="n"&gt;EdgeInsets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fromLTRB&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
              &lt;span class="nl"&gt;decoration:&lt;/span&gt; &lt;span class="n"&gt;BoxDecoration&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;color:&lt;/span&gt; &lt;span class="n"&gt;Colors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;black&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
              &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;RTCVideoView&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_localVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
            &lt;span class="o"&gt;),&lt;/span&gt;
          &lt;span class="o"&gt;),&lt;/span&gt;
          &lt;span class="n"&gt;Flexible&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Container&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
              &lt;span class="nl"&gt;key:&lt;/span&gt; &lt;span class="n"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'remote'&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
              &lt;span class="nl"&gt;margin:&lt;/span&gt; &lt;span class="n"&gt;EdgeInsets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fromLTRB&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
              &lt;span class="nl"&gt;decoration:&lt;/span&gt; &lt;span class="n"&gt;BoxDecoration&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;color:&lt;/span&gt; &lt;span class="n"&gt;Colors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;black&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
              &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;RTCVideoView&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_remoteVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
            &lt;span class="o"&gt;),&lt;/span&gt;
          &lt;span class="o"&gt;),&lt;/span&gt;
        &lt;span class="o"&gt;]),&lt;/span&gt;
      &lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize a &lt;code&gt;_remoteVideoRenderer&lt;/code&gt; just as we did the &lt;code&gt;_localVideoRenderer&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;_remoteVideoRenderer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;RTCVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need a few local variables and a &lt;code&gt;TextEditingController&lt;/code&gt; for later use.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="n"&gt;sdpController&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;TextEditingController&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="kt"&gt;bool&lt;/span&gt; &lt;span class="n"&gt;_offer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;RTCPeerConnection&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="n"&gt;_peerConnection&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="n"&gt;MediaStream&lt;/span&gt;&lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="n"&gt;_localStream&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The &lt;code&gt;TextEditingController&lt;/code&gt; also needs to be disposed of. Therefore, we modify the &lt;code&gt;dispose()&lt;/code&gt; of the stateful widget as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;  &lt;span class="nd"&gt;@override&lt;/span&gt;
  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="n"&gt;dispose&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_localVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;dispose&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;sdpController&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;dispose&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;dispose&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To connect with the peer, we create a new function named &lt;code&gt;_createPeerConnection()&lt;/code&gt;. Inside this function, we’ll first add the configuration as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kt"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;dynamic&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;configuration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s"&gt;"iceServers"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
        &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"url"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"stun:stun.l.google.com:19302"&lt;/span&gt;&lt;span class="o"&gt;},&lt;/span&gt;
      &lt;span class="o"&gt;]&lt;/span&gt;
    &lt;span class="o"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration has the URL to the STUN server. In our example, we’ve used a free one made available by Google. You can check out &lt;a href="https://gist.github.com/mondain/b0ec1cf5f60ae726202e"&gt;a list of options for STUN servers here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we add the SDP constraints, with the mandatory &lt;code&gt;OfferToReceiveAudio&lt;/code&gt; and &lt;code&gt;OfferToReceiveVideo&lt;/code&gt; options set to &lt;code&gt;true&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="kt"&gt;Map&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kt"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kd"&gt;dynamic&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;offerSdpConstraints&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="s"&gt;"mandatory"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"OfferToReceiveAudio"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
        &lt;span class="s"&gt;"OfferToReceiveVideo"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
      &lt;span class="o"&gt;},&lt;/span&gt;
      &lt;span class="s"&gt;"optional"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="o"&gt;[],&lt;/span&gt;
    &lt;span class="o"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need a &lt;code&gt;late&lt;/code&gt; variable named &lt;code&gt;_localStream&lt;/code&gt; of type &lt;strong&gt;MediaStream&lt;/strong&gt; inside our widget (this replaces the nullable &lt;code&gt;MediaStream?&lt;/code&gt; declaration from earlier).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="kd"&gt;late&lt;/span&gt; &lt;span class="n"&gt;MediaStream&lt;/span&gt; &lt;span class="n"&gt;_localStream&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assign it the stream returned by the &lt;code&gt;_getUserMedia()&lt;/code&gt; function we created earlier (note that &lt;code&gt;_getUserMedia()&lt;/code&gt; must be updated to return the &lt;code&gt;MediaStream&lt;/code&gt; it obtains, rather than only setting &lt;code&gt;srcObject&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt; &lt;span class="n"&gt;_localStream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_getUserMedia&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that done, we create an RTC peer connection with the configuration and SDP constraints as parameters and then add the local stream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;    &lt;span class="n"&gt;RTCPeerConnection&lt;/span&gt; &lt;span class="n"&gt;pc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;createPeerConnection&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;offerSdpConstraints&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;addStream&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_localStream&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;onIceCandidate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;candidate&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;encode&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;
          &lt;span class="s"&gt;'candidate'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;candidate&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
          &lt;span class="s"&gt;'sdpMid'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sdpMid&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
          &lt;span class="s"&gt;'sdpMlineIndex'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sdpMLineIndex&lt;/span&gt;
        &lt;span class="o"&gt;}));&lt;/span&gt;
      &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;};&lt;/span&gt;

    &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;onIceConnectionState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;};&lt;/span&gt;

    &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;onAddStream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'addStream: '&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
      &lt;span class="n"&gt;_remoteVideoRenderer&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;srcObject&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;};&lt;/span&gt;

 &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;pc&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lastly, in the &lt;code&gt;onAddStream&lt;/code&gt; callback we set the &lt;code&gt;_remoteVideoRenderer&lt;/code&gt; source object to the incoming stream, and the helper returns &lt;code&gt;pc&lt;/code&gt;.&lt;/p&gt;
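
&lt;p&gt;For context, a connection helper like this is typically wired up when the widget initializes. The sketch below is illustrative only: the &lt;code&gt;_createPeerConnection&lt;/code&gt; name and the &lt;code&gt;initRenderers()&lt;/code&gt; call are assumptions consistent with the snippets in this article, not code shown above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;  // Sketch only: the helper and field names here are assumed, not shown earlier.
  @override
  void initState() {
    initRenderers(); // sets up _localVideoRenderer and _remoteVideoRenderer
    _createPeerConnection().then((pc) {
      _peerConnection = pc; // stored for _createOffer, _createAnswer, etc.
    });
    super.initState();
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;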

&lt;p&gt;Moving ahead, we write a &lt;code&gt;_createOffer&lt;/code&gt; function that generates an offer and sets it as the local description. It also sets the &lt;code&gt;_offer&lt;/code&gt; flag, which we use later to tell whether the pasted SDP is an offer or an answer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;_createOffer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;RTCSessionDescription&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_peerConnection&lt;/span&gt;&lt;span class="o"&gt;!.&lt;/span&gt;&lt;span class="na"&gt;createOffer&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;&lt;span class="s"&gt;'offerToReceiveVideo'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;});&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sdp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
    &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;encode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
    &lt;span class="n"&gt;_offer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="n"&gt;_peerConnection&lt;/span&gt;&lt;span class="o"&gt;!.&lt;/span&gt;&lt;span class="na"&gt;setLocalDescription&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similarly, we create a &lt;code&gt;_createAnswer&lt;/code&gt; function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;_createAnswer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;RTCSessionDescription&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_peerConnection&lt;/span&gt;&lt;span class="o"&gt;!.&lt;/span&gt;&lt;span class="na"&gt;createAnswer&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;&lt;span class="s"&gt;'offerToReceiveVideo'&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parse&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sdp&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toString&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
    &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;encode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;

    &lt;span class="n"&gt;_peerConnection&lt;/span&gt;&lt;span class="o"&gt;!.&lt;/span&gt;&lt;span class="na"&gt;setLocalDescription&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we add functions to set the remote description and to add an ICE candidate. Note that &lt;code&gt;_setRemoteDescription&lt;/code&gt; uses the &lt;code&gt;_offer&lt;/code&gt; flag to decide whether the pasted SDP should be treated as an answer or an offer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;_setRemoteDescription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;jsonString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sdpController&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;dynamic&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;jsonDecode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;$jsonString&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;sdp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;write&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;RTCSessionDescription&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;RTCSessionDescription&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sdp&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_offer&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="s"&gt;'answer'&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;'offer'&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toMap&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_peerConnection&lt;/span&gt;&lt;span class="o"&gt;!.&lt;/span&gt;&lt;span class="na"&gt;setRemoteDescription&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

  &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="n"&gt;_addCandidate&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="kd"&gt;async&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;String&lt;/span&gt; &lt;span class="n"&gt;jsonString&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sdpController&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;dynamic&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;jsonDecode&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;$jsonString&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;print&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'candidate'&lt;/span&gt;&lt;span class="o"&gt;]);&lt;/span&gt;
    &lt;span class="kd"&gt;dynamic&lt;/span&gt; &lt;span class="n"&gt;candidate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;RTCIceCandidate&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'candidate'&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'sdpMid'&lt;/span&gt;&lt;span class="o"&gt;],&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'sdpMlineIndex'&lt;/span&gt;&lt;span class="o"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_peerConnection&lt;/span&gt;&lt;span class="o"&gt;!.&lt;/span&gt;&lt;span class="na"&gt;addCandidate&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;candidate&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only thing left is a user interface to drive these functions. We make a basic layout using a &lt;code&gt;Row&lt;/code&gt; and a &lt;code&gt;Column&lt;/code&gt; to arrange a &lt;code&gt;TextField&lt;/code&gt; for input alongside the buttons.&lt;/p&gt;

&lt;p&gt;Modify the build function as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="nd"&gt;@override&lt;/span&gt;
  &lt;span class="n"&gt;Widget&lt;/span&gt; &lt;span class="n"&gt;build&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BuildContext&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Scaffold&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;appBar:&lt;/span&gt; &lt;span class="n"&gt;AppBar&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
          &lt;span class="nl"&gt;title:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;widget&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
        &lt;span class="o"&gt;),&lt;/span&gt;
        &lt;span class="nl"&gt;body:&lt;/span&gt; &lt;span class="n"&gt;Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
          &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
            &lt;span class="n"&gt;videoRenderers&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
              &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
                &lt;span class="n"&gt;Padding&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                  &lt;span class="nl"&gt;padding:&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;EdgeInsets&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;all&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;16.0&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
                  &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;SizedBox&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                    &lt;span class="nl"&gt;width:&lt;/span&gt; &lt;span class="n"&gt;MediaQuery&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;width&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                    &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;TextField&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;controller:&lt;/span&gt; &lt;span class="n"&gt;sdpController&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                      &lt;span class="nl"&gt;keyboardType:&lt;/span&gt; &lt;span class="n"&gt;TextInputType&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;multiline&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                      &lt;span class="nl"&gt;maxLines:&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                      &lt;span class="nl"&gt;maxLength:&lt;/span&gt; &lt;span class="n"&gt;TextField&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;noMaxLength&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                  &lt;span class="o"&gt;),&lt;/span&gt;
                &lt;span class="o"&gt;),&lt;/span&gt;
                &lt;span class="n"&gt;Column&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                  &lt;span class="nl"&gt;crossAxisAlignment:&lt;/span&gt; &lt;span class="n"&gt;CrossAxisAlignment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;center&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
                    &lt;span class="n"&gt;ElevatedButton&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="n"&gt;_createOffer&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Offer"&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;SizedBox&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;height:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="n"&gt;ElevatedButton&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="n"&gt;_createAnswer&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Answer"&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;SizedBox&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;height:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="n"&gt;ElevatedButton&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="n"&gt;_setRemoteDescription&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Set Remote Description"&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;SizedBox&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;height:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="n"&gt;ElevatedButton&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
                      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="n"&gt;_addCandidate&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
                      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Set Candidate"&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt;
                    &lt;span class="o"&gt;),&lt;/span&gt;
                  &lt;span class="o"&gt;],&lt;/span&gt;
                &lt;span class="o"&gt;)&lt;/span&gt;
              &lt;span class="o"&gt;],&lt;/span&gt;
            &lt;span class="o"&gt;),&lt;/span&gt;
          &lt;span class="o"&gt;],&lt;/span&gt;
        &lt;span class="o"&gt;));&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll notice we have created &lt;code&gt;ElevatedButton&lt;/code&gt;s to generate an offer, generate an answer, set the remote description, and add a candidate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Flutter WebRTC Project Demo
&lt;/h2&gt;

&lt;p&gt;Now, we can go ahead and test our app features. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/adityathakurxd/webrtc_flutter"&gt;Refer to the complete code here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the Flutter project. It should show up as seen below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dabh-Im5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Flutter_Demo_Google_Chrome_3_15_2022_7_51_02_AM_43cbe24954/Flutter_Demo_Google_Chrome_3_15_2022_7_51_02_AM_43cbe24954.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dabh-Im5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/Flutter_Demo_Google_Chrome_3_15_2022_7_51_02_AM_43cbe24954/Flutter_Demo_Google_Chrome_3_15_2022_7_51_02_AM_43cbe24954.png" alt="Flutter web-app demo" width="817" height="987"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can copy the URL and paste it into a new browser window to create two peers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--enEsSTBO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_2_8bd399090f/2022_03_15_2_8bd399090f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--enEsSTBO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_2_8bd399090f/2022_03_15_2_8bd399090f.png" alt="P2P Flutter WebRTC demo" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the ‘&lt;strong&gt;Offer&lt;/strong&gt;’ button in window 1 to generate an offer. This prints the offer to the console, which you can open with Chrome’s developer tools.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PkiGhNoV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_4_487cd15f78/2022_03_15_4_487cd15f78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PkiGhNoV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_4_487cd15f78/2022_03_15_4_487cd15f78.png" alt="Sharing the offer" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy-paste this offer into the &lt;code&gt;TextField&lt;/code&gt; of window 2 and click on the ‘&lt;strong&gt;Set Remote Description&lt;/strong&gt;’ button. That should set the remote description by calling the underlying function.&lt;/p&gt;

&lt;p&gt;Now, click on ‘&lt;strong&gt;Answer&lt;/strong&gt;’ to generate an answer and copy-paste it in the &lt;code&gt;TextField&lt;/code&gt; of window 1.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dK4kGJ7T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_5_b0b81f2008/2022_03_15_5_b0b81f2008.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dK4kGJ7T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_5_b0b81f2008/2022_03_15_5_b0b81f2008.png" alt="Sharing the answer" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lastly, copy a candidate from window 2, paste it in the window 1 &lt;code&gt;TextField&lt;/code&gt;, and click on the ‘&lt;strong&gt;Set Candidate&lt;/strong&gt;’ button. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HzHh8DCn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_6_107dac8311/2022_03_15_6_107dac8311.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HzHh8DCn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_6_107dac8311/2022_03_15_6_107dac8311.png" alt="Sharing a candidate" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This should create the WebRTC connection, and the remote video should show up as seen below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LKI7ssE3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_7_a44efcd7ae/2022_03_15_7_a44efcd7ae.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LKI7ssE3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/2022_03_15_7_a44efcd7ae/2022_03_15_7_a44efcd7ae.png" alt="P2P done using WebRTC" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that’s it! We have successfully created an application using the WebRTC plugin for Flutter. Setting up a Flutter WebRTC app may seem a little complicated at first, but once you get the hang of it, it’s not too difficult.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using a Signaling Server
&lt;/h2&gt;

&lt;p&gt;You must’ve noticed that we had to manually copy and paste the offer, the answer, and a candidate between the two windows. In a real app, a signaling server handles this exchange and helps the parties establish a connection.&lt;/p&gt;

&lt;p&gt;A signaling server coordinates this exchange: before WebRTC clients can set up a call, they need to pass session descriptions and ICE candidates to each other, and signaling provides the channel for those messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T085SWID--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/NAT_8431527744/NAT_8431527744.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T085SWID--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://storage.googleapis.com/100ms-cms-dev/cms/NAT_8431527744/NAT_8431527744.png" alt="Why Signaling Server is required" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The exchanged information is then used to overcome challenges in connectivity. WebRTC uses the ICE framework to cope with NATs and firewalls, but the candidates, the offer, and the answer still have to travel between peers via signaling.&lt;/p&gt;
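
&lt;p&gt;To make this concrete, the manual copy-paste could be replaced by a small WebSocket exchange. The sketch below is illustrative only: the server URL and the &lt;code&gt;type&lt;/code&gt;-tagged message format are assumptions, not part of this article’s app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;import 'dart:convert';
import 'dart:io';

// Sketch: a WebSocket signaling channel replacing manual copy-paste.
// The URL and the message shape ('type' plus payload) are assumed.
late WebSocket _signaling;

void connectSignaling() async {
  _signaling = await WebSocket.connect('wss://example.com/signal');
  _signaling.listen((message) async {
    final data = jsonDecode(message);
    if (data['type'] == 'offer' || data['type'] == 'answer') {
      // Same logic as _setRemoteDescription, driven by the message type.
      await _peerConnection!.setRemoteDescription(
          RTCSessionDescription(data['sdp'], data['type']));
    } else if (data['type'] == 'candidate') {
      // Same logic as _addCandidate.
      await _peerConnection!.addCandidate(RTCIceCandidate(
          data['candidate'], data['sdpMid'], data['sdpMlineIndex']));
    }
  });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;On the sending side, the &lt;code&gt;onIceCandidate&lt;/code&gt; handler and the &lt;code&gt;_createOffer&lt;/code&gt;/&lt;code&gt;_createAnswer&lt;/code&gt; functions would call &lt;code&gt;_signaling.add(...)&lt;/code&gt; with the same JSON they currently print.&lt;/p&gt;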

&lt;p&gt;For our example, we demonstrated a simple Flutter app using a free STUN server and manual signaling: copy-pasting the offer, the answer, and a candidate. In production, you may want to set up your own WebRTC servers, but scaling and maintaining them can get tedious really fast.&lt;/p&gt;

&lt;p&gt;As a developer, I would not want to deal with the intricacies of server-side code. It’s extra effort, and I’d rather focus on what I do best: building UIs with Flutter and providing my users with a better experience. That is what 100ms helps me with.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrate WebRTC with Flutter via 100ms
&lt;/h2&gt;

&lt;p&gt;100ms makes it really easy to integrate audio/video into my Flutter app without sacrificing the customization that I’d have with WebRTC. I can continue building my apps without worrying about scalability or having to set up and manage servers of my own.&lt;/p&gt;

&lt;p&gt;The 100ms SDK helps make app development more efficient and requires far less coding effort than usual. If you’re curious, use the &lt;a href="https://pub.dev/packages/hmssdk_flutter"&gt;100ms plugin&lt;/a&gt; instead of the Flutter WebRTC plugin, and you’ll see how it makes your life easier. &lt;/p&gt;

&lt;p&gt;Or, you could also try this quickstart guide on &lt;a href="https://www.100ms.live/docs/flutter/v2/guides/quickstart"&gt;creating a Demo Flutter Project with the 100ms SDK&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.100ms.live/docs/flutter/v2/features/integration"&gt;100ms Flutter SDK Integration Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.100ms.live/docs/flutter/v2/foundation/basics"&gt;Basic Concepts about working with Flutter and 100ms&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.100ms.live/blog/omegle-clone-in-flutter"&gt;Build an Omegle clone in Flutter using 100ms&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.100ms.live/blog/zoom-clone-in-flutter"&gt;Build a Zoom clone in Flutter using 100ms&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.100ms.live/blog/building-clubhouse-clone-using-100ms-in-flutter"&gt;Build a Clubhouse clone in Flutter using 100ms&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>flutter</category>
      <category>webrtc</category>
    </item>
  </channel>
</rss>
