<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Braden Riggs</title>
    <description>The latest articles on DEV Community by Braden Riggs (@bradenriggs).</description>
    <link>https://dev.to/bradenriggs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F678867%2Fe63996ed-e2f6-41ce-b097-d10a9502525e.jpeg</url>
      <title>DEV Community: Braden Riggs</title>
      <link>https://dev.to/bradenriggs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bradenriggs"/>
    <language>en</language>
    <item>
      <title>How-to Broadcast a WebRTC stream to Twitch</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Tue, 01 Aug 2023 16:03:31 +0000</pubDate>
      <link>https://dev.to/dolbyio/how-to-broadcast-a-webrtc-stream-to-twitch-7fa</link>
      <guid>https://dev.to/dolbyio/how-to-broadcast-a-webrtc-stream-to-twitch-7fa</guid>
      <description>&lt;p&gt;Recently, while exploring &lt;a href="https://docs.dolby.io/streaming-apis/docs/webrtc-whip" rel="noopener noreferrer"&gt;syndicating Dolby.io WebRTC&lt;/a&gt; streams, I learned that &lt;a href="https://www.linkedin.com/posts/sean-dubois_twitch-activity-7053056800861933568-TTPW/" rel="noopener noreferrer"&gt;Twitch has added support for WebRTC Ingest&lt;/a&gt; or &lt;a href="https://datatracker.ietf.org/doc/draft-ietf-wish-whip/" rel="noopener noreferrer"&gt;WHIP&lt;/a&gt; as it is known in the industry.&lt;/p&gt;

&lt;p&gt;WebRTC is an exciting choice for streaming because it can decrease stream latency compared to traditional protocols such as RTMP and HLS. Keep in mind that once ingested, Twitch transmuxes the WebRTC stream into a format the platform supports (HLS), which adds some of that latency back to the feed.&lt;/p&gt;

&lt;p&gt;With that said, WHIP support is a great step for the community, and with OBS now adding support for WebRTC, I thought I'd try it out.&lt;/p&gt;

&lt;p&gt;In this guide, we'll showcase how to stream WebRTC from OBS into Twitch.&lt;/p&gt;

&lt;h2&gt;Setting up OBS for WebRTC&lt;/h2&gt;

&lt;p&gt;The core OBS project is working on adding WebRTC; however, at the moment it is only available as an experimental build. You can try this build by downloading the version relevant to your system &lt;a href="https://github.com/obsproject/obs-studio/actions/runs/5227109208" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once downloaded, extract the archive and install it.&lt;/p&gt;

&lt;h2&gt;Streaming WebRTC from OBS to Twitch&lt;/h2&gt;

&lt;p&gt;With the project installed and launched, navigate to: &lt;br&gt;
&lt;code&gt;Settings -&amp;gt; Stream&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Inside of &lt;code&gt;Stream&lt;/code&gt; select &lt;code&gt;WHIP&lt;/code&gt; as your service:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzziaxvtcx6wnwbpw75am.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzziaxvtcx6wnwbpw75am.png" alt="The WHIP settings in OBS for WebRTC streaming" width="800" height="620"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To start a WebRTC stream to Twitch you need the &lt;code&gt;Server&lt;/code&gt; path and your &lt;code&gt;Stream key&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;The Twitch WHIP server&lt;/h3&gt;

&lt;p&gt;The server is (&lt;em&gt;currently&lt;/em&gt;) the same for everyone:&lt;br&gt;
&lt;code&gt;https://g.webrtc.live-video.net:4443/v2/offer&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Note:&lt;/strong&gt; This server currently only supports H264 and Opus encoded streams.&lt;/em&gt;&lt;/p&gt;
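Under the hood, WHIP is just an HTTP POST of an SDP offer to the endpoint above, authenticated with your stream key as a Bearer token; OBS performs this handshake for you. As an illustrative sketch (the helper name and placeholder key below are my own, not part of OBS or Twitch):

```javascript
// Illustrative sketch of the WHIP handshake OBS performs on your behalf:
// an SDP offer is POSTed to the WHIP endpoint with the stream key sent
// as a Bearer token. buildWhipRequest and the key below are placeholders.
function buildWhipRequest(server, streamKey, sdpOffer) {
  return {
    url: server,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/sdp",
        Authorization: `Bearer ${streamKey}`,
      },
      body: sdpOffer,
    },
  };
}

const req = buildWhipRequest(
  "https://g.webrtc.live-video.net:4443/v2/offer",
  "live_000_placeholder", // your Twitch stream key
  "v=0\r\n..."            // SDP offer from an RTCPeerConnection
);
// The server replies with an SDP answer, completing the WebRTC setup.
console.log(req.options.headers.Authorization); // "Bearer live_000_placeholder"
```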

&lt;h3&gt;Getting Your Twitch Stream Key&lt;/h3&gt;

&lt;p&gt;Your Twitch Stream Key can be found on your &lt;a href="https://dashboard.twitch.tv/" rel="noopener noreferrer"&gt;dashboard&lt;/a&gt; once you've logged in, under:&lt;br&gt;
&lt;code&gt;settings -&amp;gt; stream&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22v9s5k3vdu0djamhb7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F22v9s5k3vdu0djamhb7p.png" alt="Your Twitch API Stream Key on the Dashboard" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy both the &lt;code&gt;Server&lt;/code&gt; URL and the &lt;code&gt;Stream Key&lt;/code&gt; into the &lt;code&gt;Server&lt;/code&gt; and &lt;code&gt;Bearer Token&lt;/code&gt; inputs within OBS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37isoynu2aofbef98eht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F37isoynu2aofbef98eht.png" alt="Twitch credentials added to OBS for WHIP streaming" width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;code&gt;Apply&lt;/code&gt;, set up OBS as usual, and click &lt;code&gt;Start Stream&lt;/code&gt; to begin your WebRTC broadcast to Twitch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugj1ya07nxf2dgi8qg5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fugj1ya07nxf2dgi8qg5h.png" alt="Braden Riggs broadcasting a WebRTC stream from OBS to Twitch" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Learn More&lt;/h3&gt;

&lt;p&gt;Broadcasting a WebRTC stream to Twitch is a great addition to the site, as it allows people to easily &lt;a href="https://docs.dolby.io/streaming-apis/docs/syndication" rel="noopener noreferrer"&gt;syndicate their WebRTC streams&lt;/a&gt; to a popular platform. Because Twitch transmuxes the WebRTC stream, some delay is added, so if you're looking for an end-to-end white-label real-time streaming solution, check out &lt;a href="https://dolby.io/products/real-time-streaming/" rel="noopener noreferrer"&gt;Dolby.io Real-time Streaming&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A special shout out to &lt;a href="https://www.linkedin.com/in/sean-dubois/" rel="noopener noreferrer"&gt;Sean DuBois&lt;/a&gt; for his work on both the OBS project and on Twitch's WHIP support.&lt;/p&gt;

</description>
      <category>webrtc</category>
      <category>twitch</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A Low-Latency Live Stream React App</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Mon, 03 Apr 2023 18:36:01 +0000</pubDate>
      <link>https://dev.to/dolbyio/a-low-latency-live-stream-react-app-53pj</link>
      <guid>https://dev.to/dolbyio/a-low-latency-live-stream-react-app-53pj</guid>
      <description>&lt;p&gt;&lt;a href="https://dolby.io/blog/a-low-latency-live-stream-react-app/" rel="noopener noreferrer"&gt;Original Article Published Here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When building a streaming app or platform it is important to consider how the end user experiences and engages with the content being streamed. If your users need to engage with the content creator, the delay between capture and consumption should be minimal. To achieve this, many developers rely on WebRTC, a real-time media transport technology that boasts exceptionally low delays for video and audio. By leveraging WebRTC, developers can quickly build a low-delay immersive experience, leaving plenty of time to make the UI look outstanding using front-end libraries such as ReactJS.&lt;/p&gt;

&lt;p&gt;In this guide, we're going to showcase a WebRTC ReactJS streaming app powered by &lt;a href="https://dolby.io/products/real-time-streaming/" rel="noopener noreferrer"&gt;Dolby.io Streaming&lt;/a&gt; and NodeJS. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxeu6637sxbkjwb7rvkvp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxeu6637sxbkjwb7rvkvp.jpg" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;The WebRTC React Example Code&lt;/h2&gt;

&lt;p&gt;The WebRTC React Streaming example app can be found on the &lt;a href="https://github.com/dolbyio-samples/rts-app-react-publisher-viewer" rel="noopener noreferrer"&gt;dolbyio-samples GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To set up the project you need four things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; A cloned &lt;a href="https://github.com/dolbyio-samples/rts-app-react-publisher-viewer" rel="noopener noreferrer"&gt;copy of the sample app&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt; &lt;a href="https://nodejs.org/en" rel="noopener noreferrer"&gt;Node v16 or greater&lt;/a&gt; installed.&lt;/li&gt;
&lt;li&gt; &lt;a href="https://yarnpkg.com/" rel="noopener noreferrer"&gt;Yarn package&lt;/a&gt; manager v1.22.19 or greater installed.&lt;/li&gt;
&lt;li&gt; &lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt;A Dolby.io account&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you've cloned the repo and set up Node and Yarn, navigate to the project's root directory in your terminal and run the following command to install all dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While your project is installing we can briefly discuss how to set up your Dolby.io account. Once you've &lt;a href="https://dashboard.dolby.io/signup/" rel="noopener noreferrer"&gt;created an account&lt;/a&gt; you'll be dropped off on the &lt;a href="https://streaming.dolby.io/#/tokens" rel="noopener noreferrer"&gt;Dolby.io Dashboard&lt;/a&gt; where you can create and manage tokens required for leveraging Dolby.io Streaming servers.&lt;/p&gt;

&lt;p&gt;Click the purple and white &lt;em&gt;+ Create&lt;/em&gt; button to create a new token. Give the token a label and your stream a unique name, then switch to the Advanced tab to enable "Multisource" as shown in the images below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxog78o77lf8rtcdm8ba8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxog78o77lf8rtcdm8ba8.png" width="330" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ogpgk84b9bjmy2h3ise.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ogpgk84b9bjmy2h3ise.png" width="333" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enabling &lt;a href="https://docs.dolby.io/streaming-apis/docs/multisource-streams" rel="noopener noreferrer"&gt;Multisource&lt;/a&gt; allows you to leverage Dolby.io Streaming to capture and deliver multiple low-delay streams at once. With your token created, click on it and gather your &lt;em&gt;stream name&lt;/em&gt;, &lt;em&gt;stream account ID&lt;/em&gt;, and &lt;em&gt;stream publishing token&lt;/em&gt;, as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48vlmkti8dq50tshwxor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48vlmkti8dq50tshwxor.png" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have all the credentials required to connect to the Dolby.io servers, let's update the project credentials. To do this, rename the &lt;code&gt;.env.example&lt;/code&gt; file found inside both &lt;code&gt;apps/publisher/&lt;/code&gt; and &lt;code&gt;apps/viewer/&lt;/code&gt; to &lt;code&gt;.env&lt;/code&gt;, and populate each file with the respective credentials.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprnq387yad19wogn2k45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprnq387yad19wogn2k45.png" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, &lt;code&gt;apps/publisher/.env.example&lt;/code&gt; contains a parameter for the viewer URL. For local testing this can be set to a &lt;a href="https://www.hostinger.com/tutorials/what-is-localhost" rel="noopener noreferrer"&gt;localhost URL&lt;/a&gt;; in production, it should be a web-accessible endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;VITE_RTS_VIEWER_BASE_URL=http://localhost:7070/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
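For reference, a fully populated publisher `.env` might look something like the fragment below. Apart from `VITE_RTS_VIEWER_BASE_URL`, the variable names shown are illustrative placeholders; copy the real keys from the repo's `.env.example` and fill them with the credentials gathered from your dashboard.

```ini
# Illustrative .env for apps/publisher/ — names other than
# VITE_RTS_VIEWER_BASE_URL are placeholders, not the repo's actual keys.
VITE_RTS_STREAM_NAME=yourStreamName
VITE_RTS_ACCOUNT_ID=yourAccountId
VITE_RTS_PUBLISHING_TOKEN=yourPublishingToken
VITE_RTS_VIEWER_BASE_URL=http://localhost:7070/
```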



&lt;p&gt;With all the credentials set up, we can now run the React streaming app. The app is split into two parts: a publisher and a viewer. The publisher app, which is what a content creator would use, serves content to the end user, who watches via the viewer app.&lt;/p&gt;

&lt;p&gt;To start the publisher app experience:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn nx serve publisher
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbj65ui56x47s4yoj0ug.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbj65ui56x47s4yoj0ug.jpg" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To start the viewer app experience:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn nx serve viewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9bygcdyt8a12zvoo9gi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe9bygcdyt8a12zvoo9gi.jpg" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With both the viewer and the publisher running we now have a live streaming app with Node.js and React powered by Dolby.io WebRTC Streaming. This experience can be &lt;a href="https://www.netlify.com/with/react/" rel="noopener noreferrer"&gt;deployed on a cloud service such as Netlify&lt;/a&gt; to share publicly, just remember to add your branding and styling.&lt;/p&gt;

&lt;h3&gt;Building your own React WebRTC streaming app&lt;/h3&gt;

&lt;p&gt;With your Dolby.io account already created, building your own bespoke viewer and publisher experience is easy. Dolby.io Streaming has a &lt;a href="https://docs.dolby.io/streaming-apis/docs/rn" rel="noopener noreferrer"&gt;React Native SDK&lt;/a&gt; that lets developers quickly and easily build a streaming solution. If you are interested in learning more about Dolby.io Streaming, check out some of our other blogs, including one on building a &lt;a href="https://dolby.io/blog/building-a-webrtc-live-stream-multiviewer-app/" rel="noopener noreferrer"&gt;Multiview web app&lt;/a&gt; and one on our &lt;a href="https://www.youtube.com/watch?v=jUP4vyzbu5Y" rel="noopener noreferrer"&gt;Dolby.io Streaming OBS integration&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Feedback or Questions? Reach out to the team on &lt;a href="https://twitter.com/DolbyIO?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/dolbyio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, or via our &lt;a href="https://www.millicast.com/contactus/" rel="noopener noreferrer"&gt;support desk&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>react</category>
      <category>webrtc</category>
      <category>javascript</category>
    </item>
    <item>
      <title>5 Different ways to Broadcast SRT Streams</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Mon, 09 Jan 2023 19:16:36 +0000</pubDate>
      <link>https://dev.to/dolbyio/4-different-ways-to-broadcast-srt-streams-21jj</link>
      <guid>https://dev.to/dolbyio/4-different-ways-to-broadcast-srt-streams-21jj</guid>
      <description>&lt;p&gt;&lt;a href="https://dolby.io/blog/broadcasting-srt-streams-with-dolby-io/" rel="noopener noreferrer"&gt;Originally published here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SRT, or Secure Reliable Transport, is a type of streaming protocol that provides enhanced security and reliability for video streaming. SRT is becoming increasingly popular among broadcasters and streamers, including industry stalwarts such as ESPN, because of its ability to deliver high-quality content over challenging network conditions and to &lt;a href="https://dolby.io/solutions/remote-production-remi/" rel="noopener noreferrer"&gt;make contribution and stream ingestion easy&lt;/a&gt;. SRT streams provide improved security, low latency, and flexibility, and the protocol is supported by a global community of developers contributing to the &lt;a href="https://github.com/Haivision/srt" rel="noopener noreferrer"&gt;open-source project&lt;/a&gt;. Because of the power of SRT streams, &lt;a href="https://dolby.io/products/real-time-streaming/" rel="noopener noreferrer"&gt;Dolby.io Real-Time Streaming&lt;/a&gt; has decided to launch support with an &lt;a href="https://docs.dolby.io/streaming-apis/docs/using-srt" rel="noopener noreferrer"&gt;SRT open beta program&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this guide, we'll cover a few different ways you can start broadcasting SRT streams with Dolby.io, including OBS, vMix, and more:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dolby.io/blog/broadcasting-srt-streams-with-dolby-io/#h-streaming-srt-with-obs" rel="noopener noreferrer"&gt;Streaming SRT with OBS&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dolby.io/blog/broadcasting-srt-streams-with-dolby-io/#streaming-srt-vmix" rel="noopener noreferrer"&gt;Streaming SRT with vMix&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dolby.io/blog/broadcasting-srt-streams-with-dolby-io/#srt-iphone" rel="noopener noreferrer"&gt;Streaming SRT with your iPhone&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dolby.io/blog/collaborative-post-production-with-avid-media-composer" rel="noopener noreferrer"&gt;Streaming SRT with Avid Media Composer&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dolby.io/blog/broadcasting-srt-streams-with-dolby-io/#srt-osprey" rel="noopener noreferrer"&gt;Streaming SRT Directly from an Osprey Talon Encoder &lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Streaming SRT with OBS&lt;/h2&gt;

&lt;p&gt;Readers familiar with the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; platform might know about &lt;a href="https://dolby.io/blog/using-webrtc-in-obs-for-remote-live-production/" rel="noopener noreferrer"&gt;our custom forked version of OBS&lt;/a&gt; designed to stream WebRTC natively. Although you can use our WebRTC-enabled OBS fork, you can actually publish SRT streams to the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; servers from the original OBS project. To do this you must have an &lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt;active Dolby.io account, which you can create for free&lt;/a&gt;, and the &lt;a href="https://obsproject.com/" rel="noopener noreferrer"&gt;latest version of OBS installed on your system&lt;/a&gt;. To start publishing SRT streams with OBS, follow the steps below:&lt;/p&gt;

&lt;p&gt;1. &lt;a href="https://dashboard.dolby.io/signin" rel="noopener noreferrer"&gt;Login&lt;/a&gt; or&lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt; create a Dolby.io account&lt;/a&gt; and &lt;a href="https://obsproject.com/" rel="noopener noreferrer"&gt;download OBS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;2. Navigate to your Dolby.io streaming dashboard and create a new token. You can leave all the token settings at their defaults.&lt;/p&gt;

&lt;p&gt;3. Open the API tab on your newly created token dashboard and navigate to the bottom, where you'll see the &lt;code&gt;SRT publish path&lt;/code&gt;, the &lt;code&gt;SRT stream ID&lt;/code&gt;, and the &lt;code&gt;SRT publish URL&lt;/code&gt;. Copy the &lt;code&gt;SRT publish URL&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fsrtbox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fsrtbox.png" alt="Pictured is a screenshot of Dolby.io Streaming Token API tab. Highlighted on screen in a red box is the SRT publish URL used in OBS." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Dolby.io Streaming Token API tab. Highlighted box indicates the SRT publish URL used in OBS.&lt;/p&gt;

&lt;p&gt;4. Open OBS and navigate to settings, then the &lt;code&gt;Stream&lt;/code&gt; tab.&lt;/p&gt;

&lt;p&gt;5. Inside of the &lt;code&gt;Stream&lt;/code&gt; tab, set &lt;code&gt;Service&lt;/code&gt; to &lt;code&gt;Custom&lt;/code&gt; and &lt;code&gt;Server&lt;/code&gt; to the &lt;code&gt;SRT publish URL&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fobssrt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fobssrt.jpg" alt="Pictured is a screenshot of the black and grey OBS stream settings page. On screen the Service is set to " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OBS stream settings page. Remember to set Service to "Custom" and Server to "Your SRT Publish URL".&lt;/p&gt;

&lt;p&gt;6. Apply the changes and exit settings. You are now all set up to stream with OBS. When publishing, your SRT stream will be delivered to the Dolby.io Streaming Viewer, which can be found at the Hosted Player Path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fhosted-player.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fhosted-player.png" alt="Pictured is a screenshot of the Doby.io Streaming Token API tab, with hosted player path highlighted in a red box. " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Dolby.io Streaming Token API tab, with hosted player path highlighted. Opening this path in a browser will launch the stream.&lt;/p&gt;

&lt;p&gt;Although the hosted player path is a great way to view the stream, you can use the &lt;a href="https://dolby.io/blog/building-a-low-latency-livestream-viewer-with-webrtc-millicast/" rel="noopener noreferrer"&gt;Dolby.io Streaming JavaScript SDK&lt;/a&gt; to build a bespoke solution.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: If you are using the &lt;code&gt;NVIDIA NVENC H.264&lt;/code&gt; encoder that comes included with OBS, you must set &lt;code&gt;Max B-Frames&lt;/code&gt; to &lt;code&gt;0&lt;/code&gt;. This setting can be found under Output, with Output Mode set to Advanced, in the Streaming tab, once the Encoder is set to &lt;code&gt;NVIDIA NVENC H.264&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2FB-frames.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2FB-frames.png" alt="If you are using the NVIDIA NVENC H.264 encoder that comes included with OBS you must set Max B-Frames to 0. Image depicts this fix in the settings which can be found in Output, then Advanced Output Mode, then the Streaming tab, where Encoder is set to NVIDIA NVENC H.264 and then Max B-frames is set to 0. Image depicts each of these settings highlighted in red boxes for clarity." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;/blockquote&gt;

&lt;h2&gt;Streaming SRT with vMix&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.vmix.com/" rel="noopener noreferrer"&gt;vMix&lt;/a&gt; is a paid windows-only remote production tool used for vision mixing. It allows users to juggle input and outputs for live broadcasts and productions and includes support for publishing SRT streams. To publish an SRT stream with vMix follow the steps below:&lt;/p&gt;

&lt;p&gt;1. &lt;a href="https://dashboard.dolby.io/signin" rel="noopener noreferrer"&gt;Login&lt;/a&gt; or&lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt; create a Dolby.io account&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;2. &lt;a href="https://www.vmix.com/" rel="noopener noreferrer"&gt;Download and open vMix&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;3. Navigate to your &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; streaming dashboard and create a new token. You can leave all the token settings at their defaults.&lt;/p&gt;

&lt;p&gt;4. Open the API tab on your newly created token dashboard and navigate to the bottom where you'll see the &lt;code&gt;SRT publish path&lt;/code&gt;, &lt;code&gt;SRT stream ID&lt;/code&gt;, and the &lt;code&gt;SRT publish URL&lt;/code&gt;. Copy the &lt;code&gt;SRT publish path&lt;/code&gt; and the &lt;code&gt;SRT stream ID&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;5. Inside of vMix open &lt;code&gt;settings&lt;/code&gt; and switch to &lt;code&gt;Output / NDI / SRT&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fvmix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fvmix.png" alt="Pictured is a screenshot of the vMix mixing stage. Highlighted in a red box is the settings users should click on." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The vMix mixing stage. Navigate to "Settings" and click on "Outputs / NDI / SRT" to open up the SRT settings menu.&lt;/p&gt;

&lt;p&gt;6. Once you've switched to &lt;code&gt;Output / NDI / SRT&lt;/code&gt;, click the gear icon next to an output source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fvmixsettings.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fvmixsettings.png" alt="Pictured is a screenshot of vMix settings with the SRT settings tab highlighted in red and the gear icon next to output 1 highlighted in red." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside the SRT settings, select the gear icon highlighted in red.&lt;/p&gt;

&lt;p&gt;7. Inside the output settings, &lt;code&gt;enable SRT&lt;/code&gt;, set the &lt;code&gt;Hostname&lt;/code&gt; to the Dolby.io Millicast endpoint, and set the &lt;code&gt;Port&lt;/code&gt; to the appropriate port (typically 10000). Additionally, include the &lt;code&gt;Stream ID&lt;/code&gt; and make sure the Quality settings stay within &lt;a href="https://dolby.io/blog/broadcasting-srt-streams-with-dolby-io/#srt-limits" rel="noopener noreferrer"&gt;the limitations of Dolby.io SRT streaming&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fsettings.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fsettings.png" alt="Pictured is a screenshot of the vMix Output 1 Outpub Settings with Enable SRT, Hostname, Port, StreamID, and Quality all highlighted in red boxes denoting their importance for creating a successful SRT stream." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When creating the SRT stream, define the Hostname, Port, Stream ID, and Quality.&lt;/p&gt;
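&lt;p&gt;For reference, the Hostname, Port, and Stream ID fields above correspond to the parts of a single SRT URL, which is how many other SRT tools express the same settings. A minimal sketch, where the hostname and stream ID values are hypothetical placeholders rather than real credentials:&lt;/p&gt;

```python
# Assemble an SRT publish URL from the fields entered in vMix.
# The hostname and stream ID below are hypothetical placeholders;
# use the values from your own Dolby.io dashboard.
host = "srt-auto.millicast.com"
port = 10000                       # the typical Dolby.io SRT port
stream_id = "your-srt-stream-id"   # copied from the dashboard API tab

url = f"srt://{host}:{port}?streamid={stream_id}"
print(url)  # srt://srt-auto.millicast.com:10000?streamid=your-srt-stream-id
```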

&lt;p&gt;8. Press &lt;code&gt;OK&lt;/code&gt; and exit settings. You are now all set up to stream with vMix. When streaming, your SRT stream will be delivered to the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Streaming Viewer, which can be found at the Hosted Player Path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fhosted-player-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fhosted-player-1.png" alt="Pictured is a screenshot of the Dolby.io Streaming Token API tab, with hosted player path highlighted in a red box." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Dolby.io Streaming Token API tab, with hosted player path highlighted. Opening this path in a browser will launch the stream.&lt;/p&gt;

&lt;p&gt;Although the hosted player path is a great way to view the stream, you can also use the &lt;a href="https://dolby.io/blog/building-a-low-latency-livestream-viewer-with-webrtc-millicast/" rel="noopener noreferrer"&gt;Dolby.io Streaming JavaScript SDK&lt;/a&gt; to build a bespoke viewing solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streaming SRT with your iPhone
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://softvelum.com/larix/" rel="noopener noreferrer"&gt;Softvelum's Larix Broadcaster&lt;/a&gt; is a tool available for iOS, Android, and React Native that allows you to push SRT streams directly from your mobile device. To set up a Larix SRT stream on an iOS device:&lt;/p&gt;

&lt;p&gt;1. &lt;a href="https://dashboard.dolby.io/signin" rel="noopener noreferrer"&gt;Log in&lt;/a&gt; or &lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt;create a Dolby.io account&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;2. Download the Larix Broadcaster from the App Store.&lt;/p&gt;

&lt;p&gt;3. Navigate to your &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; streaming dashboard and create a new token. You can leave all the token settings at their defaults.&lt;/p&gt;

&lt;p&gt;4. Open the API tab on your newly created token dashboard and navigate to the bottom where you'll see the &lt;code&gt;SRT publish path&lt;/code&gt;, the &lt;code&gt;SRT stream ID&lt;/code&gt;, and the &lt;code&gt;SRT publish URL&lt;/code&gt;. Copy the &lt;code&gt;SRT publish path&lt;/code&gt; and the &lt;code&gt;SRT stream ID&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;5. Open the Larix Broadcaster and then &lt;code&gt;Settings&lt;/code&gt;. From &lt;code&gt;Settings&lt;/code&gt;, go to &lt;code&gt;Connections&lt;/code&gt; and add a new connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fiosapp-3.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fiosapp-3.jpeg" alt="Pictured is a screenshot from an iOS device using the Larix Broadcaster with a red box highlighting a plus icon." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a new connection with the plus icon in the top right corner.&lt;/p&gt;

&lt;p&gt;6. Inside the connection, set the &lt;code&gt;URL&lt;/code&gt; parameter to your Dolby.io Real-Time Streaming &lt;code&gt;SRT publish path&lt;/code&gt; and set &lt;code&gt;streamid&lt;/code&gt; to your &lt;code&gt;SRT stream ID&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fiosapp-2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fiosapp-2.jpeg" alt="Pictured is a screenshot of an iOS device using the Larix Broadcaster with a red box around URL and streamid to indicate their importance to starting the srt stream." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When adding a new connection in the Larix Broadcaster, make sure to assign "streamid" and "URL".&lt;/p&gt;

&lt;p&gt;7. From here you can exit your settings and start the stream by pressing the record button on the broadcaster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fiosapp-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fiosapp-1.png" alt="Pictured is a screenshot of an iOS device on the Larix Broadcaster screen with the recording button active and stream started. The srt stream itself is of a black screen with no features." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Press the record button on the left to start an SRT stream.&lt;/p&gt;

&lt;p&gt;8. Like the OBS and vMix examples, your SRT stream will be delivered to the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Streaming Viewer, which can be found at the Hosted Player Path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fhosted-player-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fhosted-player-2.png" alt="Pictured is a screenshot of the Dolby.io Streaming Token API tab, with hosted player path highlighted in a red box." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Dolby.io Streaming Token API tab, with hosted player path highlighted. Opening this path in a browser will launch the stream.&lt;/p&gt;

&lt;p&gt;Dolby.io Real-time Streaming supports a number of SDKs for creating viewer apps, &lt;a href="https://dolby.io/blog/building-a-real-time-streaming-app-with-webrtc-and-flutter-3/" rel="noopener noreferrer"&gt;including a Flutter 3 SDK&lt;/a&gt; that targets Android, iOS, and Web.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streaming SRT directly from an Osprey Talon Encoder 
&lt;/h2&gt;

&lt;p&gt;OBS, vMix, and Larix Broadcaster are examples of software tools you can use to stream SRT into the Dolby.io Streaming service, but what about hardware options? Depending on the scale of your live production, you might have access to cameras with built-in encoders that can egress SRT directly, which can also connect to the Dolby.io servers. For cameras without built-in encoders, you can attach an external encoder, some of which support SRT. One example is the Osprey Talon 4K-SC, which is not only the first WHIP encoder but can also encode SRT streams that we can connect to the Dolby.io servers.&lt;/p&gt;

&lt;p&gt;1. &lt;a href="https://dashboard.dolby.io/signin" rel="noopener noreferrer"&gt;Log in&lt;/a&gt; or &lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt;create a Dolby.io account&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;2. Connect your Osprey Encoder to your camera and power it up.&lt;/p&gt;

&lt;p&gt;3. Download the &lt;a href="https://www.ospreyvideo.com/_files/ugd/2c643c_d4f522d8a6244a6994f12de0f40721b8.pdf" rel="noopener noreferrer"&gt;Osprey BOSS PRO application&lt;/a&gt;, which will allow you to discover the encoder on your local network. Alternatively, &lt;a href="https://www.ospreyvideo.com/_files/ugd/2c643c_d4f522d8a6244a6994f12de0f40721b8.pdf" rel="noopener noreferrer"&gt;follow this in-depth guide by the Osprey team&lt;/a&gt; for setting up your encoder.&lt;/p&gt;

&lt;p&gt;4. Click on the appropriate encoder, launch the web interface, and sign in. Information on signing in to Osprey equipment &lt;a href="https://www.ospreyvideo.com/_files/ugd/2c643c_d4f522d8a6244a6994f12de0f40721b8.pdf" rel="noopener noreferrer"&gt;can be found here&lt;/a&gt;. Once signed in, you will land in the Osprey Dashboard.&lt;/p&gt;

&lt;p&gt;5. Navigate to your &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; streaming dashboard and create a new token. You can leave all the token settings at their defaults.&lt;/p&gt;

&lt;p&gt;6. Open the API tab on your newly created token dashboard and navigate to the bottom where you'll see the &lt;code&gt;SRT publish path&lt;/code&gt;, the &lt;code&gt;SRT stream ID&lt;/code&gt;, and the &lt;code&gt;SRT publish URL&lt;/code&gt;. Copy the &lt;code&gt;SRT publish path&lt;/code&gt; and the &lt;code&gt;SRT stream ID&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fsrtbox-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fsrtbox-1.png" alt="Pictured is a screenshot of the Dolby.io Streaming Token API tab. Highlighted on screen in a red box is the SRT publish URL." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Dolby.io Streaming Token API tab. The highlighted box indicates the SRT publish URL.&lt;/p&gt;

&lt;p&gt;7. Inside the Osprey Dashboard, set &lt;code&gt;SRT Dest Address&lt;/code&gt; to the &lt;code&gt;SRT publish path&lt;/code&gt; excluding the port, set &lt;code&gt;SRT Port&lt;/code&gt; to the port number at the end of your &lt;code&gt;SRT publish path&lt;/code&gt; (usually 10000), and set &lt;code&gt;SRT Stream ID&lt;/code&gt; to your &lt;code&gt;SRT stream ID&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fosprey.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdolby.io%2Fwp-content%2Fuploads%2F2022%2F12%2Fosprey.png" alt="Pictured on screen is a screenshot of the black and grey Osprey settings board with SRT Dest Address, SRT Port, and SRT Stream ID highlighted in red to indicate where users should input credentials to start an srt stream through the dolby.io servers." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the SRT Dest Address to the SRT publish path, the SRT Port to 10000, and the SRT Stream ID to your Dolby.io Streaming Token Stream ID.&lt;/p&gt;
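&lt;p&gt;Splitting the publish path into the separate address and port fields the dashboard expects can also be done programmatically. A small sketch, where the publish path value is a hypothetical placeholder:&lt;/p&gt;

```python
from urllib.parse import urlparse

# Split an SRT publish path into the SRT Dest Address and SRT Port
# fields that the Osprey dashboard expects. The path below is a
# hypothetical placeholder; copy yours from the Dolby.io API tab.
publish_path = "srt://srt-auto.millicast.com:10000"
parsed = urlparse(publish_path)

dest_address = parsed.hostname  # SRT Dest Address (path without the port)
srt_port = parsed.port          # SRT Port, usually 10000
print(dest_address, srt_port)
```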

&lt;p&gt;8. From here press start and the encoder will begin streaming content through the Dolby.io servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of Publishing SRT Streams to &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Streaming SRT is just one part of the equation; &lt;a href="https://dolby.io/products/real-time-streaming/" rel="noopener noreferrer"&gt;Dolby.io Real-time Streaming&lt;/a&gt; also supports &lt;a href="https://docs.dolby.io/streaming-apis/docs/client-sdks" rel="noopener noreferrer"&gt;a number of SDKs&lt;/a&gt; for building streaming into your platforms and apps. If you are interested in learning more about how to use our SDKs, &lt;a href="https://dolby.io/blog/" rel="noopener noreferrer"&gt;check out our blog&lt;/a&gt; and let us know what you're building next.&lt;br&gt;
Feedback or questions? Reach out to the team on &lt;a href="https://twitter.com/DolbyIO?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/dolbyio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, or via our &lt;a href="https://www.millicast.com/contactus/" rel="noopener noreferrer"&gt;support desk&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>api</category>
      <category>frontend</category>
      <category>programming</category>
    </item>
    <item>
      <title>Building a Livestream App with Flutter 3</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Mon, 31 Oct 2022 20:19:09 +0000</pubDate>
      <link>https://dev.to/dolbyio/building-a-real-time-streaming-app-with-webrtc-and-flutter-3-2ghj</link>
      <guid>https://dev.to/dolbyio/building-a-real-time-streaming-app-with-webrtc-and-flutter-3-2ghj</guid>
      <description>&lt;p&gt;&lt;a href="https://dolby.io/blog/building-a-real-time-streaming-app-with-webrtc-and-flutter-3/" rel="noopener noreferrer"&gt;Originally published here.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Streaming, especially the low latency kind, has become a popular medium to engage with an audience, &lt;a href="https://dolby.io/solutions/events/" rel="noopener noreferrer"&gt;host live events&lt;/a&gt;, and connect people virtually. For developers building streaming apps, however, there is just one issue: to reach a wide audience, we need to develop for a wide range of platforms such as Android, iOS, Web, and even native desktop apps, which can quickly become a heavy lift for any team. This is where &lt;a href="https://flutter.dev/?gclid=CjwKCAjw4c-ZBhAEEiwAZ105RYihY2PWVmum6IojgwCKgGWKZg9IOYmyhWlapji_zIYo_FpW-vW8tRoCoKcQAvD_BwE&amp;amp;gclsrc=aw.ds" rel="noopener noreferrer"&gt;Flutter 3&lt;/a&gt; comes in. Released in May of 2022, Flutter 3 takes cross-platform development to the next level, allowing users to "&lt;em&gt;build for any screen&lt;/em&gt;" from a single code base. Hence, rather than building three separate apps for iOS, Android, and Web, you can build just one. To further sweeten the deal, &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; has recently released its &lt;a href="https://docs.dolby.io/streaming-apis/docs/flutter" rel="noopener noreferrer"&gt;WebRTC real-time streaming SDK for Flutter&lt;/a&gt;, allowing users to &lt;a href="https://dolby.io/products/real-time-streaming/" rel="noopener noreferrer"&gt;build cross-platform streaming apps&lt;/a&gt; that combine scalability and ultra-low delay.&lt;/p&gt;

&lt;p&gt;In this guide, we'll explore how to build a cross-platform real-time streaming app that works on Android, iOS, desktop native, and Web using the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io Streaming SDK for Flutter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6gjawzp26fcfved0pha.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6gjawzp26fcfved0pha.jpg" alt="An example of the Flutter real-time streaming app in action, streaming out to a chrome tab." width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with the Real-Time Streaming SDK
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure you have the &lt;a href="https://docs.flutter.dev/get-started/install" rel="noopener noreferrer"&gt;latest version of Flutter installed&lt;/a&gt; and set up on your machine. To get started with building a streaming app, we need to install the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Streaming SDK for Flutter 3 via the terminal.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flutter pub add millicast_flutter_sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then run the following command in the terminal to download the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flutter pub get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the Flutter Streaming SDK installed, you can start by creating a &lt;a href="https://docs.flutter.dev/get-started/test-drive?tab=vscode" rel="noopener noreferrer"&gt;vanilla Flutter app&lt;/a&gt; and adding the most recent version of &lt;code&gt;flutter_webrtc&lt;/code&gt; to your project's &lt;code&gt;pubspec.yaml&lt;/code&gt;. You should also see that the Dolby.io Millicast Flutter SDK has been added automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flutter_webrtc: ^x.x.x
millicast_flutter_sdk: ^x.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then inside &lt;code&gt;main.dart&lt;/code&gt;, import &lt;code&gt;flutter_webrtc&lt;/code&gt; alongside any other dependencies your project may have.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import 'package:flutter_webrtc/flutter_webrtc.dart';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition to installing the SDK, you'll also need to &lt;a href="https://dashboard.dolby.io/signup/" rel="noopener noreferrer"&gt;create a free Dolby.io account&lt;/a&gt;. The free account offers 50 GB of data transfer per month, which is plenty for building and testing the real-time streaming app.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Interested in following along with a project that already has the SDK installed and set up? &lt;a href="https://github.com/dolbyio-samples/blog-streaming-flutter-app/tree/main/streaming_app" rel="noopener noreferrer"&gt;Check out this GitHub repository&lt;/a&gt; which contains a completed version of this app.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Building the Real-Time Streaming App with Flutter
&lt;/h3&gt;

&lt;p&gt;Building a WebRTC Flutter streaming app can be complicated, so to get started we first divide the app into a series of features that together support a real-time streaming experience. For the app to connect to the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; servers, we must give the user a way to input their streaming credentials and token to authenticate against those servers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Taking in the WebRTC Stream Credentials
&lt;/h4&gt;

&lt;p&gt;To publish and view a WebRTC stream with the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Flutter SDK we need three things: an &lt;code&gt;account ID&lt;/code&gt;, a &lt;code&gt;stream name&lt;/code&gt;, and a &lt;code&gt;publishing token&lt;/code&gt;. &lt;a href="https://docs.dolby.io/streaming-apis/docs/about-dash" rel="noopener noreferrer"&gt;These credentials can be found on your Dolby.io dashboard&lt;/a&gt; and need to be input by the user, which we can capture with the &lt;code&gt;TextFormField&lt;/code&gt; widget, where the widget, on change, updates a &lt;code&gt;TextEditingController&lt;/code&gt; variable.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Container(
    width: MediaQuery.of(context).size.width,
    constraints: const BoxConstraints(
        minWidth: 100, maxWidth: 400),
    child: TextFormField(
      maxLength: 20,
      controller: accID,
      decoration: const InputDecoration(
        labelText: 'Enter Account ID',
      ),
      onChanged: (v) =&amp;gt; accID.text = v,
    )),
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;em&gt;Note: In production, you don't need to have users input these credentials; instead, you could use a custom login and serve users a temporary token. To learn more about Dolby.io tokens, &lt;a href="https://dolby.io/blog/secure-token-authentication-with-dolby-io-millicast-streaming-webrtc/" rel="noopener noreferrer"&gt;check out this blog on creating and securing tokens&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
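&lt;p&gt;As a rough illustration of that pattern, a backend could exchange the long-lived publishing token for a short-lived connection token on the user's behalf. The sketch below assumes the Millicast Director publish endpoint and response shape; verify both against the current Dolby.io documentation before relying on it:&lt;/p&gt;

```python
import json
import urllib.request

# Hedged sketch: exchange a long-lived publishing token for a temporary
# connection JWT via the Millicast Director API. The endpoint and the
# response shape are assumptions; check the current Dolby.io docs.
def get_publish_jwt(publishing_token: str, stream_name: str) -> str:
    req = urllib.request.Request(
        "https://director.millicast.com/api/director/publish",
        data=json.dumps({"streamName": stream_name}).encode(),
        headers={
            "Authorization": f"Bearer {publishing_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["data"]["jwt"]
```

This mirrors what the SDK's &lt;code&gt;tokenGenerator&lt;/code&gt; callback does internally via &lt;code&gt;Director.getPublisher&lt;/code&gt;, as shown later in this guide.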

&lt;p&gt;Because we need three inputs to publish a WebRTC stream to the Dolby.io server, we can repeat this code for each input.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Container(
     width: MediaQuery.of(context).size.width,
     constraints: const BoxConstraints(
         minWidth: 100, maxWidth: 400),
     child: TextFormField(
       maxLength: 20,
       controller: accID,
       decoration: const InputDecoration(
         labelText: 'Enter Account ID',
       ),
       onChanged: (v) =&amp;gt; accID.text = v,
     )),
Container(
     width: MediaQuery.of(context).size.width,
     constraints: const BoxConstraints(
         minWidth: 100, maxWidth: 400),
     child: TextFormField(
       maxLength: 20,
       controller: streamName,
       onChanged: (v) =&amp;gt; streamName.text = v,
       decoration: const InputDecoration(
         labelText: 'Enter Stream Name',
       ),
     )),
 // Publishing Token Input
 Container(
     width: MediaQuery.of(context).size.width,
     constraints: const BoxConstraints(
         minWidth: 100, maxWidth: 400),
     child: TextFormField(
       controller: pubTok,
       maxLength: 100,
       onChanged: (v) =&amp;gt; pubTok.text = v,
       decoration: const InputDecoration(
         labelText: 'Enter Publishing Token',
       ),
     )),
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, we can add an &lt;code&gt;ElevatedButton&lt;/code&gt; for the user to press once they have added their credentials.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ElevatedButton(
  style: ElevatedButton.styleFrom(
    primary: Colors.deepPurple,
  ),
  onPressed: publishExample,
  child: const Text('Start Stream'),
),
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwisdn2ehwr3bq0p269du.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwisdn2ehwr3bq0p269du.jpg" alt="The sample app, on launch, captures the user’s credentials to start the stream." width="800" height="1030"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Authentication and Publishing Streams from Flutter
&lt;/h4&gt;

&lt;p&gt;You'll notice that the &lt;code&gt;ElevatedButton&lt;/code&gt; triggers a function via its &lt;code&gt;onPressed&lt;/code&gt; parameter. This function, called &lt;code&gt;publishExample&lt;/code&gt;, checks that the credentials are present and authenticates the stream. First, the function checks that the user has input a value for each field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void publishExample() async {
    if (pubTok.text.isEmpty || streamName.text.isEmpty || accID.text.isEmpty) {
      ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
        backgroundColor: Colors.grey,
        content: Text(
            'Make sure Account ID, Stream Name, and Publishing Token all include values.'),
      ));
      return; // Stop here; we can't publish without all three credentials.
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the function calls &lt;code&gt;publishConnect&lt;/code&gt;, an asynchronous function that takes in &lt;code&gt;streamName&lt;/code&gt;, &lt;code&gt;pubTok&lt;/code&gt;, and a third object called &lt;code&gt;localRenderer&lt;/code&gt;. &lt;code&gt;localRenderer&lt;/code&gt; is an &lt;code&gt;RTCVideoRenderer&lt;/code&gt; object included with the &lt;code&gt;flutter_webrtc&lt;/code&gt; package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;final RTCVideoRenderer localRenderer = RTCVideoRenderer();
publish = await publishConnect(localRenderer, streamName.text, pubTok.text);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using these three parameters we have everything we need to authenticate and begin publishing a stream. Inside the &lt;code&gt;publishConnect&lt;/code&gt; function, we generate a temporary publishing token using &lt;code&gt;streamName&lt;/code&gt; and &lt;code&gt;pubTok&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Future publishConnect(RTCVideoRenderer localRenderer, String streamName, String pubTok) async {
  // Setting subscriber options
  DirectorPublisherOptions directorPublisherOptions =
      DirectorPublisherOptions(token: pubTok, streamName: streamName);

  /// Define callback for generate new token
  tokenGenerator() =&amp;gt; Director.getPublisher(directorPublisherOptions);

...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the temporary publishing token created, we can use it to create a &lt;code&gt;publish&lt;/code&gt; object. We could start the stream with this &lt;code&gt;publish&lt;/code&gt; object alone; however, we wouldn't be able to see or hear anything, because we haven't specified what kind of stream we are creating or which devices we will connect to. To do this, we specify whether the stream will include audio, video, or audio &lt;em&gt;and&lt;/em&gt; video, then pass these constraints into the &lt;code&gt;getUserMedia&lt;/code&gt; function, which maps the constraints to the default audio and video capture devices.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
...
Publish publish =
      Publish(streamName: 'your-streamname', tokenGenerator: tokenGenerator);

  final Map&amp;lt;String, dynamic&amp;gt; constraints = &amp;lt;String, bool&amp;gt;{
    'audio': true,
    'video': true
  };

  MediaStream stream = await navigator.mediaDevices.getUserMedia(constraints);

...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Using this stream object, we can also provide a feed back to the user in the form of a viewer. To do this, we assign the stream to &lt;code&gt;localRenderer&lt;/code&gt; as its source.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
...

localRenderer.srcObject = stream;

...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we wrap the &lt;code&gt;stream&lt;/code&gt; object in a map of broadcast options and pass it to the &lt;code&gt;connect&lt;/code&gt; function on &lt;code&gt;publish&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
...
//Publishing Options
  Map&amp;lt;String, dynamic&amp;gt; broadcastOptions = {'mediaStream': stream};

  /// Start connection to publisher
  await publish.connect(options: broadcastOptions);
  return publish;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With our stream connected, we can now look at setting up the viewer using &lt;code&gt;localRenderer&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  In-App WebRTC Stream Viewer
&lt;/h4&gt;

&lt;p&gt;Now that our stream is authenticated and publishing, we need to add a viewer object so the streamer can see themselves streaming. This can be done with &lt;a href="https://pub.dev/documentation/flutter_webrtc/latest/flutter_webrtc/RTCVideoView-class.html" rel="noopener noreferrer"&gt;an &lt;code&gt;RTCVideoView&lt;/code&gt; object&lt;/a&gt;, which takes in our &lt;code&gt;localRenderer&lt;/code&gt; object and is constrained by a container.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Container(
  margin: const EdgeInsets.all(30),
  constraints: const BoxConstraints(
      minWidth: 100, maxWidth: 1000, maxHeight: 500),
  width: MediaQuery.of(context).size.width,
  height: MediaQuery.of(context).size.height / 1.7,
  decoration:
      const BoxDecoration(color: Colors.black54),
  child: RTCVideoView(localRenderer, mirror: true),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Sharing the Real-time Stream
&lt;/h4&gt;

&lt;p&gt;With the stream authenticated and live, we want to share our content with the world. We can do this via a URL formatted with the &lt;code&gt;streamName&lt;/code&gt; and &lt;code&gt;accountID&lt;/code&gt; we collected as inputs. Using the example app as a template, we can create a function called &lt;code&gt;shareStream&lt;/code&gt; that formats the share URL and copies it to the clipboard.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void shareStream() {
    Clipboard.setData(ClipboardData(
        text:
            "https://viewer.millicast.com/?streamId=${accID.text}/${streamName.text}"));
    ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
      backgroundColor: Colors.grey,
      content: Text('Stream link copied to clipboard.'),
    ));
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
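&lt;p&gt;The link format assembled above is simple enough to sketch outside of Flutter as well; here the account ID and stream name are hypothetical placeholders:&lt;/p&gt;

```python
# Build the shareable Millicast viewer link from the same credentials
# the app collects. Account ID and stream name are hypothetical values.
account_id = "your-account-id"
stream_name = "your-stream-name"

share_url = f"https://viewer.millicast.com/?streamId={account_id}/{stream_name}"
print(share_url)  # https://viewer.millicast.com/?streamId=your-account-id/your-stream-name
```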



&lt;h4&gt;
  
  
  Unpublishing a WebRTC Stream
&lt;/h4&gt;

&lt;p&gt;To unpublish the stream, we call &lt;code&gt;stop()&lt;/code&gt; on the &lt;code&gt;publish&lt;/code&gt; object returned from our asynchronous &lt;code&gt;publishConnect&lt;/code&gt; function, closing the connection with the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;publish.stop();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Flutter 3 is Truly Cross Platform  
&lt;/h4&gt;

&lt;p&gt;The power of Flutter is taking one code base and having it work across multiple platforms. Here we can see examples of the app working for Android, Windows, and Web:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysoly1b5pkvtndib0hoj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysoly1b5pkvtndib0hoj.jpg" alt="An example of the Flutter real-time streaming app launching on an Android emulator." width="463" height="858"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgvnej5uyreo2073efvp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgvnej5uyreo2073efvp.jpg" alt="An example of the Flutter real-time streaming app launching as a web app." width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a6reoog8tnjzbo6qzyh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4a6reoog8tnjzbo6qzyh.jpg" alt="An example of the Flutter real-time streaming app launching as a Windows Native app." width="800" height="1030"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building in this cross-platform framework saves both time and resources, allowing you to get started building real-time streaming apps without having to worry about which platform works for your users. These apps are perfect for streaming live events and virtual events to the widest range of audiences allowing for high-quality interactive experiences. If you are interested in learning more about our Flutter streaming SDK &lt;a href="https://docs.dolby.io/streaming-apis/docs/flutter" rel="noopener noreferrer"&gt;check out our documentation&lt;/a&gt; and play around with the full project on&lt;a href="https://github.com/dolbyio-samples/blog-streaming-flutter-app/tree/main/streaming_app" rel="noopener noreferrer"&gt; this GitHub repository&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Feedback or Questions? Reach out to the team on &lt;a href="https://twitter.com/DolbyIO?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/company/dolbyio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, or via our &lt;a href="https://www.millicast.com/contactus/" rel="noopener noreferrer"&gt;support desk&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>flutter</category>
      <category>webdev</category>
      <category>api</category>
      <category>android</category>
    </item>
    <item>
      <title>Installing Unreal Engine Plugins from GitHub or Source Code</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Tue, 26 Jul 2022 17:14:00 +0000</pubDate>
      <link>https://dev.to/dolbyio/installing-unreal-engine-plugins-from-github-or-source-code-4dhb</link>
      <guid>https://dev.to/dolbyio/installing-unreal-engine-plugins-from-github-or-source-code-4dhb</guid>
      <description>&lt;p&gt;Whether you are just getting started with the Unreal Engine or have been using it for years, Plugins are a great way to enhance your creation and reduce the development time across a project. Although many useful plugins come preinstalled on your system, your project may require more bespoke and advanced tools that aren’t included by default and need to be installed from a 3rd party host like GitHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to install 3rd Party Plugins from GitHub or Source Code
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;To get started, make sure you have the &lt;a href="https://docs.unrealengine.com/5.0/en-US/installing-unreal-engine/" rel="noopener noreferrer"&gt;Unreal Engine installed&lt;/a&gt; along with Visual Studio. Ensure that you also have &lt;a href="https://docs.unrealengine.com/4.26/en-US/ProductionPipelines/DevelopmentSetup/VisualStudioSetup/" rel="noopener noreferrer"&gt;Visual Studio set up for game development with C++&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a project within the Unreal Engine editor, then save and close the program.&lt;/li&gt;
&lt;li&gt;Locate the directory of your newly created project. For example &lt;em&gt;C:\Users\User\Unreal Engine\MyProject&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Create a “Plugins” folder and place the source code into the new directory.  For example &lt;em&gt;C:\Users\User\Unreal Engine\MyProject\Plugins&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Navigate back to the top level of your project and right-click on the &lt;em&gt;.uproject&lt;/em&gt; file. Select &lt;em&gt;Generate Visual Studio project files&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Reopen the project in the Unreal Engine editor, navigate to &lt;em&gt;Edit &amp;gt; Plugins&lt;/em&gt;, and enable your new plugin (don’t worry if it is already enabled).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From there the engine should restart, and your newly installed plugin will be ready to use. If you are interested in an example of a plugin you might want to install from a third party, &lt;a href="https://dolby.io/blog/using-webrtc-plugins-to-build-a-scalable-unreal-engine-5-streaming-experience/" rel="noopener noreferrer"&gt;check out the Dolby.io Millicast WebRTC plugin for creating WebRTC pixel streams.&lt;/a&gt;&lt;/p&gt;
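&lt;p&gt;As an optional illustration of steps 3 and 4 above, here is a minimal Python sketch of the required folder layout; the project path and plugin name are hypothetical placeholders, not part of Unreal's tooling:&lt;/p&gt;

```python
from pathlib import Path
import tempfile

# Steps 3-4 above, sketched as a script. The project location here is
# hypothetical; substitute the directory containing your own .uproject file.
project = Path(tempfile.mkdtemp()) / "MyProject"
project.mkdir(parents=True, exist_ok=True)

# Create the "Plugins" folder next to the .uproject file, then copy or
# clone the plugin's source code into a subfolder of it.
plugin_dir = project / "Plugins" / "MyPlugin"
plugin_dir.mkdir(parents=True, exist_ok=True)

print(plugin_dir.relative_to(project).as_posix())  # Plugins/MyPlugin
```

&lt;p&gt;The folder must be named exactly &lt;em&gt;Plugins&lt;/em&gt; for the engine to discover the plugin when the Visual Studio project files are regenerated.&lt;/p&gt;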

</description>
      <category>ue5</category>
      <category>gamedev</category>
      <category>github</category>
      <category>webrtc</category>
    </item>
    <item>
      <title>Building a Livestream Platform</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Thu, 30 Jun 2022 15:58:12 +0000</pubDate>
      <link>https://dev.to/dolbyio/building-a-livestream-platform-4253</link>
      <guid>https://dev.to/dolbyio/building-a-livestream-platform-4253</guid>
      <description>&lt;p&gt;With competition rising in the live streaming space, companies are battling it out to offer more compelling streaming experiences, whether that is for live events, e-learning, or remote post-production. In order to stand out, solutions must balance offering high-quality audio and video in addition to offering optimal content delivery speeds, leading platforms such as Twitch and YouTube Live to sacrifice real-time (10-15 seconds of latency) in favor of quality. But what if you didn't have to sacrifice at all, and could have both quality and speed? In this guide, we'll explore building a webRTC low latency streaming platform capable of delivering 4k streams with just a few lines of JavaScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;Building a Low Latency Livestreaming platform with Dolby.io Millicast and JavaScript.&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;To get started building your live stream platform, we first have to set up a free&lt;a href="https://dash.millicast.com/#/signup?planId=28" rel="noopener noreferrer"&gt; Dolby.io Millicast account here&lt;/a&gt;. Dolby.io Millicast is an&lt;a href="https://millicast.com/what-is-low-latency-streaming/" rel="noopener noreferrer"&gt; ultra-low latency platform&lt;/a&gt; that provides APIs, SDKs, and integrations for content delivery with a delay of 500 milliseconds or less to anywhere in the world, using a technology called WebRTC. The free account is hard-capped at 50 gigabytes of data transfer a month, which is plenty for building and testing your JavaScript Livestream platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Millicast WebRTC JavaScript SDK
&lt;/h2&gt;

&lt;p&gt;To get started, clone the&lt;a href="https://github.com/dolbyio-samples/blog-millicast-livestream-viewer" rel="noopener noreferrer"&gt; GitHub repo, which contains an example app&lt;/a&gt; demonstrating how to implement everything you'll need for building a Millicast WebRTC Livestream viewing platform. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx2d04knt7omj0nxwtrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx2d04knt7omj0nxwtrh.png" alt="The landing page for the Livestream viewer where users are required to input the stream name and account ID. Build a quality ultra low latency livestream platform that can support hundreds of thousands of viewers with just a few lines of JavaScript by leveraging the power of WebRTC and Dolby.io Millicast." width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The landing page for the Livestream viewer where users are required to input the stream name and account ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fg9wmkuuuyguwjxypqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4fg9wmkuuuyguwjxypqt.png" alt="Once users log into the Livestream viewer they connect to a low-latency webRTC stream powered by Dolby.io Millicast. Build a quality ultra low latency livestream platform that can support hundreds of thousands of viewers with just a few lines of JavaScript by leveraging the power of WebRTC and Dolby.io Millicast." width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once users log into the Livestream viewer they connect to a low-latency webRTC stream powered by Dolby.io Millicast.&lt;/p&gt;

&lt;p&gt;To test out this app, we need to&lt;a href="https://dash.millicast.com/#/tokens" rel="noopener noreferrer"&gt; navigate to the dashboard&lt;/a&gt; of your newly created Millicast account. There you'll see a header, &lt;em&gt;Stream Tokens&lt;/em&gt;, which includes a list of your streaming tokens. If you've just created an account you'll only have one token, which you can click to open its settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wm8l0ws1ge1ahygn5eu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wm8l0ws1ge1ahygn5eu.png" alt="The Millicast dashboard landing page. Build a quality ultra low latency livestream platform that can support hundreds of thousands of viewers with just a few lines of JavaScript by leveraging the power of WebRTC and Dolby.io Millicast." width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Millicast dashboard landing page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzi0et0ixyzgq6xnbbxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzi0et0ixyzgq6xnbbxp.png" alt="Clicking on the token name brings you into the token settings where you can create a stream. Build a quality ultra low latency livestream platform that can support hundreds of thousands of viewers with just a few lines of JavaScript by leveraging the power of WebRTC and Dolby.io Millicast." width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the token name brings you into the token settings where you can create a stream.&lt;/p&gt;

&lt;p&gt;From &lt;em&gt;Settings&lt;/em&gt;, switch from &lt;em&gt;Token Details&lt;/em&gt; to the &lt;em&gt;API&lt;/em&gt; tab. This tab lists many tokens, endpoints, and other details; for this app we can disregard most of it and just copy the Account ID value and the Stream Name. Next, switch back to the &lt;em&gt;Token Details&lt;/em&gt; tab and click on the bright green &lt;em&gt;Broadcast&lt;/em&gt; button. This launches a WebRTC streaming tool where you can adjust settings and start streaming, although no one will be able to watch until you have a working stream viewer. For now, click the start stream button to begin a Livestream. Finally, we can launch our cloned sample app by either using the &lt;a href="https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer" rel="noopener noreferrer"&gt;Live Server extension&lt;/a&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer" rel="noopener noreferrer"&gt; for VS Code&lt;/a&gt; or just opening the &lt;em&gt;index.html&lt;/em&gt; file in a browser. With the sample app launched, you can connect to your Livestream by entering the Account ID and the Livestream Name.&lt;/p&gt;

&lt;p&gt;Note: &lt;a href="https://docs.dolby.io/communications-apis/docs/guides-security" rel="noopener noreferrer"&gt;it's important to follow the best security practices when sharing or exposing any IDs or authentication parameters on the web.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To summarize:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://github.com/dolbyio-samples/blog-millicast-livestream-viewer" rel="noopener noreferrer"&gt;Clone the Livestream sample project.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  From your Millicast account locate your Stream Token Settings and get your Account ID and Livestream Name from the API tab.&lt;/li&gt;
&lt;li&gt;  Start broadcasting a Livestream from the Millicast dashboard.&lt;/li&gt;
&lt;li&gt;  Launch your sample app by opening &lt;em&gt;index.html&lt;/em&gt; in a browser.&lt;/li&gt;
&lt;li&gt;  Enter the Account ID and Livestream Name to view your stream.&lt;/li&gt;
&lt;li&gt;  Watch your Livestream!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What's cool about this sample app is that, simple as it is, it could be hosted on a server and shared with anyone across the web, allowing them to watch what you're streaming from the Millicast dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  So How Does the Millicast Livestream Viewer Work?
&lt;/h2&gt;

&lt;p&gt;Millicast supports a JavaScript SDK which you can utilize as either an &lt;a href="https://www.npmjs.com/package/@millicast/sdk" rel="noopener noreferrer"&gt;NPM package&lt;/a&gt; or with the &lt;a href="https://www.jsdelivr.com/package/npm/@millicast/sdk" rel="noopener noreferrer"&gt;JSDELIVR content delivery network (CDN)&lt;/a&gt;. For this guide, we will be using the CDN option which we import into the head of our HTML file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;script src="https://cdn.jsdelivr.net/npm/@millicast/sdk@latest/dist/millicast.umd.js"&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This import allows us to utilize the tools offered in the SDK. We also need to include a &lt;code&gt;&amp;lt;video&amp;gt;&lt;/code&gt; tag and another script tag linking our HTML file to &lt;code&gt;millicast_viewer.js&lt;/code&gt;; otherwise, the rest of the HTML file is set dressing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;video width="640" height="360" hidden="True" id="videoPlayer" controls class="vidBox"&amp;gt;&amp;lt;/video&amp;gt;
&amp;lt;script src="src/millicast_viewer.js"&amp;gt;&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the Millicast SDK imported, our video player placed, and our HTML and JavaScript files linked, we can change over to &lt;em&gt;&lt;code&gt;millicast_viewer.js&lt;/code&gt;&lt;/em&gt; to learn about how we authenticate and connect to a Livestream.&lt;/p&gt;

&lt;p&gt;There are three main steps for authenticating and connecting to a Millicast broadcast inside of &lt;em&gt;&lt;code&gt;millicast_viewer.js&lt;/code&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step #1: Token Generator
&lt;/h4&gt;

&lt;p&gt;The first step in connecting to a Millicast stream is to define our token generator. Since we imported the SDK via a CDN, we need to preface Millicast SDK functions with &lt;code&gt;window.millicast&lt;/code&gt;, followed by the function, in this case &lt;code&gt;Director.getSubscriber&lt;/code&gt;. Additionally, for the token generator we have to include the Stream Name and the Account ID. In our example we use user input to get these values; however, there are a variety of &lt;a href="https://docs.millicast.com/docs/managing-your-tokens" rel="noopener noreferrer"&gt;different options for generating a Millicast token&lt;/a&gt;. Using our token generator, we can create a Millicast &lt;em&gt;View&lt;/em&gt; object which we will use to connect to the broadcast.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const tokenGenerator = () =&amp;gt;
        window.millicast.Director.getSubscriber({
            streamName: streamName,
            streamAccountId: accID,
        });
const millicastView = new window.millicast.View(streamName, tokenGenerator);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step #2: Adding the Stream to Video
&lt;/h4&gt;

&lt;p&gt;Next, we need to add the stream to our video tag, defined in the HTML. To do this we define a function called &lt;code&gt;addStreamToYourVideoTag&lt;/code&gt; and listen for a &lt;code&gt;track&lt;/code&gt; event, fired whenever the broadcast adds or updates a media track, passing the stream to our video tag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;millicastView.on("track", (event) =&amp;gt; {
        addStreamToYourVideoTag(event.streams[0]);
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As for our function &lt;code&gt;addStreamToYourVideoTag&lt;/code&gt;, it just takes in the stream element and sets our &lt;code&gt;&amp;lt;video&amp;gt;&lt;/code&gt; tag's &lt;code&gt;srcObject&lt;/code&gt; equal to it.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function addStreamToYourVideoTag(elem) {
    //Adds Stream to the &amp;lt;video&amp;gt; tag.
    let video = document.getElementById("videoPlayer");
    video.srcObject = elem;
    video.autoplay = true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step #3: Connecting to the Stream
&lt;/h4&gt;

&lt;p&gt;The final step in building our Livestream viewer is to specify a few stream settings and connect to the broadcast. First, we define an &lt;code&gt;options&lt;/code&gt; object which contains three main parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;disableVideo&lt;/code&gt;: set to false because we want video enabled.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;disableAudio&lt;/code&gt;: set to false because we want audio enabled.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;bandwidth&lt;/code&gt;: used to set data transfer limits; we set it to zero here, meaning unlimited bandwidth, since we are only testing the app.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const options = {
        disableVideo: false,
        disableAudio: false,
        bandwidth: 0,
    };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we can call the connect function using our &lt;code&gt;millicastView&lt;/code&gt; object and our &lt;code&gt;options&lt;/code&gt; object. We wrap this call in a &lt;code&gt;try-catch&lt;/code&gt; statement in case bandwidth issues prevent the broadcast from connecting immediately.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;try {
        await millicastView.connect(options);
    } catch (e) {
        console.log("Connection failed, handle error", e);
        millicastView.reconnect();
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the connect step complete, we are now able to join a Millicast broadcast. We can see all these steps &lt;a href="https://github.com/dolbyio-samples/blog-millicast-livestream-viewer/blob/main/src/millicast_viewer.js" rel="noopener noreferrer"&gt;put together here&lt;/a&gt; and the result is a Livestream viewing platform powered by WebRTC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48u32rluh7fnguffw6rn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48u32rluh7fnguffw6rn.png" alt="Once users log into the Livestream viewer they connect to a low-latency webRTC stream powered by Dolby.io Millicast." width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once users log into the Livestream viewer they connect to a low-latency webRTC stream powered by Dolby.io Millicast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts on Building the Millicast Livestream App
&lt;/h2&gt;

&lt;p&gt;In this guide, we've outlined the fundamentals of building a low latency Livestream viewer in JavaScript, powered by WebRTC. If you've been following along with the sample project code, you'll notice a few extra parts and tools. For the most part, the additional code is set dressing that makes the app look and behave the way you'd expect a Livestream viewer to; it is not core to connecting to a Millicast broadcast. As you build with the SDK, you might find better ways to create and style your app for your own goals, so don't be afraid to play around with the code. If you have any questions or are interested in&lt;a href="https://www.millicast.com/contactus/" rel="noopener noreferrer"&gt; learning more about Dolby.io Millicast, feel free to reach out to our support&lt;/a&gt; or check out&lt;a href="https://docs.millicast.com/docs/web-draft" rel="noopener noreferrer"&gt; our documentation guides&lt;/a&gt;, which cover many more SDKs and tools, such as a&lt;a href="https://docs.millicast.com/docs/using-obs-with-millicast" rel="noopener noreferrer"&gt; Millicast OBS integration&lt;/a&gt; or an&lt;a href="https://docs.millicast.com/docs/millicast-player-plugin" rel="noopener noreferrer"&gt; Unreal Engine Plugin&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>3 Things To Know Before Building with PyScript</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Thu, 26 May 2022 16:57:13 +0000</pubDate>
      <link>https://dev.to/dolbyio/3-things-to-know-before-building-with-pyscript-4f6n</link>
      <guid>https://dev.to/dolbyio/3-things-to-know-before-building-with-pyscript-4f6n</guid>
      <description>&lt;p&gt;For anyone who hasn't already heard &lt;a href="https://pyscript.net/" rel="noopener noreferrer"&gt;PyScript&lt;/a&gt;, which debuted at &lt;a href="https://us.pycon.org/2022/" rel="noopener noreferrer"&gt;PyCon 2022&lt;/a&gt;, is a browser-embedded python environment, built on top of an existing project called&lt;a href="https://pyodide.org/en/stable/" rel="noopener noreferrer"&gt; Pyodide&lt;/a&gt;. This project, to the shock of long-term Pythonistas and web developers, seamlessly blends (&lt;em&gt;well almost&lt;/em&gt;) JavaScript and Python in a bi-directional environment allowing developers to utilize Python staples such as &lt;a href="https://numpy.org/" rel="noopener noreferrer"&gt;NumPy&lt;/a&gt; or &lt;a href="https://pandas.pydata.org/" rel="noopener noreferrer"&gt;Pandas&lt;/a&gt; in the browser.&lt;/p&gt;

&lt;p&gt;After playing with the project for a few days, I wanted to share a few lessons and gotchas that tripped me up on my journey to master PyScript.&lt;/p&gt;

&lt;p&gt;Prelude: A Crash Course in PyScript&lt;br&gt;
1. Package Indentation Matters!&lt;br&gt;
2. Local File Access&lt;br&gt;
3. DOM Manipulation&lt;/p&gt;
&lt;h2&gt;
  
  
  A Crash Course in PyScript
&lt;/h2&gt;



&lt;p&gt;To get started using PyScript, we first have to link our HTML file to the PyScript script as we would for any ordinary JavaScript file. Additionally, we can link the PyScript style sheet to improve usability.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;head&amp;gt;
    &amp;lt;link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" /&amp;gt;
    &amp;lt;script defer src="https://pyscript.net/alpha/pyscript.js"&amp;gt;&amp;lt;/script&amp;gt;
&amp;lt;/head&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With PyScript imported in the head of our HTML file, we can now use the &lt;code&gt;&amp;lt;py-script&amp;gt;&lt;/code&gt; tag in the body of our HTML to write Python code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;body&amp;gt;
    &amp;lt;py-script&amp;gt;
        for i in ["Python", "in", "html?"]:
            print(i)
    &amp;lt;/py-script&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yep! It really is just that simple to get started. Now, where do things get tricky?&lt;/p&gt;

&lt;h2&gt;
  
  
  Package Indentation Matters
&lt;/h2&gt;




&lt;p&gt;One of the big advantages of using PyScript is the ability to import Python libraries such as NumPy or Pandas. Packages are first declared in the head using the &lt;code&gt;&amp;lt;py-env&amp;gt;&lt;/code&gt; tag and then imported inside the &lt;code&gt;&amp;lt;py-script&amp;gt;&lt;/code&gt; tag just like you would in regular Python.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;head&amp;gt;
    &amp;lt;link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" /&amp;gt;
    &amp;lt;script defer src="https://pyscript.net/alpha/pyscript.js"&amp;gt;&amp;lt;/script&amp;gt;
    &amp;lt;py-env&amp;gt;
- numpy
- pandas
    &amp;lt;/py-env&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
    &amp;lt;py-script&amp;gt;
        import pandas as pd
    &amp;lt;/py-script&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the surface, this may seem straightforward, but note the indentation of the packages within &lt;code&gt;&amp;lt;py-env&amp;gt;&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; &amp;lt;py-env&amp;gt;
- numpy
- pandas
    &amp;lt;/py-env&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Turns out that if there is &lt;a href="https://github.com/pyscript/pyscript/issues/136" rel="noopener noreferrer"&gt;any indentation&lt;/a&gt; you'll receive a &lt;code&gt;ModuleNotFoundError: No module named 'pandas'&lt;/code&gt; or &lt;code&gt;ModuleNotFoundError: No module named 'numpy'&lt;/code&gt; from PyScript. This error caught me off guard initially, since indentation normally matters so much in Python.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local File Access
&lt;/h2&gt;




&lt;p&gt;JavaScript handles file access very differently from Python, as it should, given the web's privacy and security constraints. Vanilla JavaScript does not have direct access to local files, and since PyScript is built on top of JavaScript, your Python code won't be able to access local files the way you might be used to.&lt;/p&gt;

&lt;p&gt;PyScript does offer a solution to file access in the &lt;code&gt;&amp;lt;py-env&amp;gt;&lt;/code&gt; tag. In addition to importing packages, you can also import files such as CSVs or XLSXs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; &amp;lt;py-env&amp;gt;
- numpy
- pandas
- paths:
    - /views.csv
    &amp;lt;/py-env&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, note the indentation: in this case the CSV must be indented relative to &lt;code&gt;paths:&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With the file included in the path, you can read it within your &lt;code&gt;&amp;lt;py-script&amp;gt;&lt;/code&gt; code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;py-script&amp;gt;
    import pandas as pd
    df = pd.read_csv("views.csv")
&amp;lt;/py-script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  DOM Manipulation
&lt;/h2&gt;




&lt;p&gt;For anyone who has worked in web development, you should be familiar with the DOM, or Document Object Model. DOM manipulation is common in most web applications, as developers typically want their websites to interact with users, reading inputs and responding to button clicks. In the case of PyScript this raises an interesting question: how do buttons and input fields interact with the Python code?&lt;/p&gt;

&lt;p&gt;Again, PyScript has a solution, though it might not be what you expect. Here are a few (of many) examples of this functionality:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; For buttons, you can include a &lt;code&gt;pys-onClick="your_function"&lt;/code&gt; attribute to trigger Python functions when clicked.&lt;/li&gt;
&lt;li&gt; For retrieving user input from within the &lt;code&gt;&amp;lt;py-script&amp;gt;&lt;/code&gt; tag, &lt;code&gt;document.getElementById('input_obj_id').value&lt;/code&gt; can retrieve the input value.&lt;/li&gt;
&lt;li&gt; And finally, &lt;code&gt;pyscript.write("output_obj_id", data)&lt;/code&gt; can write output to a tag from within the &lt;code&gt;&amp;lt;py-script&amp;gt;&lt;/code&gt; tag.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can see these three DOM manipulation techniques put together into one web application that lets users check if a CSV has been added to the PyScript path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;body&amp;gt;
   &amp;lt;form onsubmit = 'return false'&amp;gt;
   &amp;lt;label for="fpath"&amp;gt;filepath&amp;lt;/label&amp;gt;
   &amp;lt;input type="text" id="fpath" name="filepath" placeholder="Your name.."&amp;gt;
   &amp;lt;input pys-onClick="onSub" type="submit" id="btn-form" value="submit"&amp;gt;
    &amp;lt;/form&amp;gt;&amp;lt;div id="outp"&amp;gt;&amp;lt;/div&amp;gt; &amp;lt;py-script&amp;gt;
        import pandas as pd def onSub(*args, **kwargs):
            file_path = document.getElementById('fpath').value
            df = pd.read_csv(file_path)
            pyscript.write("outp",df.head())
    &amp;lt;/py-script&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These examples aren't comprehensive as the project also supports &lt;a href="https://github.com/pyscript/pyscript/blob/main/docs/tutorials/getting-started.md" rel="noopener noreferrer"&gt;visual component tags&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;




&lt;p&gt;PyScript is a wonderful step in the right direction for bringing some excellent Python packages into the web development space. With that said, it still has a bit of growing to do, and there are many improvements to be made before the project sees widespread adoption.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Show some support to the team working on this awesome project: &lt;a href="https://github.com/pyscript" rel="noopener noreferrer"&gt;https://github.com/pyscript&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Leave a comment with any other insights or gotchas you might have experienced working with PyScript and I'll make a part 2.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>python</category>
      <category>javascript</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Searching Media to Find Loudness and Music Sections</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Thu, 19 May 2022 17:01:08 +0000</pubDate>
      <link>https://dev.to/dolbyio/searching-media-to-find-loudness-and-music-sections-21ii</link>
      <guid>https://dev.to/dolbyio/searching-media-to-find-loudness-and-music-sections-21ii</guid>
      <description>&lt;p&gt;Ever since the inception of cinema, &lt;em&gt;Scores&lt;/em&gt;, the musical composition within a film, have become synonymous with the medium and a crucial staple in the experience of enjoying film or TV. As the industry has grown and matured so too has the score, with many productions having hundreds of tracks spanning many genres and artists. These artists can be anyone from an orchestra drummer all the way up to a sellout pop star sensation, each composing, producing, or performing a variety of tracks. The challenge with this growing score complexity is ensuring that every artist is paid for their fair share and contribution to the overall film.&lt;/p&gt;

&lt;p&gt;The industry presently tackles this challenge with a tool known as a "&lt;a href="https://www.bmi.com/creators/detail/what_is_a_cue_sheet" rel="noopener noreferrer"&gt;Cue Sheet&lt;/a&gt;", a spreadsheet that identifies exactly where a track is played and for how long. The issue with Cue Sheets is that their creation and validation is an immensely manual process, consuming hundreds of hours spent confirming that every artist is accounted for and compensated accordingly. It was this inefficiency that attracted &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; to help support the &lt;a href="https://cue-sheet-palooza.devpost.com/" rel="noopener noreferrer"&gt;Cue Sheet Palooza Hackathon&lt;/a&gt;, a Toronto-based event that challenged musicians and software engineers to work and innovate together to reduce the time spent creating Cue Sheets. The event was sponsored by the &lt;a href="https://www.socan.com/" rel="noopener noreferrer"&gt;Society of Composers, Authors and Music Publishers of Canada&lt;/a&gt;, or SOCAN for short, an organization that helps ensure composers, authors, and music publishers are correctly compensated for their work. &lt;/p&gt;

&lt;p&gt;Many of the hackers utilized the&lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt; Dolby.io&lt;/a&gt;&lt;a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide" rel="noopener noreferrer"&gt; Analyze Media API&lt;/a&gt; to help detect loudness and music within an audio file and timestamp exactly where music is included. In this guide, we will highlight how you can build your own tool for analyzing music content in media, just like the SOCAN hackathon participants.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what is the Analyze Media API?
&lt;/h2&gt;

&lt;p&gt;Before we explain how hackers used the API, we need to cover what Analyze Media is and what it does. The &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; &lt;a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide" rel="noopener noreferrer"&gt;Analyze Media API&lt;/a&gt; generates insights from the underlying audio signal, producing data such as loudness, content classification, noise, and musical instrument or genre classification. This makes the API useful for detecting which sections of a media file contain music, as well as some qualities of the music in those sections.&lt;/p&gt;

&lt;p&gt;The Analyze Media API adheres to the Representational State Transfer (REST) architectural style, meaning it is language-agnostic and can be built into any existing framework that includes tools for interacting with a server. This is useful because the API can adapt to the use case: in the Cue Sheet example, many teams wanted to build a web application, as that was what was most accessible to the SOCAN community, and hence relied heavily on HTML, CSS, and JavaScript to build out the tool.&lt;/p&gt;

&lt;p&gt;In this guide, we will highlight how the participants implemented the API and why it proved useful for video media. If you want to follow along, you can sign up for a free &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; account, which includes plenty of trial credits for experimenting with the Analyze Media API.&lt;/p&gt;

&lt;h2&gt;
  
  
  A QuickStart with the Analyze Media API in JavaScript
&lt;/h2&gt;

&lt;p&gt;There are four steps to using the Analyze Media API on media:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Store the media on the cloud.&lt;/li&gt;
&lt;li&gt; Start an Analyze Media job.&lt;/li&gt;
&lt;li&gt; Monitor the status of that job.&lt;/li&gt;
&lt;li&gt; Retrieve the result of a completed job.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first step, storing media on the cloud, depends on your use case for the Analyze Media API. If your media/video is already stored on the cloud (Azure, AWS, GCP) you can move on to &lt;em&gt;step 2&lt;/em&gt;. However, if your media file is stored locally, you will first have to upload it to a cloud environment. For this step, we upload the file to &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Media Cloud Storage using the local file and our &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Media API key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function uploadFile() {
    //Uploads the file to the Dolby.io server
    let fileType = YOUR_FILE_TYPE;
    let audioFile = YOUR_LOCAL_MEDIA_FILE;
    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;

    const options = {
        method: "POST",
        headers: {
            Accept: "application/json",
            "Content-Type": "application/json",
            "x-api-key": mAPIKey,
        },
        // url is where the file will be stored on the Dolby.io servers.
        body: JSON.stringify({ url: "dlb://file_input.".concat(fileType) }),
    };

    let resp = await fetch("https://api.dolby.com/media/input", options)
        .then((response) =&amp;gt; response.json())
        .catch((err) =&amp;gt; console.error(err));

    // PUT the raw file bytes to the pre-signed URL the API returned.
    var xhr = new XMLHttpRequest();
    xhr.open("PUT", resp.url, true);
    xhr.setRequestHeader("Content-Type", fileType);
    xhr.onload = () =&amp;gt; {
        if (xhr.status === 200) {
            console.log("File Upload Success");
        }
    };
    xhr.onerror = () =&amp;gt; {
        console.error("File upload failed");
    };
    xhr.send(audioFile);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this file upload, we have chosen to use &lt;a href="https://www.sitepoint.com/xmlhttprequest-vs-the-fetch-api-whats-best-for-ajax-in-2019/" rel="noopener noreferrer"&gt;&lt;code&gt;XMLHttpRequest&lt;/code&gt; for handling our client-side file upload&lt;/a&gt;, although libraries like &lt;code&gt;Axios&lt;/code&gt; are available. This was a deliberate choice: in our web app we add progress tracking and timeouts during the video upload, which &lt;code&gt;XMLHttpRequest&lt;/code&gt; makes straightforward.&lt;/p&gt;

&lt;p&gt;With our media file uploaded and stored on the cloud, we can start an Analyze Media API job using the location of our cloud-stored media file. If your file is stored with a cloud storage provider such as AWS, you can use the pre-signed URL for the file as the input. In this example, we use the file stored on &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Media Cloud Storage from &lt;em&gt;step 1&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function startJob() {
    //Starts an Analyze Media Job on the Dolby.io servers
    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;
    //fileLocation can either be a pre-signed URL to a cloud storage provider or the URL created in step 1.
    let fileLocation = YOUR_CLOUD_STORED_MEDIA_FILE;

    const options = {
        method: "POST",
        headers: {
            Accept: "application/json",
            "Content-Type": "application/json",
            "x-api-key": mAPIKey,
        },
        body: JSON.stringify({
            content: { silence: { threshold: -60, duration: 2 } },
            input: fileLocation,
            output: "dlb://file_output.json", //This is the location we'll grab the result from.
        }),
    };

    let resp = await fetch("https://api.dolby.com/media/analyze", options)
        .then((response) =&amp;gt; response.json())
        .catch((err) =&amp;gt; console.error(err));
    console.log(resp.job_id); //We can use this jobID to check the status of the job
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;startJob&lt;/code&gt; resolves we should see a &lt;code&gt;job_id&lt;/code&gt; returned.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"job_id":"b49955b4-9b64-4d8b-a4c6-2e3550472a33"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we've started an Analyze Media job, we need to wait for it to resolve. Depending on the size of the file, the job could take a few minutes to complete, and hence requires some kind of progress tracking. We can check the progress of the job using the &lt;code&gt;job_id&lt;/code&gt; created in &lt;em&gt;step 2&lt;/em&gt;, along with our Media API key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function checkJobStatus() {
    //Checks the status of the created job using the jobID
    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;
    let jobID = ANALYZE_JOB_ID; //This job ID is output in the previous step when a job is created.

    const options = {
        method: "GET",
        headers: { Accept: "application/json", "x-api-key": mAPIKey },
    };

    let result = await fetch("https://api.dolby.com/media/analyze?job_id=".concat(jobID), options)
        .then((response) =&amp;gt; response.json());
    console.log(result);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;checkJobStatus&lt;/code&gt; function may need to be run multiple times, depending on how long the Analyze Media job takes to resolve. Each time you query the status you should get a result whose progress value ranges from 0 to 100.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "path": "/media/analyze",
  "status": "Running",
  "progress": 42
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
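&lt;p&gt;Rather than re-running &lt;code&gt;checkJobStatus&lt;/code&gt; by hand, you can wrap the status check in a small polling loop. The sketch below is illustrative rather than part of the API: &lt;code&gt;pollUntilDone&lt;/code&gt; and &lt;code&gt;checkStatus&lt;/code&gt; are hypothetical names, and &lt;code&gt;checkStatus&lt;/code&gt; stands in for any async function that resolves to a status object like the one shown above.&lt;/p&gt;

```javascript
// Generic polling helper: repeatedly calls an async status-check function
// until the job leaves the "Running"/"Pending" states, waiting intervalMs
// between checks. checkStatus is any async function resolving to an object
// shaped like { status: "Running", progress: 42 }.
async function pollUntilDone(checkStatus, intervalMs) {
    for (;;) {
        const result = await checkStatus();
        if (result.status === "Running" || result.status === "Pending") {
            // Still in flight; wait before asking again.
            await new Promise((resolve) => setTimeout(resolve, intervalMs));
        } else {
            return result; // e.g. "Success" or "Failed"
        }
    }
}
```

&lt;p&gt;With a version of &lt;code&gt;checkJobStatus&lt;/code&gt; that returns the parsed response instead of logging it, this could be called as &lt;code&gt;pollUntilDone(checkJobStatus, 5000)&lt;/code&gt;.&lt;/p&gt;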



&lt;p&gt;Once we know the job is complete, we can download the resulting JSON, which contains all the data and insight generated for the input media.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function getResults() {
    //Gets and displays the results of the Analyze job
    let mAPIKey = YOUR_DOLBYIO_MEDIA_API_KEY;

    const options = {
        method: "GET",
        headers: { Accept: "application/octet-stream", "x-api-key": mAPIKey },
    };

    //Fetch from the output.json URL we specified in step 2.
    let json_results = await fetch("https://api.dolby.com/media/output?url=dlb://file_output.json", options)
        .then((response) =&amp;gt; response.json())
        .catch((err) =&amp;gt; console.error(err));

    console.log(json_results);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The resulting output JSON includes music data broken down by section. Each section contains an assortment of useful data points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Start (seconds): The starting point of the section.&lt;/li&gt;
&lt;li&gt;  Duration (seconds): The duration of the section.&lt;/li&gt;
&lt;li&gt;  Loudness (decibels): The average loudness of the section.&lt;/li&gt;
&lt;li&gt;  Beats per minute (bpm): The number of beats per minute, an indicator of tempo.&lt;/li&gt;
&lt;li&gt;  Key: The pitch/scale of the music section, along with a confidence score of 0.0-1.0.&lt;/li&gt;
&lt;li&gt;  Genre: The distribution of genres, including confidence scores of 0.0-1.0.&lt;/li&gt;
&lt;li&gt;  Instrument: The distribution of instruments, including confidence scores of 0.0-1.0.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Depending on the complexity of the media file, there can be hundreds of music sections.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"music": {
    "percentage": 34.79,
    "num_sections": 35,
    "sections": [
        {
            "section_id": "mu_1",
            "start": 0.0,
            "duration": 13.44,
            "loudness": -16.56,
            "bpm": 222.22,
            "key": [
                [
                    "Ab major",
                    0.72
                ]
            ],
            "genre": [
                [
                    "hip-hop",
                    0.17
                ],
                [
                    "rock",
                    0.15
                ],
                [
                    "punk",
                    0.13
                ]
            ],
            "instrument": [
                [
                    "vocals",
                    0.17
                ],
                [
                    "guitar",
                    0.2
                ],
                [
                    "drums",
                    0.05
                ],
                [
                    "piano",
                    0.04
                ]
            ]
        },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet of the output only shows the results relevant to the Cue Sheet use case; the API generates even more data, including audio defects, loudness, and content classification. I recommend reading &lt;a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide" rel="noopener noreferrer"&gt;this guide&lt;/a&gt;, which explains the content of the output JSON in depth.&lt;/p&gt;

&lt;p&gt;With the final step resolved, we have successfully used the Analyze Media API and gained insight into the content of the media file. In the context of the Cue Sheet Palooza Hackathon, the participants were mainly interested in the loudness and music content of the media, and hence filtered the JSON down to just the music data, similar to the example output.&lt;/p&gt;
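&lt;p&gt;That filtering step is straightforward to sketch. The helper below is illustrative rather than part of the API (&lt;code&gt;toCueRows&lt;/code&gt; is a hypothetical name); it flattens each music section from the output JSON into a Cue-Sheet-style row of start time, end time, and most confident genre.&lt;/p&gt;

```javascript
// Flattens the "music" portion of an Analyze result into Cue-Sheet-style rows.
// Field names follow the example output shown above.
function toCueRows(analyzeResult) {
    return analyzeResult.music.sections.map((section) => ({
        start: section.start,
        end: section.start + section.duration,
        // Each genre entry is a [name, confidence] pair; keep the most confident.
        genre: section.genre.reduce((best, g) => (g[1] > best[1] ? g : best))[0],
    }));
}
```

&lt;p&gt;Applied to the example output above, the first row would start at 0.0 seconds, end at 13.44 seconds, and be labeled hip-hop.&lt;/p&gt;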

&lt;h2&gt;
  
  
  Building an app for creating Cue Sheets
&lt;/h2&gt;

&lt;p&gt;Of course, not every musician or composer knows how to program, so part of the hackathon was building a user interface for SOCAN members to interact with during the Cue Sheet creation process. The resulting apps used a variety of tools, including the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; API, to format the media content data into a formal Cue Sheet. These web apps took a variety of shapes and sizes, with different functionality and complexity. &lt;/p&gt;

&lt;p&gt;It's one thing to show how the Analyze Media API works, but it's another to highlight how it might be used in a production environment, such as for a Cue Sheet. &lt;a href="https://github.com/dolbyio-samples/blog-analyze-music-web" rel="noopener noreferrer"&gt;In this repo is an example&lt;/a&gt; I built using the Analyze Media API that takes a video and decomposes the signal to highlight which parts of the media contain music.&lt;br&gt;
Here is a picture of the user interface, which takes in your Media API key and the location of a locally stored media file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgck8128ofu34jxetb45h.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgck8128ofu34jxetb45h.PNG" alt="The starting screen of the Dolby.io Analyze API Music Data Web app found here:https://github.com/dolbyio-samples/blog-analyze-music-web. SOCAN, Loudness, Music, Video, Analyze Media API" width="800" height="414"&gt;&lt;/a&gt;&lt;br&gt;
The starting screen of the Dolby.io Analyze API Music Data Web app, found here:&lt;a href="https://github.com/dolbyio-samples/blog-analyze-music-web" rel="noopener noreferrer"&gt;https://github.com/dolbyio-samples/blog-analyze-music-web&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For showcasing the app I used a downloaded copy of a music review podcast where the host samples a range of songs across a variety of genres. The podcast includes 30 tracks, which play over 40% of its 50-minute runtime. If you want to try out the app with a song, you can use the public domain version of "Take Me Out to the Ball Game", originally recorded in 1908, &lt;a href="https://dolby.io/blog/using-music-mastering-on-take-me-out-to-the-ball-game/" rel="noopener noreferrer"&gt;which I used for another project relating to music mastering&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F848f3ja4yzugq5f7tiaz.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F848f3ja4yzugq5f7tiaz.PNG" alt="The Dolby.io Analyze API Music Data Web app after running the analysis on a 50-minute music podcast. SOCAN, Loudness, Music, Video, Analyze Media API" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
The Dolby.io Analyze API Music Data Web app after running the analysis on a 50-minute music podcast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/dolbyio-samples/blog-analyze-music-web" rel="noopener noreferrer"&gt;Feel free to clone the repo and play around with the app yourself.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At the end of the hackathon, participating teams were graded and awarded prizes based on how useful and accessible their Cue Sheet tool would be for SOCAN members. The sample app demoed above represents a very rudimentary version of what many of the hackers built and how they utilized the Analyze Media API. If you are interested in learning more about their projects, the winning team published a &lt;a href="https://github.com/rudolfolah/hackathon_cuesheets" rel="noopener noreferrer"&gt;GitHub repo with their winning entry&lt;/a&gt;, where you can see how they created a model to recognize music and how they used the Dolby.io Analyze Media API to supplement the Cue Sheet creation process.&lt;/p&gt;

&lt;p&gt;If the Dolby.io Analyze Media API is something you're interested in learning more about, check out our &lt;a href="https://docs.dolby.io/media-apis/docs/analyze-api-guide" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; or explore our other tools, including APIs for &lt;a href="https://docs.dolby.io/media-apis/docs/music-mastering-api-guide" rel="noopener noreferrer"&gt;Algorithmic Music Mastering&lt;/a&gt;, &lt;a href="https://docs.dolby.io/media-apis/docs/enhance-api-guide" rel="noopener noreferrer"&gt;Enhancing Audio&lt;/a&gt;, and &lt;a href="https://docs.dolby.io/media-apis/docs/transcode-api-guide" rel="noopener noreferrer"&gt;Transcoding Media&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>api</category>
    </item>
    <item>
      <title>Beginner’s Guide to Diagnosing Audio Issues as Part of an Azure Serverless Media Workflow</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Wed, 13 Apr 2022 15:14:48 +0000</pubDate>
      <link>https://dev.to/dolbyio/beginners-guide-to-diagnosing-audio-issues-as-part-of-an-azure-serverless-media-workflow-616</link>
      <guid>https://dev.to/dolbyio/beginners-guide-to-diagnosing-audio-issues-as-part-of-an-azure-serverless-media-workflow-616</guid>
      <description>&lt;p&gt;Transcribing media is a resource-intensive process that is dependent on the quality of the audio and background noise, meaning it can often produce inconsistent results as a product of the media quality. Depending on the scale of the media you are transcribing this process can be both computationally and monetarily expensive, being a costly endeavor especially if the results end up being inaccurate and noisy.  To alleviate some of the risks of transcribing audio that might be low quality, we can instead turn our transcription tool into a workflow that gauges the quality of the audio and only transcribes it if the audio is high enough quality to produce accurate results. Furthermore, because media data can vary dramatically in size we can develop this workflow on the cloud to ensure our processing requirements are dynamically adjusted, and avoid storing large quantities of data among other things.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part #1: Getting your environment ready for Serverless development.
&lt;/h3&gt;

&lt;p&gt;To get started we first need to set up our development environment. For users unfamiliar with Azure services it is worth noting that Azure has a pay-as-you-go policy with some resources costing more than others, so we recommend that you check out the &lt;a href="https://azure.microsoft.com/en-us/pricing/" rel="noopener noreferrer"&gt;pricing guide here&lt;/a&gt; before leveraging any of their services. Follow the steps below to get the appropriate credentials and environment set up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a free&lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt; Dolby.io account here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt; Additionally, you'll need to create a&lt;a href="https://azure.microsoft.com/en-us/free/search/?&amp;amp;ef_id=Cj0KCQjwwY-LBhD6ARIsACvT72MxXOuAXYXfdyORORpFI28IKUxat0liWudBjNn-kGScP9BDn0JdopQaAtUFEALw_wcB:G:s&amp;amp;OCID=AID2200277_SEM_Cj0KCQjwwY-LBhD6ARIsACvT72MxXOuAXYXfdyORORpFI28IKUxat0liWudBjNn-kGScP9BDn0JdopQaAtUFEALw_wcB:G:s&amp;amp;gclid=Cj0KCQjwwY-LBhD6ARIsACvT72MxXOuAXYXfdyORORpFI28IKUxat0liWudBjNn-kGScP9BDn0JdopQaAtUFEALw_wcB" rel="noopener noreferrer"&gt; free Azure account&lt;/a&gt;

&lt;ol&gt;
&lt;li&gt; With this free Azure account create a container and a storage blob for your media file&lt;a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal" rel="noopener noreferrer"&gt; here.&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; Also with your free Azure account create a cognitive services account&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows" rel="noopener noreferrer"&gt; here.&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt; To get your Azure environment set up,&lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-python" rel="noopener noreferrer"&gt; Microsoft has a handy guide for building your first project.&lt;/a&gt; Make sure you are using Python 3.9 and Azure Functions Core Tools 3.x.&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;With the environment correctly set up, you can now&lt;a href="https://github.com/dolbyio-samples/media-azure-serverless-workflow" rel="noopener noreferrer"&gt; clone our sample project from GitHub&lt;/a&gt;. The sample project, presented at the&lt;a href="https://channel9.msdn.com/Events/Azure-Serverless/Azure-Serverless-Conf/A-Media-Processing-Workflow-with-Azure-Serverless-with-Braden-Riggs-and-Jayson-DeLancey" rel="noopener noreferrer"&gt; 2021 Azure Serverless Conference&lt;/a&gt;, is a basic example of an asynchronous and fully serverless media to transcription workflow. To best understand how serverless media workflows work we will use this sample project as a template.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part #2: Why Azure Serverless?
&lt;/h3&gt;

&lt;p&gt;Before we dive into the workflow, let's briefly outline what Azure Serverless Functions are. &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/" rel="noopener noreferrer"&gt;Serverless functions&lt;/a&gt; are a service provided by Azure that allows event-triggered code to run without the need for personal servers or infrastructure. Instead, users can develop code, such as a media workflow, and let it run on the cloud in response to an event. These events are referred to as &lt;em&gt;triggers&lt;/em&gt;, which are the starting line for any serverless process. The most basic trigger to imagine is an &lt;em&gt;HTTP&lt;/em&gt; trigger, which kickstarts a serverless event when a user navigates to a specific URL from their local machine. &lt;/p&gt;

&lt;p&gt;Serverless functions are perfect for building a media workflow as they can be developed to function asynchronously, meaning multiple jobs can be run at once, with the associated costs scaling according to usage. This means you can build a workflow that you only use once a month or a thousand times a day, only paying for what you use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part #3: Understanding media workflows with Azure Serverless.
&lt;/h3&gt;

&lt;p&gt;Now that we understand the basics of Azure Serverless Functions let's take a look at the sample project we cloned from GitHub. What does this workflow look like?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0a5cb434mf0snhfdo0k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0a5cb434mf0snhfdo0k.png" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A depiction of the serverless media workflow available at &lt;a href="https://github.com/dolbyio-samples/media-azure-serverless-workflow" rel="noopener noreferrer"&gt;https://github.com/dolbyio-samples/media-azure-serverless-workflow&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As outlined above, our workflow must begin with a trigger; in this case, it is a basic &lt;em&gt;HTTP&lt;/em&gt; trigger. There are two main paths the workflow follows, dependent on the quality of the audio in the media provided. The workflow relies on three main tools to ensure that the media will produce a sufficiently accurate transcription. In the event that the media will not produce an accurate transcription, the audio is cleaned and returned to allow for manual review.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt;&lt;a href="https://docs.dolby.io/media-apis/docs/quick-start-to-diagnosing-media" rel="noopener noreferrer"&gt; Audio Diagnose&lt;/a&gt;: A lightweight audio analysis tool that can return information relating to the audio such as its quality or audio defects. &lt;/li&gt;
&lt;li&gt; &lt;a href="https://azure.microsoft.com/en-us/services/cognitive-services/speech-to-text/" rel="noopener noreferrer"&gt;Azure Cognitive Services Speech-to-Text&lt;/a&gt;: A transcription tool that converts spoken audio to text.&lt;/li&gt;
&lt;li&gt; &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt;&lt;a href="https://docs.dolby.io/media-apis/docs/quick-start-to-enhancing-media" rel="noopener noreferrer"&gt; Enhance&lt;/a&gt;: An audio enhancement tool that can help reduce noise and level speakers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Before we test out the sample project let's briefly discuss how we connect each tool together in a serverless fashion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part #4: Making things asynchronous.
&lt;/h3&gt;

&lt;p&gt;One of the challenges of developing a serverless media workflow is planning for asynchronous deployment, meaning developing in a way that allows multiple instances of the workflow to run without interfering with or slowing each other down. We already discussed triggers, which are great for starting the workflow, but what are some other useful tricks we can use to keep things running smoothly? One useful trick is including &lt;em&gt;callbacks&lt;/em&gt; in our API calls. Because we are using REST APIs, the actual processing is done on a different server. In the case of the &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; APIs, this processing is done on the &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; servers, whereas the Speech-to-Text processing is done on the Azure Cognitive Services server. When we send a job to these servers we can include a &lt;em&gt;callback&lt;/em&gt; parameter that signals where the API should send a request once the job is done. Since we are using &lt;em&gt;HTTP&lt;/em&gt; triggers, we can specify that the callback directs to the trigger. We don't want the workflow to begin again at the start, so when we call back to the &lt;em&gt;HTTP&lt;/em&gt; trigger we include a &lt;code&gt;job=&lt;/code&gt; tag in the request destination. An example of this process for diagnosis can be seen below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests                               #Python module for making http requests
url = "https://api.dolby.com/media/diagnose"  #Location to send the request

body = {"input" : presignedURL,
    'on_complete': {'url': httpTriggerURL + "?job=diagnose_success", "headers": ["x-job-id"]}
}

headers = {"x-api-key":API_KEY,"Content-Type": "application/json","Accept": "application/json", "x-job-id":"True"}
response = requests.post(url, json=body, headers=headers)
response.raise_for_status()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Included in the body of the call is an &lt;code&gt;on_complete&lt;/code&gt; tag, which includes the URL of the trigger plus a tag informing the function that its next step is evaluating the results of Diagnose. This allows the function to be freed up as soon as it submits a diagnose job, sitting idle until it has to start another diagnose job or until that job completes. Structured like this, the function is able to handle many requests sending media through the workflow. &lt;/p&gt;

&lt;p&gt;The second trick for keeping things asynchronous and efficient is not moving the media. We don't have to change where files are stored; rather, we keep them in Azure blob storage and pass directions to each of our tools so they can access the files where they live. This is done through pre-signed URLs, a useful concept that you can &lt;a href="https://dolby.io/blog/generating-pre-signed-urls-for-azure-cloud-storage-with-python/" rel="noopener noreferrer"&gt;read about in more detail here&lt;/a&gt;. Pre-signed URLs allow the appropriate credentials and access to be passed in one simple URL, which the &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; server or the Cognitive Services server can use to find the media. An example of creating this URL can be seen below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

input_sas_blob = generate_blob_sas(account_name= AZURE_ACCOUNT_NAME,
                                    container_name= AZURE_CONTAINER,
                                    blob_name= AZURE_BLOB_INPUT_NAME,
                                    account_key= AZURE_PRIMARY_KEY,

                                    #Since we aren't editing the file read access is sufficient
                                    permission=BlobSasPermissions(read=True),
                                    expiry=datetime.utcnow() + timedelta(hours=5)) #SAS will expire in 5 hours

input_url = 'https://'+AZURE_ACCOUNT_NAME+'.blob.core.windows.net/'+AZURE_CONTAINER+'/'+AZURE_BLOB_INPUT_NAME+'?'+input_sas_blob
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;generate_blob_sas&lt;/code&gt; function creates a Shared Access Signature, which, when formatted correctly into a URL, creates a direct path to the media. &lt;/p&gt;

&lt;p&gt;The three tools briefly mentioned earlier are each connected to create our workflow.&lt;/p&gt;

&lt;p&gt;We start with Diagnose, which uses the pre-signed URL to access the file stored on the cloud and evaluate its audio quality. It then returns, via a callback, a score out of 10, with 1 being low-quality audio and 10 being high-quality audio. If the score is at or above our threshold of 7 we transcribe the file; otherwise, we clean up the audio with Enhance for manual review. We settled on a threshold of 7 after some testing, as audio that scores above it usually performs best with Speech-to-Text.&lt;/p&gt;
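&lt;p&gt;The branching logic can be sketched in a few lines of Python. This is a minimal sketch: the function name is hypothetical, and the real workflow reacts to the Diagnose callback rather than a local value.&lt;/p&gt;

```python
#Hypothetical dispatcher for the diagnose step: route a file to transcription
#or enhancement based on the quality score returned by Dolby.io Diagnose.
QUALITY_THRESHOLD = 7  #Scores at or above this go straight to Speech-to-Text

def route_media(diagnose_score):
    """Decide the next step in the workflow for a diagnosed file."""
    if diagnose_score >= QUALITY_THRESHOLD:
        return "transcribe"  #Good enough for Azure Speech-to-Text
    return "enhance"         #Clean it up with Dolby.io Enhance for manual review

print(route_media(5.2))  #A low-scoring file is sent to Enhance; prints "enhance"
```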

&lt;p&gt;Now that we understand the workflow, and the pieces that connect to make it work, let's test out the sample project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Part #5: Testing out the workflow. 
&lt;/h3&gt;

&lt;p&gt;With the workflow set up, we can now test it in one of two ways. We recommend testing locally before deploying with Serverless, as local development lets you make simple and fast changes to your code without having to upload it to the Azure servers. Additionally, when developing locally you don't run the risk of exposing API keys. When deploying to the Azure cloud, make sure you review the &lt;a href="https://docs.dolby.io/media-apis/docs/authentication" rel="noopener noreferrer"&gt;Authentication Guide&lt;/a&gt; to be sure you are following best practices and protecting your keys.&lt;/p&gt;

&lt;p&gt;Locally:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; To test locally we need to create an HTTP tunnel to forward requests to localhost. There are many options available; for this example I used a free &lt;a href="https://ngrok.com/" rel="noopener noreferrer"&gt;ngrok&lt;/a&gt; account. In my case my local function was deployed on port 7071, so I initialized ngrok to forward to localhost:7071.&lt;/li&gt;
&lt;li&gt; Once you have launched your HTTP tunnel it's time to update the params.json file. Include the correct API keys along with the appropriate names for the Azure account and container you want to search for the file in. Additionally, you can adjust the score threshold and the output suffix. With the HTTP tunnel launched we also need to update the tunneling URL with the appropriate forwarding address.&lt;/li&gt;
&lt;li&gt; In Visual Studio Code, press F5 to launch the project and navigate to: &lt;a href="https://localhost:%22YOUR_SERVER%22/api/MediaProcessingWorkflow?input_file=%22YOUR_INPUT_FILE" rel="noopener noreferrer"&gt;https://localhost:"YOUR_SERVER"/api/MediaProcessingWorkflow?input_file="YOUR_INPUT_FILE&lt;/a&gt;"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can monitor your console in Visual Studio for output logs and your container on the Azure portal for the transcribed results.&lt;/p&gt;

&lt;p&gt;Serverless:&lt;br&gt;
Deploying the functions to an Azure Server is a great choice once the code has been fully debugged and tested, just remember to &lt;a href="https://docs.dolby.io/media-apis/docs/authentication" rel="noopener noreferrer"&gt;protect your API keys&lt;/a&gt; with the &lt;a href="https://docs.microsoft.com/en-us/azure/key-vault/secrets/" rel="noopener noreferrer"&gt;appropriate authentication strategy&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Adjust the params.json file, this time setting the tunneling parameter to https://"YOUR_FUNCTION_APP_NAME".azurewebsites.net/api/"YOUR_FUNCTION_NAME"&lt;/li&gt;
&lt;li&gt; Next deploy the function app through &lt;a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-develop-vs?tabs=in-process" rel="noopener noreferrer"&gt;the Azure extension&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt; Once successfully deployed you can test your function by navigating to: https://"YOUR_FUNCTION_APP_NAME".azurewebsites.net/api/"YOUR_FUNCTION_NAME"?input_file="YOUR_INPUT_FILE"&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Part #6: Building your own workflow.
&lt;/h3&gt;

&lt;p&gt;With the function successfully deployed, our transcription workflow tool is complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnuosloeb0engkas3ta3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnuosloeb0engkas3ta3.png" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A depiction of the serverless media workflow available at &lt;a href="https://github.com/dolbyio-samples/media-azure-serverless-workflow" rel="noopener noreferrer"&gt;https://github.com/dolbyio-samples/media-azure-serverless-workflow&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To review, our media follows a two-step workflow. From the Azure container, the file is diagnosed to return an audio quality score. If the audio scores below our quality threshold of 7 we opt not to transcribe it and instead enhance the audio. If the media scores at or above that threshold we pass the file on to the next stage for transcription. Once the transcription is complete the result is deposited back in the container. &lt;/p&gt;

&lt;p&gt;This example serves as an introduction to building a serverless media workflow on Azure. By using the tips described above it is possible to add any number of extra steps to the workflow, including APIs that check for profanity or APIs that translate foreign languages; the possibilities are endless. If you are interested in learning more about the workflow outlined above, check out the team's presentation at the 2021 Azure Serverless Conference where we go into more detail, otherwise feel free to explore some of our other great projects &lt;a href="https://github.com/dolbyio-samples" rel="noopener noreferrer"&gt;over at the Dolby.io Samples GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>python</category>
      <category>azure</category>
      <category>cloud</category>
      <category>media</category>
    </item>
    <item>
      <title>"Take Me Out to the Ball Game" Algorithmically Remastered in Python</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Sat, 09 Apr 2022 19:54:55 +0000</pubDate>
      <link>https://dev.to/dolbyio/take-me-out-to-the-ball-game-algorithmically-remastered-in-python-2o96</link>
      <guid>https://dev.to/dolbyio/take-me-out-to-the-ball-game-algorithmically-remastered-in-python-2o96</guid>
<description>&lt;p&gt;The 1908&lt;a href="https://en.wikipedia.org/wiki/Jack_Norworth" rel="noopener noreferrer"&gt; Jack Norworth&lt;/a&gt; and&lt;a href="https://en.wikipedia.org/wiki/Albert_Von_Tilzer" rel="noopener noreferrer"&gt; Albert Von Tilzer&lt;/a&gt; song "Take Me Out to the Ball Game" has been a staple across ballparks, becoming synonymous with America's favorite pastime. At over 114 years old, the original version of "Take Me Out to the Ball Game" was recorded on a &lt;a href="https://en.wikipedia.org/wiki/Phonograph_cylinder" rel="noopener noreferrer"&gt;two-minute Edison Wax Cylinder&lt;/a&gt; by singer and performer &lt;a href="https://en.wikipedia.org/wiki/Edward_Meeker" rel="noopener noreferrer"&gt;Edward Meeker&lt;/a&gt;, quickly becoming a beloved classic. With the baseball season getting underway this &lt;a href="https://www.mlb.com/news/mlb-revised-2022-regular-season-schedule" rel="noopener noreferrer"&gt;April 7th&lt;/a&gt;, we thought it about time to dust off the gloves, pick up our bats, step up to our Python environments, and get to work algorithmically remastering the classic anthem with the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Music Mastering API.&lt;/p&gt;

&lt;p&gt;Typically performed by audio engineers, mastering is a labor-intensive post-production process, usually applied as the last step in creating a song, that provides the final polish taking a track from good to great. Because "Take Me Out to the Ball Game" was recorded and produced in 1908, when mastering and post-production technology was very limited, it is interesting to explore the impact of applying a music mastering algorithm to the original recording, and the effect that has on the palatability of the track.&lt;/p&gt;

&lt;h3&gt;
  
  
  Picking a Version:
&lt;/h3&gt;

&lt;p&gt;Before we can get started remastering "Take Me Out to the Ball Game", we first need to pick a version of the song. Whilst we often hear the catchy tune played during the middle of the seventh inning, that version isn't the original and is subject to copyright protection. For this project, we will be using the 1908 version found &lt;a href="https://ia802605.us.archive.org/26/items/TakeMeOutToTheBallGame_243/TakeMeOuttotheBallGame_edmeeker.mp3" rel="noopener noreferrer"&gt;here&lt;/a&gt;, as it is now available in the &lt;a href="http://publicdomainaudiovideo.blogspot.com/2010/04/take-me-out-to-ball-game.html" rel="noopener noreferrer"&gt;public domain and free to use&lt;/a&gt;. Unfortunately, the highest-quality version of the 1908 song is stored as an MP3. Whilst this works with the API, Free Lossless Audio Codec (FLAC) or other lossless file types are preferred as they produce the best results during the mastering post-production process.  &lt;/p&gt;

&lt;h3&gt;
  
  
  The Music Mastering API:
&lt;/h3&gt;

&lt;p&gt;With our song in hand, it's time to introduce the tool that will be doing the majority of the heavy lifting. The &lt;a href="https://dolby.io/products/music-mastering/" rel="noopener noreferrer"&gt;Dolby.io Music Mastering API&lt;/a&gt; is a music enhancement tool that allows users to programmatically master files with a number of sound profiles specific to certain genres and styles. The API isn't free; however, the company offers trial credits when you sign up and additional trial credits if you add your credit card to the platform.&lt;/p&gt;

&lt;p&gt;For this project, the trial tier will be sufficient which is available if you &lt;a href="https://dashboard.dolby.io/signup" rel="noopener noreferrer"&gt;sign up here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have created an account and logged in, navigate over to the applications tab, select "my_first_app", and locate your Media APIs API Key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl5yuz6a0dlqi985fc0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl5yuz6a0dlqi985fc0j.png" alt="Dolby.io Dashboard. In this guide we will explore how to algorithmically remaster the classic baseball anthem " width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An example screenshot of the Dolby.io dashboard.&lt;/p&gt;

&lt;p&gt;It's important to note that all &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; media APIs adhere to the &lt;a href="https://www.redhat.com/en/topics/api/what-is-a-rest-api" rel="noopener noreferrer"&gt;REST framework&lt;/a&gt;, meaning they are language agnostic. For the purposes of this project I will be using the tool in Python, however, it works in any other language.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding it to the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Server
&lt;/h3&gt;

&lt;p&gt;To utilize the Music Mastering API we first need to store the MP3 file on the cloud. This can be done with either a cloud service provider such as &lt;a href="https://docs.dolby.io/media-apis/docs/aws-s3" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;, or you can use the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Media Storage platform. For simplicity, we will use the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; platform which can be accessed via a REST API call.&lt;/p&gt;

&lt;p&gt;To get started we need to import the Python "Requests" package and specify a path to the MP3 file on our local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests #Requests is useful for making HTTP requests and interacting with REST APIs
file_path = "Take-Me-Out-to-the-Ball-Game.mp3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to specify the URL we want the Requests package to interact with, specifically the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Media Input address. In addition to the input URL, we also need to format a header that will authenticate our request to the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; server with our API key.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url = "https://api.dolby.com/media/input"
headers = {
    "x-api-key": "YOUR DOLBY.IO MEDIA API KEY",
    "Content-Type": "application/json",
    "Accept": "application/json",
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Finally, we need to format a body that specifies the name we want to give our file once it is added to the server.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;body = {
    "url": "dlb://input-example.mp3",
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;With the URL, header, and body all formatted correctly, we can use the Requests package to create a pre-signed URL to which we can upload our MP3 file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = requests.post(url, json=body, headers=headers)
response.raise_for_status()
presigned_url = response.json()["url"]

print("Uploading {0} to {1}".format(file_path, presigned_url))
with open(file_path, "rb") as input_file:
    requests.put(presigned_url, data=input_file)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Starting a Mastering Job
&lt;/h3&gt;

&lt;p&gt;Once the audio file has been moved to the cloud we can start a mastering job. The Music Mastering API includes a number of predefined "profiles" that match up to a selection of audio genres such as Hip Hop or Rock. For the best results, a Rock song should be mastered with the Rock profile; however, picking a profile can require a bit of experimentation.&lt;/p&gt;

&lt;p&gt;Because matching creative intent with different sound profiles can take a few trials, the API offers a "preview version" where you can master a 30-second segment of a song with 3 different profiles. We format the body of this request to include this information as well as when we want the segment to begin.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;body = {
    "inputs": [
        {"source": "dlb://input-example.mp3", "segment": {"start": 36, "duration": 30}} #36 seconds is the start of the iconic chorus.
    ],
    "outputs": [
        {
            "destination": "dlb://example-master-preview-l.mp3",
            "master": {"dynamic_eq": {"preset": "l"}} #Lets master with the Vocal profile
        },
        {
            "destination": "dlb://example-master-preview-m.mp3",
            "master": {"dynamic_eq": {"preset": "m"}} #Lets master with the Folk profile
        },
        {
            "destination": "dlb://example-master-preview-n.mp3",
            "master": {"dynamic_eq": {"preset": "n"}} #Lets master with the Classical profile
        }

    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The header stays the same as the one we used to upload the file to the &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; server, and the URL changes to match the Music Mastering endpoint.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url = "https://api.dolby.com/media/master/preview"
headers = {
    "x-api-key": "YOUR DOLBY.IO MEDIA API KEY",
    "Content-Type": "application/json",
    "Accept": "application/json",
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can use the Requests package to deliver our profile selections and start the mastering job.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = requests.post(url, json=body, headers=headers)
response.raise_for_status()
print(response.json())
job_id = response.json()["job_id"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This process can take a minute to complete. To check the status of the job we can send another request to the same URL with the job_id included as a query parameter to check the progress of the master.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url = "https://api.dolby.com/media/master/preview"
headers = {
        "x-api-key": "YOUR DOLBY.IO MEDIA API KEY",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
params = {"job_id": job_id}
response = requests.get(url, params=params, headers=headers)
response.raise_for_status()
print(response.json())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The response from the request outputs the progress of the job between 0% and 100%.&lt;/p&gt;
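&lt;p&gt;Rather than re-running that status check by hand, the request can be wrapped in a simple polling loop. This is a sketch: it assumes the preview status response carries "status" and "progress" fields, in line with the other Dolby.io Media APIs.&lt;/p&gt;

```python
import time
import requests

def wait_for_master(job_id, api_key, poll_seconds=10):
    """Poll the preview endpoint until the mastering job reaches a terminal state."""
    url = "https://api.dolby.com/media/master/preview"
    headers = {
        "x-api-key": api_key,
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    while True:
        response = requests.get(url, params={"job_id": job_id}, headers=headers)
        response.raise_for_status()
        result = response.json()
        print(result.get("status"), result.get("progress"))
        #Assumed terminal statuses, matching the Dolby.io Enhance API
        if result.get("status") in ("Success", "Failed"):
            return result.get("status")
        time.sleep(poll_seconds)

#Usage, after starting the preview job:
#status = wait_for_master(job_id, "YOUR DOLBY.IO MEDIA API KEY")
```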
&lt;h3&gt;
  
  
  Downloading the Mastered File
&lt;/h3&gt;

&lt;p&gt;With our file mastered it's time to download the three master previews so we can hear the difference. The workflow for downloading files mirrors that of the other &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; APIs. Much like uploading a file or starting a job, we format a header with our API key and parameters that point to the mastering output on the Dolby.io server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os     #For creating the local output folder
import shutil #File operations package useful for downloading files from a server.

url = "https://api.dolby.com/media/output"
headers = {
        "x-api-key": "YOUR DOLBY.IO MEDIA API KEY",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }

os.makedirs("out", exist_ok=True) #Make sure the local "out" directory exists

for profile in ["l","m","n"]:

    output_path = "out/preview-" + profile + ".mp3"

    preview_url = "dlb://example-master-preview-" + profile + ".mp3"
    args = {"url": preview_url}

    with requests.get(url, params=args, headers=headers, stream=True) as response:
        response.raise_for_status()
        response.raw.decode_content = True
        print("Downloading from {0} into {1}".format(response.url, output_path))
        with open(output_path, "wb") as output_file:
            shutil.copyfileobj(response.raw, output_file)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16tvgbxj0tfmaiqsje86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16tvgbxj0tfmaiqsje86.png" alt="Music Mastering workflow. In this guide we will explore how to algorithmically remaster the classic baseball anthem " width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Summary of mastering job workflow from locating the file to downloading results.&lt;/p&gt;

&lt;p&gt;With the mastered files downloaded locally, we can listen to both and hear the difference between the original and one of our Masters. &lt;/p&gt;

&lt;p&gt;The original version of "Take Me Out to the Ball Game", sung by Edward Meeker in 1908.&lt;/p&gt;

&lt;p&gt;Mastered version of "Take Me Out to the Ball Game" with the Dolby.io Classical music profile (Profile n) focusing on wide dynamics, and warm full tones for orchestral instruments.&lt;/p&gt;

&lt;p&gt;We can also hear the subtle differences between the Masters.&lt;/p&gt;

&lt;p&gt;Mastered version of "Take Me Out to the Ball Game" with the Dolby.io Vocal music profile (Profile l) focusing on the mid-frequencies to highlight vocals.&lt;/p&gt;

&lt;p&gt;Mastered version of "Take Me Out to the Ball Game" with the Dolby.io Folk music profile (Profile m) focusing on light touch with ample mid-frequency clarity to let acoustic instruments shine in the mix.&lt;/p&gt;

&lt;p&gt;Mastered version of "Take Me Out to the Ball Game" with the Dolby.io Classical music profile (Profile n) focusing on wide dynamics, and warm full tones for orchestral instruments.&lt;/p&gt;

&lt;p&gt;For the purposes of this demo we only mastered with the last three profiles, however, there are &lt;a href="https://docs.dolby.io/media-apis/docs/music-mastering-api-guide" rel="noopener noreferrer"&gt;14 different music mastering profiles&lt;/a&gt; to pick from. From my testing I like the "Classical" profile (Profile n) the best, but everyone is different, so try it out for yourself. &lt;/p&gt;

&lt;h3&gt;
  
  
  A More Modern Example
&lt;/h3&gt;

&lt;p&gt;Whilst the classic still doesn't sound modern, remastering the track does make it a little clearer and hence more enjoyable to listen to. The &lt;a href="http://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Music Mastering API is typically built for contemporary samples recorded on more modern equipment in lossless formats such as FLAC, and is not designed to be an audio restoration tool. For the purposes of this investigation, we wanted to see the impact post-production mastering would have on the track rather than attempting to outright "fix" the original. &lt;/p&gt;

&lt;p&gt;Currently, the Dolby.io team has a &lt;a href="https://static.dolby.link/demos/music-mastering/index.html" rel="noopener noreferrer"&gt;demo hosted here&lt;/a&gt; that lets you listen to before and after examples of licensed contemporary tracks which better exemplifies the use case of the API. Because Dolby.io owns the licenses to those songs they are allowed to host the content, whereas for this project I wanted to pick a track in the public domain so anyone with an interest can test it out for themselves without fear of infringing on copyright law.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1ixo0qenmyyb2g7ylv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1ixo0qenmyyb2g7ylv9.png" alt="Offical Dolby.io Music Mastering Demo. Using Music Mastering on " width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Dolby.io Music Mastering demo, &lt;a href="https://static.dolby.link/demos/music-mastering/index.html" rel="noopener noreferrer"&gt;available here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If the Music Mastering API is something you are interested in further exploring check out the &lt;a href="https://docs.dolby.io/media-apis/docs/music-mastering-api-guide" rel="noopener noreferrer"&gt;dolby.io documentation&lt;/a&gt; around the API or the &lt;a href="https://static.dolby.link/demos/music-mastering/index.html" rel="noopener noreferrer"&gt;live demo mentioned above&lt;/a&gt;, otherwise let's get excited for an awesome Baseball season ahead and "&lt;em&gt;root, root, root for the home team"&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>python</category>
      <category>machinelearning</category>
      <category>tutorial</category>
      <category>audio</category>
    </item>
    <item>
      <title>Generating Pre-Signed URLs for Azure Cloud Storage with Python</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Thu, 07 Apr 2022 20:31:31 +0000</pubDate>
      <link>https://dev.to/dolbyio/generating-pre-signed-urls-for-azure-cloud-storage-with-python-1ilo</link>
      <guid>https://dev.to/dolbyio/generating-pre-signed-urls-for-azure-cloud-storage-with-python-1ilo</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Media files stored through Microsoft's Azure cloud platform can be easily integrated with Dolby.io to create a pipeline for media enhancement and analysis, allowing Azure users to enrich and understand their audio, all from the cloud. In this guide, we will explore how to integrate Dolby.io's Media Processing APIs with Azure Blob Storage to help users enhance their audio in a simple and scalable way.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Need to Get Started
&lt;/h2&gt;

&lt;p&gt;Before we get started there are five parameters you need to make sure you have ready for integration with Azure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Azure Account Name: The name of your Azure account.&lt;/li&gt;
&lt;li&gt; Azure Container Name: The name of the Container where your file is located.&lt;/li&gt;
&lt;li&gt; Azure Blob Name: The name of the Blob where your file is stored.&lt;/li&gt;
&lt;li&gt; Azure Primary Access Key: The Primary Access Key for your Azure storage account, found under Access Keys in the Azure portal.&lt;/li&gt;
&lt;li&gt; &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; API Key: The &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Media API key found on your Dolby.io dashboard.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These parameters help direct &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; to the appropriate location for finding your cloud-stored file, as well as handling necessary approval for accessing your privately stored media.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Python
&lt;/h2&gt;

&lt;p&gt;To begin building this integration we first need to install Azure.Storage.Blob v12.8.1. It is important to note that installing the Azure.Storage.Blob Python package differs from just installing the base Azure SDK, so make sure to specifically install Azure.Storage.Blob v12.8.1 as shown in the code below. Once installed we can import the Python datetime, requests, and time packages, along with generate_blob_sas and BlobSasPermissions functions from Azure.Storage.Blob.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!pip install azure.storage.blob==12.8.1

from datetime import datetime, timedelta #For setting the token validity duration
import requests                          #For using the Dolby.io REST API
import time                              #For tracking progress of our media processing job

#Relevant Azure tools
from azure.storage.blob import (
    generate_blob_sas,
    BlobSasPermissions
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next, we define our input parameters. We specify both an input and an output for our Blob files. The input represents the name of the stored file on the server and the output represents the name of the enhanced file that will be placed back onto the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# AZURE
AZURE_ACC_NAME = 'your-account-name'
AZURE_PRIMARY_KEY = 'your-account-key'
AZURE_CONTAINER = 'your-container-name'
AZURE_BLOB_INPUT='your-unenhanced-file'
AZURE_BLOB_OUTPUT='name-of-enhanced-output'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to define some &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; parameters including the server and the function applied to your files. In this case, we pick enhance and follow up by defining our &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; Media Processing API key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# DOLBY
server_url = "https://api.dolby.com"
url = server_url +"/media/enhance"
api_key = "your Dolby.io Media API Key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With all our variables defined, we can now create the Shared Access Signatures (SAS) that the &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; API will use to find the files. To do this we use the generate_blob_sas function in conjunction with the BlobSasPermissions function and a datetime expiry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;input_sas_blob = generate_blob_sas(account_name=AZURE_ACC_NAME,
                                container_name=AZURE_CONTAINER,
                                blob_name=AZURE_BLOB_INPUT,
                                account_key=AZURE_PRIMARY_KEY,
                                permission=BlobSasPermissions(read=True),
                                expiry=datetime.utcnow() + timedelta(hours=1))

output_sas_blob = generate_blob_sas(account_name=AZURE_ACC_NAME,
                                container_name=AZURE_CONTAINER,
                                blob_name=AZURE_BLOB_OUTPUT,
                                account_key=AZURE_PRIMARY_KEY,
                                permission=BlobSasPermissions(read=True, write=True, create=True),
                                expiry=datetime.utcnow() + timedelta(hours=1))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that in the code sample above we define both an input and an output SAS blob. For the input SAS we only need to give the signature the ability to read files; for the output SAS we need to give it the ability to create a file on the Azure server and then write to that file. We also specify how long we want our signatures to be valid. In this case the links are valid for one hour, however, for larger jobs we may need to increase this window of validity.&lt;/p&gt;

&lt;p&gt;With our SAS tokens created we now need to format the tokens into URLs. Again we need to create two URLs, one for the input and one for the output.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'https://'+AZURE_ACC_NAME+'.blob.core.windows.net/'+AZURE_CONTAINER+'/'+AZURE_BLOB_OUTPUT+'?'+output_sas_blob
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using both SAS URLs we plug the values into the &lt;a href="https://dolby.io/" rel="noopener noreferrer"&gt;Dolby.io&lt;/a&gt; API and initiate the media processing job. The unique identifier for the job is captured in the job_id parameter which we can use to track progress.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;body = {
  "input" : input_sas,
  "output" : output_sas
}

headers = {
  "x-api-key":api_key,
  "Content-Type": "application/json",
  "Accept": "application/json"
}

response = requests.post(url, json=body, headers=headers)
response.raise_for_status()
job_id = response.json()["job_id"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note how the "input" and "output" fields of the request body are assigned their corresponding SAS URLs.&lt;/p&gt;

&lt;p&gt;Our job has now begun. To track its progress, we can create a loop that polls the job status.&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while True:
    headers = {
      "x-api-key": api_key,
      "Content-Type": "application/json",
      "Accept": "application/json"
    }

    params = {"job_id": job_id}

    response = requests.get(url, params=params, headers=headers)
    response.raise_for_status()
    print(response.json())

    if response.json()["status"] == "Success" or response.json()["status"] == "Failed":
        break

    time.sleep(20)

print("response.json()["status"]")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the job has completed, the loop will exit and the enhanced file will be visible in Azure Blob Storage. Alternatively, instead of polling in a loop as in the example above, Dolby.io offers &lt;a href="https://docs.dolby.io/media-processing/docs/webhooks-and-callbacks" rel="noopener noreferrer"&gt;webhooks and callbacks&lt;/a&gt; which can be used to notify users of a job's completion.&lt;/p&gt;
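&lt;p&gt;When polling, it is also wise to bound the total wait so a stuck job cannot hang the script forever. Below is a minimal sketch of that refinement; the get_status callable and the poll cap are illustrative stand-ins, not part of the Dolby.io API:&lt;/p&gt;

```python
import time

def wait_for_job(get_status, poll_interval=20, max_polls=30):
    """Poll get_status() until it returns "Success" or "Failed", up to max_polls attempts."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("Success", "Failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish after %d polls" % max_polls)

# Example with a stubbed status function standing in for the API call
statuses = iter(["Running", "Running", "Success"])
print(wait_for_job(lambda: next(statuses), poll_interval=0))  # Success
```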

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, once we have created a valid Azure SAS, the rest of the process is simple, allowing the two services to work together seamlessly. If you are interested in learning more about integrating Azure with Dolby.io, or in exploring examples in other languages, &lt;a href="https://docs.dolby.io/media-processing/docs/azure-blob-storage" rel="noopener noreferrer"&gt;check out our documentation here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>python</category>
      <category>cloud</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Creating A Fixed Place Spatial Environment for Video Conferencing</title>
      <dc:creator>Braden Riggs</dc:creator>
      <pubDate>Thu, 07 Apr 2022 00:47:12 +0000</pubDate>
      <link>https://dev.to/dolbyio/creating-a-fixed-place-spatial-environment-for-video-conferencing-1623</link>
      <guid>https://dev.to/dolbyio/creating-a-fixed-place-spatial-environment-for-video-conferencing-1623</guid>
      <description>&lt;p&gt;Being in a virtual meeting where the audio comes at you can be a jarring and awkward experience as audio tracks overlap and speakers become indiscernible. Spatial audio, also known as 3D audio is a more natural approach to solving this problem. Traditionally non-spatial audio is subject to the walkie-talkie effect, where sound is flattened and output to the user via one or two channels of data in an experience that can feel like the audio is coming at you all at once. This limitation is corrected with spatial audio as sound instead comes from around you, helping create an environment where conversations blend more naturally like they would in the physical world. &lt;/p&gt;

&lt;p&gt;In this blog post, we'll show how you can set up a 2D virtual video conference where participants will speak from fixed spatial perspectives using Dolby.io Spatial Audio and the Dolby.io Web SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  Account Setup
&lt;/h2&gt;

&lt;p&gt;To get started with creating a static spatial audio scene you first need to have &lt;a href="https://dolby.io/signup" rel="noopener noreferrer"&gt;signed up for a free Dolby.io account&lt;/a&gt;. The free tier of Dolby.io awards you trial credit and doesn't require a credit card.&lt;/p&gt;

&lt;p&gt;Once signed up and logged in, scroll down to the "applications" section and create a new app titled "Static_spatial". After naming the application, your page will open to a list of API keys; take note of the &lt;strong&gt;Communications API Consumer Key and Consumer Secret&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dolby.io Web SDK
&lt;/h2&gt;

&lt;p&gt;With your account set up and your API keys on hand, we can get started with creating a static spatial video conference using the Dolby.io Web SDK. &lt;/p&gt;

&lt;p&gt;If you haven't worked with our Web SDK before I recommend first building a basic web application by &lt;a href="https://docs.dolby.io/communications-apis/docs/getting-started-with-the-javascript-sdk" rel="noopener noreferrer"&gt;following our Web SDK Getting Started guide&lt;/a&gt;. If you are already familiar with the guide &lt;strong&gt;we can start by&lt;/strong&gt; &lt;a href="https://github.com/dolbyio-samples/blog-spatial-audio-getting-started" rel="noopener noreferrer"&gt;&lt;strong&gt;cloning the Fixed Place Spatial Demo repository&lt;/strong&gt;&lt;/a&gt;. This project largely builds off of the &lt;a href="https://github.com/dolbyio-samples/comms-sdk-web-getting-started/tree/Fixed-Place-Spatial-Demo" rel="noopener noreferrer"&gt;original Web SDK getting started guide&lt;/a&gt; by adding user interface adjustments for creating a spatial experience. Whilst this blog will mainly focus on implementing a fixed spatial audio experience, all of the code is included in the repository in case you are interested in reviewing the interface changes as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with a Spatial Environment
&lt;/h2&gt;

&lt;p&gt;Once you have cloned the &lt;a href="https://github.com/dolbyio-samples/blog-spatial-audio-getting-started" rel="noopener noreferrer"&gt;Fixed Place Spatial Demo project&lt;/a&gt;, the next step to adding spatial to your app is enabling it in the creation of your conference. To do this we assign an alias that can be user-defined or static depending on the scope of your web app and enable &lt;a href="https://docs.dolby.io/communications-apis/docs/guides-dolby-voice" rel="noopener noreferrer"&gt;Dolby Voice&lt;/a&gt;, a tool that optimizes bandwidth utilization and suppresses background noise and audio defects in real-time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let conferenceOptions = {
    alias: "spatialTestConf", // Can be user defined
    params: {dolbyVoice: true}, //Required for spatial audio
};

VoxeetSDK.conference.create(conferenceOptions)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a &lt;a href="https://docs.dolby.io/communications-apis/docs/guides-dolby-voice" rel="noopener noreferrer"&gt;Dolby Voice enabled&lt;/a&gt; conference created, we now activate spatial audio. It is important to note that these API calls are &lt;a href="https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Asynchronous/Introducing" rel="noopener noreferrer"&gt;asynchronous operations&lt;/a&gt;: they return promises that can vary slightly in execution time, so the await operator or a .then() callback is required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Start conference with audio and video turned off
VoxeetSDK.conference.join(conference, {
                    constraints: { audio: false, video: false },
                    spatialAudio: true,
                })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Setting a Spatial Scene&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Part of integrating spatial audio into a web application is defining the spatial "scene", or rather how the audio renderer interprets what is "Forward" or what is "Right".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwdb6bskhyn6oprl5zd0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwdb6bskhyn6oprl5zd0.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We define these directions on an x, y, z axis, where a larger "x" means sound is heard further to the right, and a smaller "y" means participants nearer the top of the screen are heard more toward the front. In this case the third direction is irrelevant, as our conference is represented on a 2-dimensional plane; a three-dimensional project would also define an "up" axis as "z".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Negative Y axis is heard in the forwards direction, so tell the SDK forward has -1 Y
const forward = { x: 0, y: -1, z: 0 };

// Upwards axis is unimportant for this case since we never provide a Z position;
// we can set it to either Z = +1 or Z = -1
const up  = { x: 0, y: 0, z: 1 };

// Positive X axis is heard in the right-hand direction, so tell the SDK right has +1 X
const right  = { x: 1, y: 0, z: 0 };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition to directions, we also define a scale that mimics the physical world. In real life the hearing limit is the furthest distance at which you can hear someone. Whilst a variety of factors influence this limit in the real world, in the Dolby.io virtual world it is capped at 100 meters: a person further than 100 meters away wouldn't be heard. This raises a question, though: what is a meter in virtual space? We define this with the "scale" parameter, which converts user-defined units into meters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// scale for Z axis doesn't matter as we never provide a Z position, set it to 1
// We set the scale as 1:10, so as we move one unit in the virtual world, our hearing changes as // if we have moved 10 meters in the physical world.
 const scale = {
                    x: 0.1,
                    y: 0.1,
                    z: 1,
                };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the purposes of our virtual conference, we want the scale to be defined by a 1:10 ratio, meaning that a guest who is assigned an "x" position 5 units greater than you would sound 50 meters further away in the "x" direction.&lt;/p&gt;
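&lt;p&gt;To make the arithmetic concrete, here is a minimal sketch (in Python, purely for illustration; the app itself is JavaScript) of how a 1:10 scale turns grid-unit offsets into perceived distance in meters:&lt;/p&gt;

```python
import math

# A scale of 0.1 means 1 virtual unit corresponds to 10 meters (1 / 0.1)
SCALE = {"x": 0.1, "y": 0.1}

def perceived_distance_m(pos_a, pos_b):
    """Distance in meters between two grid positions under the spatial scale."""
    dx = (pos_a["x"] - pos_b["x"]) / SCALE["x"]
    dy = (pos_a["y"] - pos_b["y"]) / SCALE["y"]
    return math.hypot(dx, dy)

# A guest 5 units further along the x axis sounds 50 meters further away
print(perceived_distance_m({"x": 6, "y": 1}, {"x": 1, "y": 1}))  # 50.0
```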

&lt;h2&gt;
  
  
  Setting Spatial Position
&lt;/h2&gt;

&lt;p&gt;With the scale set and spatial audio enabled we now need to make sure everyone is assigned their spatial location as they join.&lt;/p&gt;

&lt;p&gt;How people's spatial location is selected will depend on the layout of your web app. In our example code, we use a 3x3 square grid, allowing a maximum of 9 participants to join the web conference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3drrcbsntftisxax11u9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3drrcbsntftisxax11u9.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To appropriately assign spatial location we must track who joined and when. One way to do this is to define a posList object composed of 9 arrays, each containing an undefined participant ID and a distinct position combination. With this list created, we assign spatial positions to attendees in left-to-right, top-to-bottom order as they arrive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let posList = [
    [undefined, 1, 1],
    [undefined, 2, 1],
    [undefined, 3, 1],
    [undefined, 1, 2],
    [undefined, 2, 2],
    [undefined, 2, 3],
    [undefined, 3, 1],
    [undefined, 3, 2],
    [undefined, 3, 3],
];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a variety of ways to associate a particular participant with a spatial location. For example, our &lt;a href="https://docs.dolby.io/communications-apis/docs/guides-integrating-spatial-audio" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; maps the audio locations to the participant's center video. In our case, we take a different approach: a for loop iterates over an array that records each participant's ID and assigns the corresponding spatial position according to join order. For example, the 1st person would be assigned the array [personOneID, 1, 1], which corresponds to the first square along the top row, and would sound 10 meters away in the x-direction to someone assigned the second spatial position [personTwoID, 2, 1].&lt;/p&gt;
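&lt;p&gt;The join-order bookkeeping can be sketched in a few lines (shown here in Python for illustration; the repository implements the same idea in JavaScript):&lt;/p&gt;

```python
# Each slot holds [participant_id, x, y]; None means the slot is free.
# The grid is filled left to right, top to bottom.
pos_list = [[None, x, y] for y in (1, 2, 3) for x in (1, 2, 3)]

def set_spatial_position(participant_id):
    """Assign the participant the first free slot (or return their existing one)."""
    for slot in pos_list:
        if slot[0] is None or slot[0] == participant_id:
            slot[0] = participant_id
            return {"x": slot[1], "y": slot[2], "z": 0}
    return {"x": 0, "y": 0, "z": 0}  # grid full: fall back to the origin

print(set_spatial_position("alice"))  # first joiner gets the top-left cell (1, 1)
print(set_spatial_position("bob"))    # second joiner gets (2, 1)
```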

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx56mt5ebaggty7l1nc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx56mt5ebaggty7l1nc7.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To apply the assignment we define a setSpatialPosition function. It takes a newly joined participant and assigns them the next available cell: the first person gets the top-left square, the second person the top-center square, the third person the top-right square, and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
//Function for altering spatial positions as guests join
const setSpatialPosition = (participant) =&amp;gt; {
    let spatialPosition = { x: 0, y: 0, z: 0 }; //default spatial position

    //loop over posList
    for (let i = 0; i &amp;lt; posList.length; i++) {
        //If posList[i] has no assigned participantID, assign one
        if (!posList[i][0] || participant.id == posList[i][0]) {
            posList[i][0] = participant.id;

            //Assigned spatial position based on join order
            spatialPosition = {
                x: posList[i][1],
                y: posList[i][2],
                z: 0, // Only 2d so "z" is never changed
            };

            break;
        }
    }
    VoxeetSDK.conference.setSpatialPosition(participant, spatialPosition);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the sample code provided we included a banner that displays the spatial position of the user in terms of "x, y, and z".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vkdldfbiu65l8mdikfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7vkdldfbiu65l8mdikfm.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it out yourself
&lt;/h2&gt;

&lt;p&gt;Now that we have the theory out of the way, we can boot up the sample app and try it out. The first step to getting the spatial app up and running is to update the last two rows of the "scripts/client.js" file with your &lt;strong&gt;Communications API Consumer Key and Consumer Secret&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//Update scripts/client.js with your API Keys
main(
    "Insert your Communications APIs Consumer Key here",
    "Insert your Communications APIs Consumer Secret here"
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, simply open the file "index.html" in your web browser and start playing with the application.&lt;/p&gt;

&lt;p&gt;It is important to note that hard coding your API keys into the client.js file is only for testing and &lt;strong&gt;should not be used in production, as the keys are not secure and could be stolen&lt;/strong&gt;. Instead, we recommend using a token to initialize the SDK. For more information, see Initializing or &lt;a href="https://docs.dolby.io/communications-apis/docs/guides-security-best-practices" rel="noopener noreferrer"&gt;learn about security best practices&lt;/a&gt;.&lt;/p&gt;
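&lt;p&gt;As an illustration of the server-side half of that recommendation: token endpoints of this kind commonly accept the key and secret via HTTP Basic authentication. The sketch below only shows how such an Authorization header is built; the actual token endpoint and response shape are outside the scope of this post (see the security guide):&lt;/p&gt;

```python
import base64

def basic_auth_header(consumer_key, consumer_secret):
    """Build an HTTP Basic Authorization header from a key/secret pair."""
    credentials = f"{consumer_key}:{consumer_secret}".encode("utf-8")
    return {"Authorization": "Basic " + base64.b64encode(credentials).decode("ascii")}

# Placeholder credentials for illustration only
print(basic_auth_header("key", "secret"))
# {'Authorization': 'Basic a2V5OnNlY3JldA=='}
```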

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Spatial audio opens the door to a range of possibilities when building your web conferencing app, such as virtual events, meeting spaces, and collaboration tools. For this blog we kept it simple with a fixed place example, however, the tools work just as well for building a dynamically updating web app that adjusts spatial audio as the users move around in a 2D or 3D environment.&lt;/p&gt;

&lt;p&gt;Whatever your next spatial project is, the Dolby.io team is here to help. Connect with us &lt;a href="https://support.dolby.io/hc/en-au" rel="noopener noreferrer"&gt;here&lt;/a&gt; or check out a few of our helpful resources to dive deeper into the awesome world of communication and spatial audio:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dolby.io/blog/enabling-spatial-audio-in-your-web-applications/" rel="noopener noreferrer"&gt;Enabling Spatial Audio in Your Web Applications&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.dolby.io/communications-apis/docs/guides-integrating-spatial-audio" rel="noopener noreferrer"&gt;The Dolby.io Documentation for Integrating Spatial Audio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.dolby.io/communications-apis/docs/js-reference" rel="noopener noreferrer"&gt;The Dolby.io Web SDK reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.dolby.io/communications-apis/docs/guides-security-best-practices" rel="noopener noreferrer"&gt;Best Security Practices When Handling API Keys&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>spatialaudio</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
