<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mux</title>
    <description>The latest articles on DEV Community by Mux (@mux).</description>
    <link>https://dev.to/mux</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1623%2F35a368d8-b4dd-4487-bae4-f3a023bf7ccc.png</url>
      <title>DEV Community: Mux</title>
      <link>https://dev.to/mux</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mux"/>
    <language>en</language>
    <item>
      <title>Kafka Connect: The Magic Behind Mux Data Realtime Exports</title>
      <dc:creator>Scott Kidder</dc:creator>
      <pubDate>Fri, 15 Jan 2021 23:36:38 +0000</pubDate>
      <link>https://dev.to/mux/kafka-connect-the-magic-behind-mux-data-realtime-exports-e7</link>
      <guid>https://dev.to/mux/kafka-connect-the-magic-behind-mux-data-realtime-exports-e7</guid>
      <description>&lt;p&gt;At &lt;a href="https://mux.com"&gt;Mux&lt;/a&gt; we’ve seen an increasing demand for access to the raw &amp;amp; enriched video QoS data processed by &lt;a href="https://mux.com/data"&gt;Mux Data&lt;/a&gt;. Moreover, it’s not sufficient for these exports to be available on a daily or even hourly basis; many customers want access to low-latency, real-time data streams with a latency of one minute or less, glass-to-glass. We’ve made this possible for Mux Data Enterprise customers through our real-time metric and event-stream exports.&lt;/p&gt;

&lt;p&gt;Mux Data customers are building their own impressive applications that take action on the realtime streams, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring the performance of CDNs and networks to enable CDN switching &amp;amp; troubleshooting&lt;/li&gt;
&lt;li&gt;Identifying popular videos as they become viral to increase promotion&lt;/li&gt;
&lt;li&gt;Joining Mux Data metrics with their own internal metrics to speed up troubleshooting and root-cause analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The number of uses is limited only by one’s imagination!&lt;/p&gt;

&lt;p&gt;However, this presents an obvious challenge for us in managing the numerous streams destined for customer streaming systems.&lt;/p&gt;

&lt;p&gt;Mux uses &lt;a href="https://kafka.apache.org/"&gt;Apache Kafka&lt;/a&gt; as our internal streaming platform. Mux customers might use &lt;a href="https://aws.amazon.com/kinesis/"&gt;AWS Kinesis&lt;/a&gt;, &lt;a href="https://cloud.google.com/pubsub"&gt;Google Cloud PubSub&lt;/a&gt;, Kafka, or some other streaming service. When faced with the problem of bridging data in our Kafka clusters to numerous external streaming services, we chose to “stand on the shoulders of giants” and use battle-tested open-source software to power the streaming export system as much as possible.&lt;/p&gt;

&lt;h2&gt;Enter, Kafka Connect&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://docs.confluent.io/platform/current/connect/index.html"&gt;Kafka Connect&lt;/a&gt; project is part of the Apache Kafka ecosystem. &lt;a href="https://docs.confluent.io/3.0.1/connect/intro.html#"&gt;To quote Confluent&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other data systems. It makes it simple to quickly define connectors that move large data sets into and out of Kafka.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Anyone looking to stream data between Kafka and other data systems should first look to Kafka Connect.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kafka Connect can run in either a standalone or distributed mode.&lt;/p&gt;

&lt;p&gt;Standalone mode runs on a single server, and the connector configuration is not fault-tolerant; it should be used only for testing or development.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jnKgM8OO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wl3x29bl6p00zak505qu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jnKgM8OO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/wl3x29bl6p00zak505qu.png" alt="2ccc7f6d4e0e24c6265460cf97688852ddcf396e-716x306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most production deployments of Kafka Connect run in a distributed mode. In this mode, a Kafka Connect connector (analogous to a job) can run across multiple Kafka Connect instances that make up a cluster, allowing for horizontal scalability &amp;amp; fault-tolerance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8Jth05vX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ujfuqhjrsxi91m21wua1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8Jth05vX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ujfuqhjrsxi91m21wua1.png" alt="f709284f0c580413126eb0dc66c0f05de4495fa1-746x593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kafka Connect connector configurations are stored in an Apache Kafka topic, ensuring durability. Connector configurations are managed using the &lt;a href="https://docs.confluent.io/platform/current/connect/references/restapi.html"&gt;Kafka Connect REST API&lt;/a&gt; which can be accessed via any of the Kafka Connect instances in the cluster. A connector configuration describes the source (e.g. Kafka cluster &amp;amp; topic), sink (e.g. external AWS Kinesis stream), and any transformations to be applied along the way.&lt;/p&gt;

&lt;p&gt;One of the best parts about Kafka Connect is that sources &amp;amp; sinks need only implement a single Kafka Connect Java interface, making them extremely easy to write. This lets your source or sink focus on reading from or writing to the remote system; it's possible to write one for just about any system imaginable! All other concerns, like offset management, consumer rebalancing, and error-handling, are handled by Kafka Connect itself. Simply package your source/sink code in a JAR, include it on the Kafka Connect classpath, and you’re ready to reference it from a connector.&lt;/p&gt;

&lt;p&gt;As an engineer working on streaming data systems, I’m certain I’ve written software equivalent to Kafka Connect several times during my career. It’s a tedious and error-prone process. Being able to delegate the majority of the work to a popular, well-supported open-source tool is a tremendous relief.&lt;/p&gt;

&lt;p&gt;Mux uses Kafka Connect to manage the realtime event-stream exports to external streaming services. In the 9 months that we’ve been using Kafka Connect in production, it’s been extremely reliable, scalable, and easy to extend &amp;amp; customize.&lt;/p&gt;

&lt;h2&gt;Getting Started with Kafka Connect&lt;/h2&gt;

&lt;p&gt;Let’s walk through the process of deploying a Kafka Connect cluster on Kubernetes.&lt;/p&gt;

&lt;h3&gt;Step 1: Build your Kafka Connect Docker Image&lt;/h3&gt;

&lt;p&gt;A lot of open-source software is designed to be run as-is, without modifications. I was surprised to learn that &lt;em&gt;this does not apply to Kafka Connect&lt;/em&gt;: expect to add your own connectors to it.&lt;/p&gt;

&lt;p&gt;If you’re deploying Kafka Connect as a &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt; container, then this involves adding image layers with source or sink JARs (Java Archives of compiled code) that provide the extra or customized functionality you need.&lt;/p&gt;

&lt;p&gt;In our case, we add sink connectors for AWS Kinesis and Google Cloud PubSub. We also add a &lt;a href="https://github.com/prometheus/jmx_exporter"&gt;Prometheus exporter&lt;/a&gt; JAR that scrapes the Kafka Connect JMX metrics and exposes them as Prometheus metrics.&lt;/p&gt;

&lt;p&gt;Here's a sample Dockerfile that adds these dependencies to the Confluent Kafka Connect base-image:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
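&lt;p&gt;A minimal sketch of such a Dockerfile might look like the following; the base-image tag, JAR filenames, paths, and exporter port are illustrative placeholders rather than our exact build:&lt;/p&gt;

```dockerfile
# Sketch only: image tag, JAR names, and paths are illustrative placeholders.
FROM confluentinc/cp-kafka-connect-base:6.0.1

# Sink connector JARs, compiled ahead of time and placed next to this Dockerfile.
# Anything on the Kafka Connect classpath can be referenced by a connector's
# connector.class setting.
COPY kinesis-kafka-connector.jar /usr/share/java/kafka-connect-kinesis/
COPY cps-kafka-connector.jar /usr/share/java/kafka-connect-pubsub/

# Prometheus JMX exporter, run as a Java agent so Kafka Connect's JMX metrics
# are re-exposed as Prometheus metrics on port 9404. (The KAFKA_OPTS hook is an
# assumption here; the exact mechanism depends on your base image.)
COPY jmx_prometheus_javaagent.jar /opt/jmx-exporter/
COPY jmx-exporter-config.yml /opt/jmx-exporter/
ENV KAFKA_OPTS="-javaagent:/opt/jmx-exporter/jmx_prometheus_javaagent.jar=9404:/opt/jmx-exporter/jmx-exporter-config.yml"
```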


&lt;p&gt;We compile the JARs, place them in the same directory as the Dockerfile shown above, and run &lt;code&gt;docker build&lt;/code&gt;. The resulting Docker image is pushed to a Docker image repository where it can be pulled and run.&lt;/p&gt;

&lt;h3&gt;Step 2: Start up your Kafka Connect Cluster&lt;/h3&gt;

&lt;p&gt;Nearly all of the infrastructure at Mux runs in Kubernetes clusters. Kafka Connect is no exception. The following Kubernetes manifest shows how you might run the Kafka Connect Docker image in a Kubernetes cluster:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
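&lt;p&gt;A pared-down Deployment along these lines (image URI, replica count, secret names, and Kafka addresses are all illustrative) might look like:&lt;/p&gt;

```yaml
# Illustrative sketch; all names, addresses, and secrets are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-connect
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka-connect
  template:
    metadata:
      labels:
        app: kafka-connect
    spec:
      imagePullSecrets:
        - name: my-registry-secret
      containers:
        - name: kafka-connect
          image: registry.example.com/kafka-connect:latest
          ports:
            - containerPort: 8083   # Kafka Connect REST API
          env:
            - name: CONNECT_BOOTSTRAP_SERVERS
              value: "kafka-0.kafka:9092"
            - name: CONNECT_GROUP_ID
              value: "connect-cluster"
            # Advertise the pod IP so workers can reach each other's REST API
            - name: CONNECT_REST_ADVERTISED_HOST_NAME
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # AWS credentials from a Kubernetes secret, picked up automatically
            # by the Kinesis sink
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef: { name: aws-credentials, key: access-key-id }
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef: { name: aws-credentials, key: secret-access-key }
            # The Pub/Sub sink reads service-account JSON from this file path
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /var/secrets/gcp/credentials.json
          volumeMounts:
            - name: gcp-credentials
              mountPath: /var/secrets/gcp
              readOnly: true
      volumes:
        - name: gcp-credentials
          secret:
            secretName: gcp-credentials
```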


&lt;p&gt;At a minimum, you’ll need to change the Docker &lt;code&gt;image&lt;/code&gt; URI to point at your new image, and change the &lt;code&gt;imagePullSecrets&lt;/code&gt; name if you’re storing the image in a private repository.&lt;/p&gt;

&lt;p&gt;Credentials for AWS and GCP are stored as Kubernetes secrets. The AWS credentials are set as environment variables on the container, which are automatically recognized and used by the AWS Kinesis sink for authentication. The GCP credentials are stored in a Kubernetes secret that is accessible from a file on the container filesystem; the &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; environment variable is used by the GCP PubSub sink to determine the location of the file containing application credentials.&lt;/p&gt;

&lt;p&gt;After deploying the Kafka Connect manifest to Kubernetes, you should see your Kafka Connect pod(s) running as part of a cluster. They’re not doing much at this point, but that’s fine!&lt;/p&gt;

&lt;h3&gt;Step 3: Run some Connectors!&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, connectors are administered through the Kafka Connect REST API, which listens on TCP port 8083 and can be accessed via any of the Kafka Connect pods in the cluster.&lt;/p&gt;

&lt;h4&gt;Example: GCP PubSub Sink&lt;/h4&gt;

&lt;p&gt;In this example, we’re reading from the Kafka topic &lt;code&gt;demo-event-stream-export&lt;/code&gt; and writing to the GCP PubSub topic &lt;code&gt;mux-demo-event-stream-export&lt;/code&gt; in the project &lt;code&gt;external-mux&lt;/code&gt; using the &lt;a href="https://github.com/GoogleCloudPlatform/pubsub/tree/master/kafka-connector"&gt;GCP PubSub sink implementation&lt;/a&gt;. The connector will run six Kafka Connect tasks; each task works as a Kafka consumer, so the number of tasks should not exceed the number of partitions on the source topic (e.g. &lt;code&gt;demo-event-stream-export&lt;/code&gt;). The connector sink is identified by the &lt;code&gt;connector.class&lt;/code&gt; setting, which references the GCP sink implementation.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
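&lt;p&gt;A sketch of such a connector config; the &lt;code&gt;connector.class&lt;/code&gt; and &lt;code&gt;cps.*&lt;/code&gt; key names follow the GCP PubSub sink’s documentation, and the connector name is an illustrative placeholder:&lt;/p&gt;

```json
{
  "name": "demo-event-stream-export-pubsub",
  "config": {
    "connector.class": "com.google.pubsub.kafka.sink.CloudPubSubSinkConnector",
    "tasks.max": "6",
    "topics": "demo-event-stream-export",
    "cps.project": "external-mux",
    "cps.topic": "mux-demo-event-stream-export"
  }
}
```

&lt;p&gt;You could submit this to any worker with, e.g., &lt;code&gt;curl -X POST -H "Content-Type: application/json" --data @pubsub-sink.json http://$POD_IP:8083/connectors&lt;/code&gt;.&lt;/p&gt;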


&lt;h4&gt;Example: AWS Kinesis Sink&lt;/h4&gt;

&lt;p&gt;This example shows a connector that writes to a Kinesis sink using the &lt;a href="https://github.com/awslabs/kinesis-kafka-connector"&gt;AWS Kinesis sink implementation&lt;/a&gt;. Again, it’s reading from the Kafka topic &lt;code&gt;demo-event-stream-export&lt;/code&gt;. There are Kafka Connect standard configuration keys (e.g. &lt;code&gt;name&lt;/code&gt;, &lt;code&gt;topics&lt;/code&gt;, &lt;code&gt;connector.class&lt;/code&gt;, &lt;code&gt;tasks.max&lt;/code&gt;) intermingled with the sink configuration keys (e.g. &lt;code&gt;streamName&lt;/code&gt;, &lt;code&gt;roleARN&lt;/code&gt;, &lt;code&gt;ttl&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This example shows how you can use IAM authentication and external IDs to assume a restricted role when accessing external resources, such as a Kinesis stream owned by another organization. It also shows how the &lt;code&gt;kinesisEndpoint&lt;/code&gt; key specifies an alternate endpoint, commonly a VPC endpoint, so that your Kinesis writes don’t egress through a NAT gateway, which can become quite costly.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
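&lt;p&gt;A sketch of such a connector config. The key spellings follow the awslabs connector’s documentation, every value (region, ARN, endpoint, stream name) is a placeholder, and the external-ID key name in particular is an assumption; double-check it against the connector version you deploy:&lt;/p&gt;

```json
{
  "name": "demo-event-stream-export-kinesis",
  "config": {
    "connector.class": "com.amazon.kinesis.kafka.AmazonKinesisSinkConnector",
    "tasks.max": "6",
    "topics": "demo-event-stream-export",
    "region": "us-east-1",
    "streamName": "demo-event-stream-export",
    "roleARN": "arn:aws:iam::123456789012:role/demo-export-writer",
    "roleExternalID": "demo-external-id",
    "ttl": "60000",
    "kinesisEndpoint": "vpce-0123-example.kinesis.us-east-1.vpce.amazonaws.com"
  }
}
```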


&lt;h2&gt;Lessons We’ve Learned&lt;/h2&gt;

&lt;h3&gt;Make sure &lt;code&gt;CONNECT_REST_ADVERTISED_HOST_NAME&lt;/code&gt; is set&lt;/h3&gt;

&lt;p&gt;The Kafka Connect pods communicate with each other via the REST API. This requires them to advertise themselves using a hostname or address that’s accessible by all of the other Kafka Connect pods.&lt;/p&gt;

&lt;p&gt;In Kubernetes deployments you should probably set the &lt;code&gt;CONNECT_REST_ADVERTISED_HOST_NAME&lt;/code&gt; environment variable to the pod IP. This is done in the Kubernetes deployment example above.&lt;/p&gt;

&lt;p&gt;Failure to set this correctly will yield errors any time you try to modify the connector configurations. Other Kafka Connect community &lt;a href="https://rmoff.net/2019/11/22/common-mistakes-made-when-configuring-multiple-kafka-connect-workers/"&gt;blog posts&lt;/a&gt; have addressed this specific issue; it’s a painful one.&lt;/p&gt;

&lt;h3&gt;Set the &lt;code&gt;errors.retry.timeout&lt;/code&gt; value on your sinks&lt;/h3&gt;

&lt;p&gt;Kafka Connect sinks can be configured to retry failed writes for a configurable duration with exponential backoff. However, the default behavior in Kafka Connect is to not retry writes at all: the worker task is immediately marked as failed. Once a task has failed, its source topic-partition claims are reassigned to other tasks, where the writes are retried. This can lead to an imbalance of load across Kafka Connect pods, and the only way to resolve it is by restarting the individual failed tasks.&lt;/p&gt;

&lt;p&gt;Here's an example of an imbalance of network &amp;amp; CPU load across Kafka Connect instances due to task failures. Nothing is wrong, per se, but the uneven load leads to inefficiency that could later result in lag:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TQT8MDkI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gifqhakl76ccuynrtgwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TQT8MDkI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gifqhakl76ccuynrtgwx.png" alt="imbalance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ideally, the load should be even across all Kafka Connect instances:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jz1gbvpT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mu74yrgaxg6kgt1gmy7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jz1gbvpT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/mu74yrgaxg6kgt1gmy7j.png" alt="balance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this reason, it’s highly recommended that you set the &lt;code&gt;errors.retry.timeout&lt;/code&gt; setting on each sink. In the case of AWS Kinesis sinks, it’s common to get spurious write-failures that immediately succeed upon retry. Setting &lt;code&gt;errors.retry.timeout&lt;/code&gt; ensures that you’ll get the benefit of a retry while keeping the workload balanced. You should also monitor task failures as an early indicator of load imbalances.&lt;/p&gt;
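&lt;p&gt;As a sketch, these are plain connector configuration keys; the values below are illustrative (&lt;code&gt;errors.retry.timeout&lt;/code&gt; is in milliseconds, and &lt;code&gt;-1&lt;/code&gt; retries indefinitely):&lt;/p&gt;

```json
{
  "errors.retry.timeout": "300000",
  "errors.retry.delay.max.ms": "30000"
}
```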

&lt;h3&gt;Autoscaling is tricky&lt;/h3&gt;

&lt;p&gt;We try to autoscale all of the services we run at Mux, adding or removing servers based on some observed set of metrics. For instance, when CPU or memory utilization rises above some predetermined threshold, the cluster should add enough servers to bring the load back down below the threshold.&lt;/p&gt;

&lt;p&gt;In our experience, Kafka Connect resource utilization has typically been constrained by CPU usage. Adding a server will cause a cluster to rebalance which, on its own, will temporarily drive up CPU utilization.&lt;/p&gt;

&lt;p&gt;If the autoscaling service (e.g. Kubernetes Horizontal Pod Autoscaler, or HPA) is not configured to support a cooldown or delay, then the addition of servers could cause the HPA to think that even more servers are needed to deal with the temporary spike in CPU utilization. This can lead to massive overprovisioning and thrashing in response to a moderate increase in load. In our experience, this is a general problem for Kafka consumers that are subject to stop-the-world Kafka consumer rebalances.&lt;/p&gt;

&lt;p&gt;One approach to this problem is to configure the autoscaler to use a stabilization-window during which no scaling decisions are made. &lt;a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior"&gt;Support for stabilization windows&lt;/a&gt; is present in the HPA in Kubernetes 1.18 or greater.&lt;/p&gt;

&lt;h3&gt;Monitoring&lt;/h3&gt;

&lt;p&gt;We use Grafana to visualize Prometheus metrics for all services at Mux. The JMX Exporter plugin that we bundle with Kafka Connect is responsible for exposing Kafka Connect JMX metrics for scraping with Prometheus. We’ve created a Grafana Dashboard to present significant metrics in a single-page dashboard.&lt;/p&gt;

&lt;p&gt;Some of the useful metrics are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU utilization&lt;/li&gt;
&lt;li&gt;Memory utilization&lt;/li&gt;
&lt;li&gt;Topic consumer lag&lt;/li&gt;
&lt;li&gt;Records consumed by connector&lt;/li&gt;
&lt;li&gt;Sink batch average put-time (e.g. time to write)&lt;/li&gt;
&lt;li&gt;Task errors&lt;/li&gt;
&lt;li&gt;Task failures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have alerts on the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU utilization&lt;/li&gt;
&lt;li&gt;Memory utilization&lt;/li&gt;
&lt;li&gt;Kafka consumer fetch-record lag&lt;/li&gt;
&lt;li&gt;Task failures&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Kafka Connect has been an extremely useful tool for distributing Mux Data streams to customer stream-processing systems. The simplicity of deployment and the broad array of community-supported open-source integrations is unmatched!&lt;/p&gt;

&lt;p&gt;If you’re a &lt;a href="https://mux.com/data"&gt;Mux Data&lt;/a&gt; Enterprise customer, we encourage you to contact us to set up a data stream that can be used to react to viewer activity and trends in realtime!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>apachekafka</category>
      <category>kafkaconnect</category>
      <category>kafka</category>
    </item>
    <item>
      <title>A Beginner's Guide to Video File Formats: MP4s</title>
      <dc:creator>Bonnie Pecevich</dc:creator>
      <pubDate>Tue, 12 Jan 2021 19:50:47 +0000</pubDate>
      <link>https://dev.to/mux/a-beginner-s-guide-to-video-file-formats-mp4s-53ai</link>
      <guid>https://dev.to/mux/a-beginner-s-guide-to-video-file-formats-mp4s-53ai</guid>
      <description>&lt;p&gt;Note: Originally posted by my colleague on the &lt;a href="https://mux.com/blog/a-beginners-guide-to-video-file-formats/"&gt;Mux blog&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;One of my first projects as an engineer at Mux was to add a bit of validation logic to stream.new to block uploads of videos with extremely long durations (1+ hours). &lt;a href="https://stream.new/"&gt;stream.new&lt;/a&gt; is a simple web application the team built to both serve as an example application for users and to help us dogfood our own product. The site allows users to upload an input video file to Mux, where it is processed and encoded, then made easily streamable and shareable (you can also record directly from your camera or screen -- &lt;a href="https://mux.com/blog/stream-new-add-a-video-get-a-sharable-link-to-stream-it/"&gt;check it out&lt;/a&gt;!) Adding a client-side duration check is a commonly requested feature: imagine building an application where users upload videos up to a minute long (think TikTok). It would be a huge bummer to spend a bunch of time uploading a file just to be told at the end that your video was too long!&lt;/p&gt;

&lt;p&gt;As an &lt;a href="https://mux.com/blog/a-quick-intro-to-video-from-somebody-who-knows-nothing-about-video/"&gt;uninitiated beginner&lt;/a&gt; in the world of video, I did some preliminary research into the canonical method of determining a video file’s duration. After skimming through some disappointingly sparse search results, I posed the “simple” question in the team Slack channel. I then proceeded to do my best to follow a 45-minute conversation between six of my coworkers (with dozens of years of video experience between them) demonstrating just how non-trivial the problem actually is.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--udkLaTCp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/has2rz40zltov8j76vaq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--udkLaTCp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/has2rz40zltov8j76vaq.png" alt="0d8fda86ad6ea281a45d4a7e55095342af421069-400x355"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;As it turns out, video file formats are a bit more complex than I had previously thought. Fun! Accordingly, in the rest of this post I’ll be doing my best to provide a beginner’s introduction into how video file formats work -- no prior video experience required.&lt;/p&gt;

&lt;h2&gt;How the sausage gets made&lt;/h2&gt;

&lt;p&gt;Video files are generally serialized into files with a container format: multiple streams of data are encoded and multiplexed (or “muxed”, if you will) together in separate containers within a single file. These files hold multiple streams of data because the visual, audio, captions, etc. components are all stored separately and tied together by a shared playback timeline. Let’s discuss one of the most common and well-known file formats, MP4.&lt;/p&gt;

&lt;p&gt;Each container is called a box (or atom) and is a serialized array of bytes formatted with a prefix that specifies (1) the box type with a canonical four character label and (2) the serialized box’s length so a parser can know how far into the box to read. Boxes are hierarchical, meaning there can be multiple boxes within a box.&lt;/p&gt;
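&lt;p&gt;To make the length-prefix idea concrete, here’s a minimal, illustrative JavaScript sketch that walks the top-level boxes of an MP4 byte buffer; real parsers handle far more edge cases:&lt;/p&gt;

```javascript
// Minimal sketch: list top-level MP4 box headers from a Node Buffer.
// Each box starts with a 4-byte big-endian size and a 4-character type label.
function parseBoxes(buf) {
  const boxes = [];
  let offset = 0;
  while (offset + 8 <= buf.length) {
    let size = buf.readUInt32BE(offset);
    const type = buf.toString("ascii", offset + 4, offset + 8);
    if (size === 1) {
      // size == 1 means a 64-bit "largesize" follows the type field
      size = Number(buf.readBigUInt64BE(offset + 8));
    }
    boxes.push({ type, size });
    offset += size; // the size prefix lets us jump straight to the next box
  }
  return boxes;
}

// Synthetic sample: a 16-byte "ftyp" box followed by an 8-byte "free" box.
const ftyp = Buffer.alloc(16);
ftyp.writeUInt32BE(16, 0);
ftyp.write("ftyp", 4, "ascii");
ftyp.write("isom", 8, "ascii");
const free = Buffer.alloc(8);
free.writeUInt32BE(8, 0);
free.write("free", 4, "ascii");
const sample = Buffer.concat([ftyp, free]);
console.log(parseBoxes(sample));
// → [ { type: 'ftyp', size: 16 }, { type: 'free', size: 8 } ]
```

&lt;p&gt;Child boxes (say, the &lt;code&gt;trak&lt;/code&gt;s inside a &lt;code&gt;moov&lt;/code&gt;) can be parsed recursively the same way, since each size prefix tells you exactly where a box’s payload ends.&lt;/p&gt;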

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DxIEje_p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yzx5lhbym42djgj76wfl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DxIEje_p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yzx5lhbym42djgj76wfl.jpg" alt="6151919658b2eae82466acbf59aaf74c052dc3ff-943x1060"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So mp4s are just a collection of boxes within boxes… what do they all mean?&lt;/p&gt;

&lt;p&gt;By convention, the first box in an mp4 file is an &lt;code&gt;ftyp&lt;/code&gt; box. This contains some high-level metadata about the file, including the types of decoders that can be used on the rest of the file. “Decoders” are code that transforms serialized data into a signal that humans can understand, while encoders do the exact opposite; an encoder/decoder pair is collectively called a codec.&lt;/p&gt;

&lt;p&gt;After the &lt;code&gt;ftyp&lt;/code&gt; box usually comes a &lt;code&gt;moov&lt;/code&gt; box. This is where, in my opinion, things start getting more interesting. The &lt;code&gt;moov&lt;/code&gt; box generally contains a few &lt;code&gt;trak&lt;/code&gt; boxes within it which provide reference information for interpreting the encoded data streams. For example, in a normal MP4 there might be a &lt;code&gt;trak&lt;/code&gt; for video (visual) and a &lt;code&gt;trak&lt;/code&gt; for audio. In addition to describing more details about the appropriate decoders for each stream, the &lt;code&gt;trak&lt;/code&gt; includes offset information into the rest of the file (basically serving like pointers to a video player) about where the encoded streams can be found. The actual encoded bitstreams are, in turn, contained within the following &lt;code&gt;mdat&lt;/code&gt; (Media Data) box! So to play back a video, the player would need to first load the &lt;code&gt;moov&lt;/code&gt; box to find the relevant offsets into the &lt;code&gt;mdat&lt;/code&gt; box for the audio and visual streams to start decoding them for a device’s physical output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bpdRRh7O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4b5dwh9wpr0xdxbhn4qi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bpdRRh7O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/4b5dwh9wpr0xdxbhn4qi.png" alt="mp4-vs-fmp4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Streaming Video&lt;/h2&gt;

&lt;p&gt;Things get fun when we start thinking about fragmented mp4s (fMP4s), which are used for streaming video. After all, if you’re serving a large video file you wouldn’t want to make viewers download the whole thing before they can start watching. Or, let’s say you want to seek to a specific location in the video (say, 11:42). If the video and audio streams were stored in two large, contiguous ranges of bytes then you would need to iterate through the whole thing to find your desired seek location. Instead, in a fMP4 file the &lt;code&gt;mdat&lt;/code&gt; box is segmented into pieces (conceptually, think of the video being chopped up every few seconds), and a &lt;code&gt;sidx&lt;/code&gt; box provides indexing information so a player knows which timestamps in the video correspond to which segmented &lt;code&gt;mdat&lt;/code&gt; boxes.&lt;/p&gt;

&lt;p&gt;Segmentation of the &lt;code&gt;mdat&lt;/code&gt; box also unlocks the benefit of &lt;strong&gt;adaptive bitrates&lt;/strong&gt;. Since all client bandwidths and user devices are not created equal, you might want to seamlessly deliver versions of the video with a higher or lower bitrate (think 240p, 480p, 1080p…) in different situations. With segmented video files, your player can detect bandwidth changes or slowdowns and switch bitrate accordingly. The original content is encoded into multiple versions with different bitrates when it is ingested, and the player can decide which one to iteratively download during playback. For example, if your user is watching something on a mobile device and they walk out of range of wifi, then you may want to switch from 1080p to 480p rather than subject the user to a poor viewing experience when their device begins to struggle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OQzVqk6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a33k19w3e7zvijo5qtna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OQzVqk6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/a33k19w3e7zvijo5qtna.png" alt="480vs1080"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Returning to the original problem at hand… so how do we find the duration of a video file? The short answer is that (at least for MP4s) you should be able to just inspect the header values stored in the file’s moov box. However, there is no guarantee that the header metadata is even accurate! Check out my co-worker Phil Cluff’s (aka “other Phil”) talk on this topic at Demuxed -- it turns out you can pretty easily munge/delete/edit boxes within the file even while [certain players] continue to play them normally! Therefore, to derive an authoritative answer for what the duration of a video is, one would actually need to parse and iterate through the entire contents of the file. (And that’s not even taking alternative formats into account!) That being said, in general, the header information should serve as a reasonable signal of the video’s length. For our use case of a simple preliminary check, we made do in stream.new with loading the file as a video element and checking its metadata -- a simple change of just a few lines of code.&lt;/p&gt;

&lt;p&gt;Hopefully this was a helpful introduction to the world of video formats! If you’re interested in learning more about how playback works, check out &lt;a href="https://howvideo.works/"&gt;howvideo.works&lt;/a&gt;. If you’re itching to just get started with streaming video, go to &lt;a href="https://mux.com/"&gt;mux.com&lt;/a&gt;. Thanks for reading!   &lt;/p&gt;

</description>
      <category>video</category>
      <category>fileformats</category>
      <category>mp4s</category>
    </item>
    <item>
      <title>The state of going live from a browser</title>
      <dc:creator>Matt McClure</dc:creator>
      <pubDate>Mon, 20 Apr 2020 16:30:35 +0000</pubDate>
      <link>https://dev.to/mux/the-state-of-going-live-from-a-browser-13m8</link>
      <guid>https://dev.to/mux/the-state-of-going-live-from-a-browser-13m8</guid>
      <description>&lt;p&gt;Publishing a live stream directly from a browser feels like it &lt;em&gt;must&lt;/em&gt; be one of those solved problems. Watching live video in a browser is so common these days it's hard to imagine a time when it required proprietary plugins to even have a chance of working. Even video &lt;em&gt;communication&lt;/em&gt; feels trivial now thanks to modern browser features like WebRTC. The "trivial" part is only really true if you're using two browser windows on the same machine, but still, it's you on video! Twice!&lt;/p&gt;

&lt;p&gt;So as a web developer looking at all this video successfully being sent and played back by the browser, it's totally reasonable to think that publishing a live broadcast directly from a browser would be easy. All the building blocks are here, there's surely an npm package that ties it all together for publishing to sources like Mux, Facebook, YouTube Live, Twitch, etc...&lt;/p&gt;

&lt;h2&gt;That's gonna be a no from browsers, dawg.&lt;/h2&gt;

&lt;p&gt;Unfortunately that's simply not the case. There's no reasonable way to publish a live broadcast directly from a browser. It's possible to capture the video and eventually get it there, but you're almost always going to need to get a server involved.&lt;/p&gt;

&lt;p&gt;One of the big reasons for this is that the industry standard for publishing live streams is &lt;a href="https://en.wikipedia.org/wiki/Real-Time_Messaging_Protocol"&gt;RTMP&lt;/a&gt;, which is a protocol browsers simply aren't able to natively speak. We've written about the options out there for &lt;a href="https://mux.com/blog/guide-to-rtmp-broadcast-apps-for-ios/"&gt;native mobile applications&lt;/a&gt;, and the desktop has fantastic, open tools like the &lt;a href="https://obsproject.com/"&gt;OBS project&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;Why go live from the browser?&lt;/h1&gt;

&lt;p&gt;One of the most common reasons is simply due to friction. If you're building a live streaming solution and you want your customers to be able to go live as easily as possible, asking them to leave your service to go figure out some other piece of desktop software is a big ask.&lt;/p&gt;

&lt;p&gt;On top of that, the tools out there for live streaming are complex in their own right. OBS Studio, for example, is an &lt;em&gt;incredibly&lt;/em&gt; powerful and flexible tool, but that comes with the cost of being a daunting piece of software for the unfamiliar. Even with guides and tools out there to help users get set up, you're now supporting not only your service, but whatever tools your streamers end up using.&lt;/p&gt;

&lt;p&gt;If you're already building a web app there's a good chance your team is good at...well building web apps. Building your go-live dashboard directly into your browser application would allow you to continue to utilize the expertise of your team, giving end-users a low-friction, branded experience that doesn't require them to learn anything but &lt;em&gt;your&lt;/em&gt; application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Before we go on...
&lt;/h2&gt;

&lt;p&gt;Yes, for all of the reasons just mentioned, it's easy to see why it's so tempting, but going live directly from the browser is almost certainly going to be a worse experience for everyone involved. The quality will be worse, the stream less reliable, and the tooling more limited. Your streamers and your viewers are all probably better off if the broadcast is done from a native application.&lt;/p&gt;

&lt;h1&gt;
  
  
  Ok cool, now let's talk about our options.
&lt;/h1&gt;

&lt;p&gt;We're going to talk about 3 high-level approaches to going live from the browser. By "going live," what we're specifically referring to is getting video from a streamer's browser to a broadcast endpoint via RTMP. Spoiler alert: all three of the approaches we're going to discuss are related, and two of them are essentially the same workflow with a twist. There are probably other options out there, but these are the closest to production-ready you'll find.&lt;/p&gt;

&lt;h2&gt;
  
  
  WebRTC rebroadcasting
&lt;/h2&gt;

&lt;p&gt;Most commonly, WebRTC is known as the technology that lets web developers build live video chat into the browser. That's true, but it actually goes much further than that. WebRTC is made up of standards that allow for peer-to-peer web applications that can transmit audio, video, or even just arbitrary data without the need for plug-ins or technically even servers[1].&lt;/p&gt;

&lt;p&gt;A quick aside: a fellow Muxologist, Nick Chadwick, &lt;a href="https://www.youtube.com/watch?v=ZlQfWs_XTvc"&gt;gave a talk&lt;/a&gt; on WebRTC → RTMP at AllThingsRTC in 2019. He goes much deeper into the underlying protocols in that talk than we do here, so if you're interested in the nitty-gritty details, that one's highly recommended.&lt;/p&gt;

&lt;p&gt;Given the well-documented path to video teleconferencing that WebRTC provides, the most common solution that people immediately gravitate towards is what's called "rebroadcasting." A server implements the WebRTC API to become a peer, then takes the video feed and publishes it via RTMP. &lt;/p&gt;

&lt;p&gt;This approach is, to put it simply, difficult. The good news is, that path has gotten a little easier in recent months, with projects like &lt;a href="https://pion.ly/"&gt;Pion&lt;/a&gt; maturing and higher level tools like &lt;code&gt;node-webrtc&lt;/code&gt; adding support for &lt;a href="https://github.com/node-webrtc/node-webrtc-examples/tree/master/examples/record-audio-video-stream"&gt;accessing actual video frames&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Broadcasting headless Chrome
&lt;/h2&gt;

&lt;p&gt;Nick also mentions this approach in his talk (and &lt;a href="https://github.com/muxinc/chromium_broadcast_demo"&gt;built an example&lt;/a&gt;), but another approach is to simply bypass server-side implementations altogether and use the one that's arguably the most battle-tested and has a wide selection of open-source tooling: Chrome. Yes, that one, the browser.&lt;/p&gt;

&lt;p&gt;Thanks to projects like &lt;a href="https://github.com/puppeteer/puppeteer"&gt;Puppeteer&lt;/a&gt;, the process of programmatically interacting with a headless Chrome instance is pretty straightforward. From there you can build a normal WebRTC experience and use &lt;code&gt;ffmpeg&lt;/code&gt; to broadcast whatever's in your headless Chrome instance via RTMP.&lt;/p&gt;
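&lt;p&gt;As a minimal sketch (not from the original post; the setup is illustrative, though the flags are real Chromium flags), the launch options you might hand to Puppeteer for a broadcast page could look like this:&lt;/p&gt;

```javascript
// Launch options for a headless-Chrome broadcaster. The flags auto-grant
// camera/mic access so no human has to click through permission prompts.
function chromeLaunchOptions() {
  return {
    headless: true,
    args: [
      '--use-fake-ui-for-media-stream',             // auto-accept getUserMedia prompts
      '--use-fake-device-for-media-stream',         // synthetic camera/mic (handy for testing)
      '--autoplay-policy=no-user-gesture-required', // let media play without a click
    ],
  };
}

// With Puppeteer, usage would look something like:
//   const browser = await puppeteer.launch(chromeLaunchOptions());
//   const page = await browser.newPage();
//   await page.goto('https://example.com/broadcast'); // your WebRTC page (hypothetical URL)
```

&lt;p&gt;From there, an &lt;code&gt;ffmpeg&lt;/code&gt; process capturing that instance's output handles the actual RTMP publish.&lt;/p&gt;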

&lt;p&gt;The &lt;em&gt;huge&lt;/em&gt; benefit of this approach is that it allows the developer to effectively build any experience in the user interface. Stream overlays, multiple speakers on a call, video effects, whatever you could build with canvas or the DOM would Just Work™ since it's...well, it's a browser. It's also not &lt;em&gt;that&lt;/em&gt; much additional work on top of building out normal, peer-to-peer chat for that reason.&lt;/p&gt;

&lt;p&gt;The downside of this approach is that you need to have a Chrome instance for every streamer. If you're just looking to stream yourself this isn't a huge issue, but if you're looking to support an arbitrary number of streamers this could become problematic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Video over WebSockets
&lt;/h2&gt;

&lt;p&gt;This one is the simplest and, in my opinion, the most fun to hack around on. Yes, as promised, this solution also uses at least one piece of the WebRTC toolchain, &lt;code&gt;getUserMedia()&lt;/code&gt; (the way you request access to the browser's mic and camera). However, once you have the media, instead of delivering the media via WebRTC's protocols, you use the &lt;code&gt;MediaRecorder&lt;/code&gt; API.&lt;/p&gt;

&lt;p&gt;This allows for similar flexibility to the headless Chrome example: you can render the user's camera to a canvas element and manipulate the video however you'd like there. The &lt;code&gt;MediaRecorder&lt;/code&gt; will fire an event every time it has a "chunk" of video data ready, at which point you send it to the server via the websocket as a binary blob. The server then listens for these data chunks and pipes them into a running &lt;code&gt;ffmpeg&lt;/code&gt; command as they're received.&lt;/p&gt;

&lt;p&gt;The benefit to this approach is that it's much closer to "traditional" applications in terms of running and scaling. You need a persistent WebSocket connection with each streamer, yes, but the requirements of each stream are actually pretty low since we've got &lt;code&gt;ffmpeg&lt;/code&gt; doing as little as possible before publishing the RTMP stream. In fact, this &lt;a href="https://github.com/mmcc/next-streamr"&gt;example application using Next.js&lt;/a&gt; runs just fine on a &lt;a href="https://mmcc-next-streamr.glitch.me/"&gt;Glitch server&lt;/a&gt;. Let's talk about how it works.&lt;/p&gt;


&lt;div class="glitch-embed-wrap"&gt;
  &lt;iframe src="https://glitch.com/embed/#!/embed/mmcc-next-streamr?path=index.html" alt="mmcc-next-streamr on glitch"&gt;&lt;/iframe&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  The Client
&lt;/h3&gt;

&lt;p&gt;For the example we used a &lt;a href="https://reactjs.org/"&gt;React&lt;/a&gt; framework called &lt;a href="https://nextjs.org/"&gt;Next.js&lt;/a&gt; with a custom Node.js server. &lt;/p&gt;

&lt;p&gt;Before the client can do anything, it needs to request access to the user's camera and microphone by calling &lt;code&gt;getUserMedia&lt;/code&gt; with the requested constraints. Calling this function will prompt the browser to ask the end-user if they'd like to share the requested resources.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// This would just ask for access to audio and video, but you can also specify
// what resolution you want from the video if you'd like.
const cameraStream = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true
});
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The call to &lt;code&gt;getUserMedia&lt;/code&gt; returns a promise, which (if the user agrees) will resolve with the camera stream. That camera stream can then be set as the &lt;code&gt;srcObject&lt;/code&gt; of a video tag, at which point you've got the webcam playing back in the browser window!&lt;/p&gt;

&lt;p&gt;From here, what we're doing in the demo is rendering that video stream to a canvas element using a very similar technique to what we described in our &lt;a href="https://mux.com/blog/canvas-adding-filters-and-more-to-video-using-just-a-browser/"&gt;blog post on manipulating video via the canvas element&lt;/a&gt;. Once we're copying the video over to the canvas element, we can capture that stream, and initialize a new &lt;code&gt;MediaRecorder&lt;/code&gt; instance.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mediaStream = canvasEl.captureStream(30); // 30 frames per second
const mediaRecorder = new MediaRecorder(mediaStream, {
  mimeType: 'video/webm',
  videoBitsPerSecond: 3000000
});
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The new &lt;code&gt;MediaRecorder&lt;/code&gt; instance will fire an event every time a blob is ready (&lt;code&gt;ondataavailable&lt;/code&gt;). We can listen for that event, and when we receive one, send the data blob right down an open WebSocket connection.&lt;/p&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Listen for the dataavailable event on our mediaRecorder instance&lt;br&gt;
mediaRecorder.addEventListener('dataavailable', e =&amp;gt; {&lt;br&gt;
  ws.send(e.data); // Then send the binary data down the website!&lt;br&gt;
}); &lt;br&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  The Server
&lt;/h3&gt;

&lt;p&gt;The server listens for incoming WebSocket connections, and when a new one is created it initializes a new &lt;code&gt;ffmpeg&lt;/code&gt; process that's streaming to the specified RTMP endpoint. Whenever a new chunk of video comes in via a message, the server pipes that received data to the &lt;code&gt;ffmpeg&lt;/code&gt; process, which in turn broadcasts it via RTMP.&lt;/p&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;webSocketServer.on('connection', (ws) =&amp;gt; {;&lt;br&gt;
  // When a new connection comes in, spawn a new &lt;code&gt;ffmpeg&lt;/code&gt; process&lt;br&gt;
  const ffmpeg = child_process.spawn('ffmpeg', [&lt;br&gt;
    // ... ffmpeg settings ...
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// final argument should be the output, 
// which in this case is our RTMP endpoint
`rtmps://global-live.mux.com/app/${STREAM_KEY}`,
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;]);&lt;/p&gt;

&lt;p&gt;// If our ffmpeg process goes away, end the WebSocket connection&lt;br&gt;
  ffmpeg.on('close', (code, signal) =&amp;gt; {&lt;br&gt;
    ws.terminate();&lt;br&gt;
  });&lt;/p&gt;

&lt;p&gt;ws.on('message', (msg) =&amp;gt; {&lt;br&gt;
    // If we're using this WebSocket for other messages, check &lt;br&gt;
    // and make sure before piping it to our ffmpeg process&lt;br&gt;
    if (Buffer.isBuffer(msg)) {&lt;br&gt;
      ffmpeg.stdin.write(msg);&lt;br&gt;
    }&lt;br&gt;
  });&lt;/p&gt;

&lt;p&gt;// If the WebSocket connection goes away, clean up the ffmpeg process&lt;br&gt;
  ws.on('close', (e) =&amp;gt; {&lt;br&gt;
    ffmpeg.kill('SIGINT');&lt;br&gt;
  });&lt;br&gt;
});&lt;br&gt;
&lt;/p&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
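&lt;p&gt;The snippet above intentionally elides the ffmpeg settings. As a rough illustration (these are not the demo's exact flags, just one plausible set), the argument list might read WebM from stdin, transcode to H.264/AAC, and publish FLV over RTMP:&lt;/p&gt;

```javascript
// One plausible ffmpeg argument list for this pipeline (illustrative, not
// the demo's exact flags): WebM chunks in on stdin, RTMP(S) out.
function ffmpegArgs(streamKey) {
  return [
    '-i', '-',               // read input from stdin (the WebSocket chunks)
    '-c:v', 'libx264',       // encode video as H.264
    '-preset', 'veryfast',
    '-tune', 'zerolatency',  // favor low latency over compression
    '-c:a', 'aac',           // encode audio as AAC
    '-ar', '44100',
    '-f', 'flv',             // RTMP expects an FLV container
    `rtmps://global-live.mux.com/app/${streamKey}`,
  ];
}
```

&lt;p&gt;That array is what you'd pass as the second argument to &lt;code&gt;child_process.spawn('ffmpeg', ...)&lt;/code&gt;.&lt;/p&gt;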
&lt;h3&gt;
  
  
  Profit! Kinda.
&lt;/h3&gt;

&lt;p&gt;It works! It's fun and fairly simple, with server and client together coming in at &amp;lt; 300 lines of code. It has the advantage of making it easy to interact with the outgoing stream, and it's quick and easy to hack on. You can give it a try now: just go remix the Glitch, specify your own Mux stream key, and try it out.&lt;/p&gt;

&lt;p&gt;However, there are huge drawbacks to the Javascript side of things. For example, modern browsers will de-prioritize the timers on a tab that isn't front-and-center, meaning if the streamer switches to a different tab, the streaming page won't send chunks of video fast enough and eventually the stream will stall. There are ways to ensure that doesn't happen, but most of them will require at least some participation from your streamer.&lt;/p&gt;

&lt;h1&gt;
  
  
  Let us help your users go live!
&lt;/h1&gt;

&lt;p&gt;Unless you have a lot of resources to devote to building out an application around going live from the browser, we suggest providing your users with other tried-and-true native options or pointing them toward one of the fantastic paid browser options. That being said, we're here to help! If you want help figuring out the best way to let users go live in your application, please reach out.&lt;/p&gt;

&lt;p&gt;[1]: Yes, in practice most applications would want a server for connection negotiation and more, but &lt;em&gt;technically&lt;/em&gt; a simple application could allow users to share the required details via another method.&lt;/p&gt;

</description>
      <category>react</category>
      <category>webrtc</category>
      <category>video</category>
    </item>
    <item>
      <title>Mux is the video API for the JAMstack</title>
      <dc:creator>Dylan Jhaveri</dc:creator>
      <pubDate>Wed, 08 Apr 2020 16:28:01 +0000</pubDate>
      <link>https://dev.to/mux/mux-is-the-video-api-for-the-jamstack-3po1</link>
      <guid>https://dev.to/mux/mux-is-the-video-api-for-the-jamstack-3po1</guid>
      <description>&lt;h1&gt;
  
  
  What is the JAMstack?
&lt;/h1&gt;

&lt;p&gt;The JAMstack is a term popularized in the last year, largely by the React community and companies like &lt;a href="https://www.netlify.com/" rel="noopener noreferrer"&gt;Netlify&lt;/a&gt; and &lt;a href="https://zeit.co/" rel="noopener noreferrer"&gt;Zeit&lt;/a&gt;. Specifically, JAMstack stands for "Javascript", "APIs" and "Markup". These terms don't exactly describe what the JAMstack is in a clear way, but the name itself has a nice ring to it so it seems to have stuck.&lt;/p&gt;

&lt;p&gt;Here is a breakdown of all the pieces for a "JAMstack" application and what some of the popular options are. For a more exhaustive list you might check out &lt;a href="https://github.com/automata/awesome-jamstack" rel="noopener noreferrer"&gt;awesome-jamstack on Github&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Static content frameworks
&lt;/h2&gt;

&lt;p&gt;This covers the "Javascript" and "Markup" part of the stack.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://nextjs.org/" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;: Open source, write everything with React and the framework gives you automatic code splitting and a server-side rendered web application.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.gatsbyjs.com/" rel="noopener noreferrer"&gt;Gatsby&lt;/a&gt;: Also open source and you write everything with React components. The Gatsby framework handles code splitting and lazy loading resources. Gatsby also has a concept of “sources” where you can write GraphQL queries to pull in data from 3rd party sources via a plugin.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.11ty.dev/" rel="noopener noreferrer"&gt;11ty&lt;/a&gt;: A static site generator that works with all kinds of templates: markdown, liquid templates, nunjucks, handlebars, mustache, ejs, haml, pug and Javascript template literals&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;These are platforms that can host your statically built application. With common JAMstack frameworks you end up with static files that can be hosted by a static file server and delivered over a CDN.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://zeit.co/" rel="noopener noreferrer"&gt;Zeit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.netlify.com/" rel="noopener noreferrer"&gt;Netlify&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://firebase.google.com/docs/hosting" rel="noopener noreferrer"&gt;Firebase hosting&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://surge.sh/" rel="noopener noreferrer"&gt;Surge.sh&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://render.com/" rel="noopener noreferrer"&gt;Render&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;AWS S3&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cloud Functions (“Serverless”)
&lt;/h2&gt;

&lt;p&gt;All of these services, in one way or another, allow you to write JavaScript code that handles an API request and returns a response. This, along with other 3rd-party APIs, is the "API" part of the stack. The serverless part is that you don’t have to worry about the details of how or where that code gets run. These platforms handle the server configuration and the deployment of your API endpoints as “cloud functions” or “lambdas”. In your client-side application, you make requests to these functions the same way you would make requests to API endpoints deployed on your own traditional server.&lt;/p&gt;
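&lt;p&gt;As a minimal sketch (using the Next.js-style API-route shape; the handler body and route name are illustrative), a cloud function is often just an exported function that receives a request and writes a response:&lt;/p&gt;

```javascript
// A minimal cloud-function sketch in the Next.js/Zeit API-route shape.
// The platform maps an incoming HTTP request to this function; you never
// configure or run the server yourself.
function handler(req, res) {
  const name = req.query.name || 'world';
  res.status(200).json({ greeting: `Hello, ${name}!` });
}
```

&lt;p&gt;On Next.js, for example, dropping a file like this into &lt;code&gt;pages/api/hello.js&lt;/code&gt; deploys it as an endpoint at &lt;code&gt;/api/hello&lt;/code&gt;.&lt;/p&gt;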

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://firebase.google.com/docs/functions/" rel="noopener noreferrer"&gt;Firebase Cloud Functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://workers.cloudflare.com/" rel="noopener noreferrer"&gt;Cloudflare Workers&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nextjs.org/docs/api-routes/introduction" rel="noopener noreferrer"&gt;Zeit API Routes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.netlify.com/functions/overview/" rel="noopener noreferrer"&gt;Netlify Functions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Headless CMS
&lt;/h2&gt;

&lt;p&gt;A “headless” CMS is a CMS that gives you and your team an interface to log in, edit content, add new content, upload assets, and “publish” the data that makes it into your website or application.&lt;/p&gt;

&lt;p&gt;There are many headless CMSes. We are a little biased, so these are the ones that work with Mux and the ones that we have used. Look around for what works for you. And if you have one that you want to use with Mux, let us know and we can build an integration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.sanity.io/" rel="noopener noreferrer"&gt;Sanity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.contentful.com/" rel="noopener noreferrer"&gt;Contentful&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.datocms.com/" rel="noopener noreferrer"&gt;Dato&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cosmicjs.com/" rel="noopener noreferrer"&gt;Cosmic JS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Authentication (advanced)
&lt;/h2&gt;

&lt;p&gt;If you’re building a static marketing site you probably will not need to deal with authentication. However, for a more advanced application you will need to have users login, reset passwords and do all the pieces of authentication.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://auth0.com/" rel="noopener noreferrer"&gt;Auth0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://firebase.google.com/docs/auth" rel="noopener noreferrer"&gt;Firebase auth&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.netlify.com/visitor-access/identity/" rel="noopener noreferrer"&gt;Netlify Identity&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Database (advanced)
&lt;/h2&gt;

&lt;p&gt;If you are authenticating users and dealing with logged in sessions, you probably need a database. These are commonly used for JAMstack applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://firebase.google.com/" rel="noopener noreferrer"&gt;Firebase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fauna.com/" rel="noopener noreferrer"&gt;FaunaDB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  How did we get here?
&lt;/h1&gt;

&lt;p&gt;Before these tools gained popularity the answer to “What stack should I use for my marketing site?” might have been “use Rails” and that is a clear answer. But now if someone says “use the JAMstack” well, that is a complicated answer. It’s a little misleading to call the “JAMstack” a specific stack, because as you can see from above, even if you decided to use the JAMstack, you still have a lot of choices to make.&lt;/p&gt;

&lt;p&gt;Before the JAMstack was popularized, we had a long history of static site generators. You may remember &lt;a href="https://jekyllrb.com/" rel="noopener noreferrer"&gt;Jekyll&lt;/a&gt; or &lt;a href="https://middlemanapp.com/" rel="noopener noreferrer"&gt;Middleman&lt;/a&gt; from the Ruby community. These tools allowed you to write Markdown, Liquid or Ruby’s ERB templates and generate a static site for something like a blog that you could host somewhere like S3. These tools are &lt;em&gt;great&lt;/em&gt; and they are still widely used.&lt;/p&gt;

&lt;p&gt;These static site generators were great for developers that wanted to make something like a blog or a simple marketing website. Someone non-technical might reach for a tool like Wordpress or Squarespace, whereas a hacker would turn to a static site generator.&lt;/p&gt;

&lt;p&gt;For more advanced applications that went beyond statically rendered HTML, we had to switch gears away from static site generators and into a web framework like Rails.&lt;/p&gt;

&lt;p&gt;Then advanced frontend frameworks for building interactive single page applications became popular: Angular, Ember and React. Suddenly, frontend developers had all these tools and got comfortable writing React code for their applications. But for static marketing sites we couldn’t write React or Angular code because we still needed static HTML for SEO purposes and fast initial load times. Developers were stuck in a world where we wrote what we were comfortable with for our application frontend, but for our marketing site had to switch back to some ad hoc, cobbled-together jQuery functions.&lt;/p&gt;

&lt;p&gt;The biggest feature that made the JAMstack popular is that you get the best of both worlds: server-side rendered HTML &lt;em&gt;plus&lt;/em&gt; interactive React components that you can do whatever you want with. This is the big innovation and the first “oh wow” moment I had using both Next.js and Gatsby. You write normal React like you’re used to, run the build process and then all of a sudden you end up with static HTML returned by the server and all your interactive React code works as you would expect.&lt;/p&gt;

&lt;h1&gt;
  
  
  Video for the JAMstack
&lt;/h1&gt;

&lt;p&gt;Mux is the video API for the JAMstack. The philosophy behind Mux and how we approach video fits in neatly with the JAMstack philosophy. Mux will act as your video infrastructure by handling the storage, hosting and delivery of your video without getting in the way or being opinionated about the presentation.&lt;/p&gt;

&lt;p&gt;In fact, Mux does not even give you a video player. You have to bring your own player to the party. The entire “frontend” of the video experience is up to you; Mux is focused on handling the backend, the “serverless” part of your video stack. Think of Mux as the headless video platform. You control every bit of the user experience while Mux does the heavy lifting behind the scenes.&lt;/p&gt;

&lt;h1&gt;
  
  
  JAMstack at Mux
&lt;/h1&gt;

&lt;p&gt;In addition to providing APIs that you can use for your JAMstack website, Mux also uses the JAMstack ourselves to power our marketing site (mux.com) and the Mux blog.&lt;/p&gt;

&lt;p&gt;A couple of months ago we finished the process of moving the Mux Blog to the JAMstack. Before this project, the Mux blog was hosted and deployed separately from mux.com. The blog was powered by an old version of Ghost, using the default Casper theme. Our marketing site is a Gatsby site that uses gatsby-source-filesystem to create some pages from markdown and gatsby-source-airtable to pull in some data from Airtable.&lt;/p&gt;

&lt;p&gt;The main issue with our existing blog that we wanted to address was that since we were using a Ghost theme, not only was the design of the blog completely different from the design of the rest of our marketing website, but it was an entirely different application with a different structure, hosting and deploy process.&lt;/p&gt;

&lt;p&gt;As a result, visitors that landed on a blog post didn’t have an easy way to get back to the main marketing site and since the look and feel didn’t exactly line up, the experience was too disconnected. We decided that we wanted to move everything to a headless CMS so that we could make the blog part of our existing Gatsby marketing site for consistency.&lt;/p&gt;

&lt;h1&gt;
  
  
  Migrating to a headless CMS
&lt;/h1&gt;

&lt;p&gt;There are pre-built Mux integrations for &lt;a href="https://www.sanity.io/" rel="noopener noreferrer"&gt;Sanity&lt;/a&gt;, &lt;a href="https://www.contentful.com/" rel="noopener noreferrer"&gt;Contentful&lt;/a&gt;, and &lt;a href="https://www.cosmicjs.com/" rel="noopener noreferrer"&gt;Cosmic&lt;/a&gt;. All of these options allow you to bring your own Mux account. Alternatively, &lt;a href="https://www.datocms.com/" rel="noopener noreferrer"&gt;Dato&lt;/a&gt; is a headless CMS that offers native video built into the product that is &lt;a href="https://www.datocms.com/blog/why-we-chose-mux-for-datocms" rel="noopener noreferrer"&gt;powered by Mux&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We ended up choosing Sanity as our headless CMS. We loved that Sanity felt like an open-ended developer product that could grow with our needs past just the blog today. Calling Sanity a headless CMS sells it short from what it really is: it’s more akin to a structured, real-time database. The CMS part is all open source and in your control for how you want things to look and work. The way to think about it is that Sanity provides a real-time database along with some low-level primitives to define your data model, then from there, you build your own CMS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh2vy1p9mo4kzbvwyjn4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fh2vy1p9mo4kzbvwyjn4f.png" alt="Mux Sanity CMS editor"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a part of this project of moving the blog to a new CMS, we wanted to set ourselves up with a headless CMS that could be used beyond just the blog and could also create a variety of pages on mux.com and allow us to move existing content like the &lt;a href="https://mux.com/video-glossary/" rel="noopener noreferrer"&gt;video glossary&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For a more technical in-depth read about how we did this, check out this Sanity Guide we wrote &lt;a href="https://www.sanity.io/guides/how-to-migrate-your-html-blog-content-from-ghost" rel="noopener noreferrer"&gt;How to migrate your HTML blog-content from Ghost&lt;/a&gt; and the blog post &lt;a href="https://www.sanity.io/blog/moving-the-mux-blog-to-the-jamstack" rel="noopener noreferrer"&gt;Moving the Mux blog to the JAMstack&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>gatsby</category>
      <category>serverless</category>
    </item>
    <item>
      <title>How to Host Your Own Online Conference</title>
      <dc:creator>Dylan Jhaveri</dc:creator>
      <pubDate>Tue, 03 Mar 2020 00:01:30 +0000</pubDate>
      <link>https://dev.to/mux/how-to-host-your-own-online-conference-dk0</link>
      <guid>https://dev.to/mux/how-to-host-your-own-online-conference-dk0</guid>
      <description>&lt;p&gt;Online conferences seem to have gained in popularity the past year. We’re seeing more and more folks reach out with questions around best practices and how to pull off their own remote conference. Sometimes, people will announce a conference, get thousands of sign ups and then one or two weeks before it’s set to go live they will be figuring out how to do it; how much different from a normal video conference call could it be? For our purposes let’s say you want to build a custom experience and host an online conference on your own website.&lt;/p&gt;

&lt;p&gt;Use this guide as your playbook. This is what we will cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have multiple presenters that will be broadcasting live from different locations.&lt;/li&gt;
&lt;li&gt;Your team has no video expertise at all, but you do have a technical team that is capable of building a functioning web application.&lt;/li&gt;
&lt;li&gt;The experience you want to provide to your audience is something custom that you control. Your brand is important for you and you want to control the look, feel and user experience of the conference.&lt;/li&gt;
&lt;li&gt;To broaden the reach of your conference you simultaneously want to broadcast out the video feed to social channels like YouTube Live, Facebook Live and Periscope.&lt;/li&gt;
&lt;li&gt;If possible, you would really like to have a branded overlay with your logo on the video.&lt;/li&gt;
&lt;li&gt;In addition to streaming live, you want to record the broadcast so that people who did not attend live are able to view the recordings on-demand as soon as each session is over.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ctpGo_fv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zubcb6ssj18r9p5vg7tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ctpGo_fv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zubcb6ssj18r9p5vg7tf.png" alt="online conference diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The basic structure of your setup is going to be a live conversation that is broadcast to a larger group of live viewers. The live conversation could be something like one person presenting with a screen share, or one person interviewing someone else, or a panel discussion among a group of experts.&lt;/p&gt;

&lt;p&gt;A really simple way to do this live conversation is to use Zoom. Most people are familiar with Zoom. It is one of the most stable, reliable and high quality pieces of meeting software around and it runs natively on your desktop. What’s really cool about Zoom is that if you enable live streaming for meetings then you can set up an RTMP output for your Zoom call to any arbitrary RTMP endpoint (this is where Mux comes in).&lt;/p&gt;

&lt;p&gt;Adding Mux in the middle is how you can broadcast your Zoom call to an audience of thousands on your own website. The live audience does not have to download Zoom, they do not interact with Zoom at all. All they do is see a video player that you make on your website.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--swhm0C-2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/30xj82zriibyz5h1a6p5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--swhm0C-2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/30xj82zriibyz5h1a6p5.png" alt="Live conference with Mux"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s break down the steps and API calls:
&lt;/h2&gt;

&lt;p&gt;1) Make sure in Zoom settings you have allowed meetings to be live streamed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zOX-c9jh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/58jcspgx91w4s4632fu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zOX-c9jh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/58jcspgx91w4s4632fu2.png" alt="Zoom allow live"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2) &lt;a href="https://docs.mux.com/reference#create-a-live-stream"&gt;Create a Mux live stream&lt;/a&gt; - this is one API call, and for every live stream you create you will get back a unique stream key. Also make sure you save a &lt;code&gt;playback_id&lt;/code&gt; for this live stream - you will need it to play the live stream.&lt;/p&gt;
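&lt;p&gt;As a sketch of what that API call looks like (check the Mux API reference for the authoritative shape; the body below is a typical minimal request), you POST a small JSON body with Basic auth:&lt;/p&gt;

```javascript
// Build the request for creating a Mux live stream. Returned as a plain
// descriptor so you can hand it to fetch/axios/etc. Token ID and secret
// come from your Mux dashboard.
function createLiveStreamRequest(tokenId, tokenSecret) {
  const auth = Buffer.from(`${tokenId}:${tokenSecret}`).toString('base64');
  return {
    url: 'https://api.mux.com/video/v1/live-streams',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${auth}`,
    },
    body: JSON.stringify({
      playback_policy: ['public'],
      new_asset_settings: { playback_policy: ['public'] }, // record for on-demand playback
    }),
  };
}
```

&lt;p&gt;The response includes the stream key you’ll paste into Zoom and the &lt;code&gt;playback_id&lt;/code&gt; you’ll need later - save both.&lt;/p&gt;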

&lt;p&gt;3) Set up a Zoom call like you normally would. From the call, click the 3 dots at the bottom where it says "More" and click "Live on Custom Live Streaming Service".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lD5gvX3A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2hdsjod507zltt0pwmry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lD5gvX3A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2hdsjod507zltt0pwmry.png" alt="Zoom enable live stream"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) From here, enter the RTMP server details for Mux.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VTCVSJOC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/09fqxzbl1v5b4qyer0ir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VTCVSJOC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/09fqxzbl1v5b4qyer0ir.png" alt="Zoom enter RTMP details"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5) When you’re ready to stream, click “Go Live!” from Zoom. Now your Zoom call will be live, and Mux will start receiving the video and audio. To confirm that this part is working, navigate to the live stream in your Mux dashboard and you should see video and audio coming in. Later you can set up &lt;a href="https://docs.mux.com/docs/live-streaming#section-broadcasting-webhooks"&gt;Webhooks&lt;/a&gt; so that you can be notified when each live stream is connected, active, completed, and so on.&lt;/p&gt;

&lt;p&gt;6) Next is the player side. You have the &lt;code&gt;playback_id&lt;/code&gt; from step 2, right? Take that ID and form a URL like this: &lt;code&gt;https://stream.mux.com/{playback-id}.m3u8&lt;/code&gt;. This is an “m3u8” URL, which is a URL for streaming video over HLS. HLS is a standard streaming format for both live and on-demand video. You will need to use this HLS URL in a video player. Which player you choose is entirely up to you. Here are two free ones you can check out to get started: &lt;a href="https://videojs.com/"&gt;videojs&lt;/a&gt; and &lt;a href="https://plyr.io/"&gt;plyr&lt;/a&gt;. With whichever player you choose, follow the instructions for streaming an HLS video.&lt;/p&gt;
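&lt;p&gt;Forming that playback URL is a one-liner; here is a tiny helper, with a hypothetical video.js call sketched in a comment:&lt;/p&gt;

```javascript
// Build the HLS playback URL for a Mux playback ID (step 6).
function hlsUrl(playbackId) {
  return "https://stream.mux.com/" + playbackId + ".m3u8";
}

// Then hand the URL to whichever player you chose. For example, with
// video.js loaded and an HTML video element whose id is "player":
//   videojs("player").src({
//     src: hlsUrl("YOUR_PLAYBACK_ID"),
//     type: "application/x-mpegURL",
//   });
```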

&lt;p&gt;Note that HLS streaming will come with some latency. Expect 15-20 seconds by default; there is a &lt;code&gt;reduced_latency&lt;/code&gt; flag you can use that will bring that number down closer to 8 or 10 seconds (with some tradeoffs). Check the &lt;a href="https://docs.mux.com/reference#create-a-live-stream"&gt;docs here&lt;/a&gt; for more information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Live chat
&lt;/h2&gt;

&lt;p&gt;After you get the live stream of your conference working on your webpage, you will almost certainly want to add a live chat component. Live chat is outside the scope of what Mux offers, but we’ve seen this done in a few different ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use your own database and push real time changes to your clients with something like &lt;a href="https://pusher.com/"&gt;Pusher&lt;/a&gt;, &lt;a href="https://www.pubnub.com/"&gt;PubNub&lt;/a&gt; or &lt;a href="https://socket.io/"&gt;Socket.IO&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Use a realtime database like Firebase and build your own chat experience.&lt;/li&gt;
&lt;li&gt;Try something like &lt;a href="https://getstream.io/"&gt;Stream&lt;/a&gt;, which offers real time APIs specifically for creating chat experiences. It is fully featured: uploading images to chat, emoji reactions, typing notifications, and all the other bells and whistles are built in.&lt;/li&gt;
&lt;li&gt;Skip the step of adding chat on your webpage and create a Slack community where everyone can chat. The benefit of having a Slack community is that it’s free and you don’t have to go through the steps of building chat onto your website. Most people are already familiar with Slack, so they likely have it downloaded already. You can also create channels for specific topics and allow attendees to DM each other. This has the added benefit of allowing for the kind of attendee networking that happens at in-person conferences.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Everything is recorded
&lt;/h2&gt;

&lt;p&gt;Every live stream that you do will be recorded by Mux, and the asset will be available for on-demand playback. After each live stream is over, you can use the Mux asset to give attendees who missed the live broadcast the ability to view the recording.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: simulcast to the socials
&lt;/h2&gt;

&lt;p&gt;When you &lt;a href="https://docs.mux.com/reference#create-a-live-stream"&gt;create a Mux live stream&lt;/a&gt; you can optionally add &lt;code&gt;simulcast_targets&lt;/code&gt;, which are arbitrary RTMP endpoints that Mux will push your stream out to. It’s fairly straightforward, and we have some guides on how to do this. Read more in &lt;a href="https://mux.com/blog/seeing-double-let-your-users-simulcast-a-k-a-restream-to-any-social-platform/"&gt;the announcement blog post&lt;/a&gt; and &lt;a href="https://mux.com/blog/help-your-users-be-in-5-places-at-once-your-guide-to-simulcasting/"&gt;the guide&lt;/a&gt;. All you really have to do is track down the RTMP server URL and stream keys for each of the social networks you want to broadcast to and add them to the Mux live stream with an API call.&lt;/p&gt;
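&lt;p&gt;As a rough sketch, adding a simulcast target is one more authenticated API call against the live stream. The endpoint path and field names below follow the Mux API reference; treat this as a starting point and verify against the guides linked above.&lt;/p&gt;

```javascript
// Sketch: push an existing Mux live stream out to a social platform by
// creating a simulcast target. rtmpUrl and streamKey come from the
// platform you want to broadcast to (YouTube, Twitch, etc.).
function buildSimulcastTargetRequest(liveStreamId, rtmpUrl, streamKey) {
  return {
    url:
      "https://api.mux.com/video/v1/live-streams/" +
      liveStreamId +
      "/simulcast-targets",
    options: {
      method: "POST",
      // plus the same Basic auth header used for other Mux API calls
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        url: rtmpUrl,          // the platform's RTMP server URL
        stream_key: streamKey, // the stream key from that platform
      }),
    },
  };
}
```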

&lt;h2&gt;
  
  
  Bonus: add a watermark
&lt;/h2&gt;

&lt;p&gt;In the &lt;code&gt;new_asset_settings&lt;/code&gt; parameter when you create the live stream, you have the option to specify a watermark. You give Mux the URL to an image you want to use as a watermark and some details about where to place it and how to align it. When this is configured, Mux will add it to the stream that comes from Zoom and the watermark will appear on the HLS stream that you show on your website.&lt;/p&gt;
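&lt;p&gt;For illustration, a watermark configuration might look something like the payload below. The image URL is hypothetical, and the overlay field names are taken from the Mux docs as best I recall them, so double-check them there before relying on this sketch.&lt;/p&gt;

```javascript
// Illustrative new_asset_settings with a watermark overlay, passed when
// creating the live stream. Placement and sizing are relative to the frame.
const newAssetSettings = {
  playback_policy: ["public"],
  input: [
    {
      url: "https://example.com/watermark.png", // hypothetical image URL
      overlay_settings: {
        vertical_align: "top",
        horizontal_align: "right",
        width: "10%",   // width relative to the video frame
        opacity: "90%", // slightly transparent
      },
    },
  ],
};
```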

&lt;h2&gt;
  
  
  You don’t have to use Zoom
&lt;/h2&gt;

&lt;p&gt;Zoom is the simple example I used here because many people are familiar with it and know how it works. But the reality is you can use &lt;em&gt;any software&lt;/em&gt; that allows you to send RTMP out to Mux. To name a few other options: &lt;a href="https://obsproject.com/"&gt;OBS&lt;/a&gt;, &lt;a href="https://www.telestream.net/wirecast/"&gt;Wirecast&lt;/a&gt;, and &lt;a href="https://www.ecamm.com/mac/ecammlive/"&gt;Ecamm Live&lt;/a&gt;. All of these products are built to compose a single video stream and send it out over RTMP, and all of them will work with Mux.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Setup
&lt;/h2&gt;

&lt;p&gt;Here’s what your final setup might look like. If you are going to host an online conference with Mux, please reach out! We would love to talk to you and help you make sure it’s successful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NQGzmIKt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fx3fbjk4jc7326eipzm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NQGzmIKt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/fx3fbjk4jc7326eipzm1.png" alt="live conference Mux simulcast"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Extra details if you’re curious
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Mux supports live streams of up to 12 hours. If you have one single stream for an all-day conference, this should be enough.&lt;/li&gt;
&lt;li&gt;There are two options for playing the live stream on your website: you can either use a &lt;code&gt;playback_id&lt;/code&gt; associated directly with the live stream, OR you can use the &lt;code&gt;playback_id&lt;/code&gt; associated with the &lt;code&gt;active_asset&lt;/code&gt; that is associated with the live stream. The former will not allow seeking backwards in the stream; the latter will allow your attendees (if they want) to seek all the way back to the beginning. This is a subtle detail, but it might be something you want to consider.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re doing an online conference, please get in touch!&lt;/p&gt;

</description>
      <category>video</category>
    </item>
    <item>
      <title>&lt;video autoplay&gt; Considered Harmful</title>
      <dc:creator>Dylan Jhaveri</dc:creator>
      <pubDate>Thu, 23 Jan 2020 17:19:28 +0000</pubDate>
      <link>https://dev.to/mux/video-autoplay-considered-harmful-52d6</link>
      <guid>https://dev.to/mux/video-autoplay-considered-harmful-52d6</guid>
      <description>&lt;p&gt;If you’re trying to autoplay videos on the web, you might be tempted to reach for the &lt;a href="https://www.w3schools.com/tags/att_video_autoplay.asp"&gt;HTML5 autoplay attribute&lt;/a&gt;. This sounds exactly like what you’re looking for, right? Well, not exactly. Let’s talk about why that’s probably not what you’re looking for and what the better option is.&lt;/p&gt;

&lt;h2&gt;
  
  
  Browsers will block your autoplay attempts
&lt;/h2&gt;

&lt;p&gt;Over the last few years, all major browser vendors have taken steps to aggressively block autoplaying videos on webpages. Safari announced some policy changes in &lt;a href="https://webkit.org/blog/7734/auto-play-policy-changes-for-macos/"&gt;June 2017&lt;/a&gt; and &lt;a href="https://developers.google.com/web/updates/2017/09/autoplay-policy-changes"&gt;Chrome followed suit&lt;/a&gt; shortly after &lt;a href="https://support.mozilla.org/en-US/kb/block-autoplay"&gt;and Firefox after that&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In summary: all these browsers will aggressively block videos from autoplaying on webpages. Each browser has slightly different rules around how it makes this decision. It’s a huge black box, and browsers will not tell you what their exact rules are. The default behavior is to block most autoplay attempts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are however some conditions that make it more likely for autoplay to work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your video is muted with the muted attribute.&lt;/li&gt;
&lt;li&gt;The user has interacted with the page with a click or a tap.&lt;/li&gt;
&lt;li&gt;(Chrome - desktop) The user’s &lt;a href="https://developers.google.com/web/updates/2017/09/autoplay-policy-changes#mei"&gt;Media Engagement Index&lt;/a&gt; threshold has been crossed. Chrome keeps track of how often a user consumes media on a site and if a user has played a lot of media on this site then Chrome will probably allow autoplay.&lt;/li&gt;
&lt;li&gt;(Chrome - mobile) The user has added the site to their home screen.&lt;/li&gt;
&lt;li&gt;(Safari) Device is not in power-saving mode.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These conditions only make autoplay more likely, but remember that aside from these conditions, the user can override the browser’s default setting on a per-domain basis. This basically means that you can never rely on autoplay actually working.&lt;/p&gt;

&lt;h2&gt;
  
  
  Autoplay will probably work for you, but it will break for your users
&lt;/h2&gt;

&lt;p&gt;Even if you try to follow the rules above, autoplay is still a finicky beast. One thing to keep in mind (for Chrome at least) is Chrome’s Media Engagement Index. When you are testing autoplay on your own site, it will probably work for you (because you visit your site often and play content, your MEI score is high). But when new users come to your site, it is likely to fail (because their MEI score is low). As a developer, this is incredibly frustrating and another reason to always avoid the autoplay attribute.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AsEQMD9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3453zr9gg72dikggiydl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AsEQMD9K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3453zr9gg72dikggiydl.jpg" alt="works on my machine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What should I do instead?
&lt;/h2&gt;

&lt;p&gt;I’m not suggesting that you avoid autoplaying videos, but I am suggesting that you always avoid the autoplay attribute. There is a better way.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use &lt;code&gt;video.play()&lt;/code&gt; in JavaScript, which returns a promise. If the promise resolves, autoplay worked; if the promise rejects, autoplay was blocked.&lt;/li&gt;
&lt;li&gt;If the promise returned from &lt;code&gt;video.play()&lt;/code&gt; rejects, show a play button in the UI so that the user can click to play (the default video &lt;code&gt;controls&lt;/code&gt; attribute will work just fine). If you are using your own custom controls and your JavaScript calls &lt;code&gt;video.play()&lt;/code&gt; again as the result of an event that bubbled up from a user click, then it will work.&lt;/li&gt;
&lt;li&gt;Consider starting with the video muted; this gives you a much lower chance of your &lt;code&gt;video.play()&lt;/code&gt; call rejecting. You will want to show some kind of “muted” icon in the UI that the user can click to unmute (again, the default video &lt;code&gt;controls&lt;/code&gt; attribute works great for that). You may notice that Twitter and a lot of other sites start videos in the muted state.&lt;/li&gt;
&lt;li&gt;Have I mentioned showing controls for your player? Always make sure controls for your player are accessible. We have seen sites try to get fancy and be too minimalist by hiding controls. Inevitably, they run into situations where autoplay fails and the user has no way of clicking to make the video play. Make sure you do not fall into this trap.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Here’s an example with vanilla javascript
&lt;/h2&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// get a reference to a &amp;lt;video&amp;gt; element&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;videoEl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;querySelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;video&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// attempt to call play() and catch if it fails&lt;/span&gt;
&lt;span class="nx"&gt;videoEl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;play&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Autoplay success!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;warning&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Autoplay error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Here’s an example with React
&lt;/h2&gt;

&lt;p&gt;If you look at mux.com you’ll see that we autoplay a video on the top of the home page.  I copied over how we did that and set up a demo here: &lt;a href="https://o9s4w.csb.app/"&gt;https://o9s4w.csb.app/&lt;/a&gt;. The code is copied below and you can fork it and play around with the &lt;a href="https://codesandbox.io/s/autoplay-example-react-o9s4w"&gt;Code Sandbox&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Notice that we’re doing a few things here:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Try to call &lt;code&gt;video.play()&lt;/code&gt; when the component loads.&lt;/li&gt;
&lt;li&gt;Show the user a play/pause state in the UI by using the default &lt;code&gt;controls&lt;/code&gt; attribute.&lt;/li&gt;
&lt;li&gt;Start with the video in the &lt;code&gt;muted&lt;/code&gt; state. Our video does not have audio, but if it did, you would still want to start off in the muted state with the muted attribute and show a mute/unmute icon in the UI.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useRef&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./styles.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;videoEl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useRef&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;attemptPlay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;videoEl&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
      &lt;span class="nx"&gt;videoEl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt;
      &lt;span class="nx"&gt;videoEl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;play&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error attempting to play&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="nx"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;attemptPlay&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;App&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Autoplay&lt;/span&gt; &lt;span class="nx"&gt;example&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h1&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;video&lt;/span&gt;
          &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{{&lt;/span&gt; &lt;span class="na"&gt;maxWidth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;100%&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;800px&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;0 auto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}}&lt;/span&gt;
          &lt;span class="nx"&gt;playsInline&lt;/span&gt;
          &lt;span class="nx"&gt;loop&lt;/span&gt;
          &lt;span class="nx"&gt;muted&lt;/span&gt;
          &lt;span class="nx"&gt;controls&lt;/span&gt;
          &lt;span class="nx"&gt;alt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;All the devices&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
          &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://stream.mux.com/6fiGM5ChLz8T66ZZiuzk1KZuIKX8zJz00/medium.mp4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
          &lt;span class="nx"&gt;ref&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;videoEl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



</description>
    </item>
    <item>
      <title>No BART terminals were hacked in the making of this ad</title>
      <dc:creator>Dylan Jhaveri</dc:creator>
      <pubDate>Tue, 21 Jan 2020 22:55:36 +0000</pubDate>
      <link>https://dev.to/mux/no-bart-terminals-were-hacked-in-the-making-of-this-ad-1efo</link>
      <guid>https://dev.to/mux/no-bart-terminals-were-hacked-in-the-making-of-this-ad-1efo</guid>
      <description>&lt;p&gt;Originally posted by my colleague Bonnie Pecevich on &lt;a href="https://mux.com/blog/no-bart-terminals-were-hacked-in-the-making-of-this-ad"&gt;mux.com/blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the process of creating a BART ad for the first time, we had some learnings that we thought we would share, in the hope that they help someone else on their out-of-home ad-buying journey. (We’ll also remember to follow our own advice next time.)&lt;/p&gt;

&lt;p&gt;Our learnings:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ask upfront for explicit restrictions on creative.&lt;/li&gt;
&lt;li&gt;Build in extra time for more than one round of feedback and time to iterate on design.&lt;/li&gt;
&lt;li&gt;Be realistic about what’s feasible, especially with an aggressive timeline.&lt;/li&gt;
&lt;li&gt;Submit a draft of the concept and see if they’ll approve it before you spend extra time finalizing the details (and telling everyone about it at the company all-hands meeting).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of these learnings actually stemmed from one preeminent learning:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BART doesn’t allow any &lt;code&gt;code&lt;/code&gt; on their ads. 😲&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why put code in an ad?
&lt;/h2&gt;

&lt;p&gt;First, an introduction–Mux is a startup that does video, and one of our aspirational goals is for every developer to know that. With our headquarters located in San Francisco, we’re aware that our city has a great supply of developers, so we thought we’d try advertising in some well-traveled, public spaces.&lt;/p&gt;

&lt;p&gt;Doing ads in a BART station (the underground transit system in the Bay Area) is generally assumed to be expensive, maybe even beyond the reach of a startup, which is what we thought, too. But we learned doing an ad could fit in our budget if we were flexible on timing–we were able to sign up for a single digital display at the Montgomery BART station with a 12/30/19 start date. Even though that only gave us about 2 weeks to create an ad (ignore the wailing coming from our one and only in-house designer), we were excited!&lt;/p&gt;

&lt;p&gt;Since the ad is just 15 seconds long with a not-so-captive audience, we wanted to create something that quickly caught the attention of developers. We thought we could achieve this by showing a terminal with a blinking cursor and then typing out code to show use of our API. Sure, it crossed our minds that a blank screen with a blinking cursor might look like the screen is broken (which adds to the eye-catching-ness), so we added browsers to frame the terminal and added our logo to the top left corner. Our hope was that someone would take away that Mux is for developers and, if we were lucky, that we do something with video.&lt;/p&gt;

&lt;h2&gt;
  
  
  Insert wrench here
&lt;/h2&gt;

&lt;p&gt;The process was to submit a final file at least a week in advance of the live date to leave time for BART’s approval. There weren’t any specific guidelines beforehand on what’s allowed and what’s not, but we assumed some common sense restrictions would apply, like no explicit/harmful language, imagery, etc. We figured getting BART’s approval would be relatively simple, like checking a box.&lt;/p&gt;

&lt;p&gt;Wrong. Our ad was rejected! We received feedback that the beginning of the ad that showed the terminal could give the impression that “the screen is malfunctioning or has been hacked into.”&lt;/p&gt;

&lt;p&gt;Busted. Turns out they also thought having a terminal on the screen would be eye-catching but not in a good way. We did feel a bit deflated, though, as we were all ready for our BART debut.&lt;/p&gt;

&lt;p&gt;We went through the five stages of grief and settled on “Bargaining.” We tried to come up with a creative solution where we could still use the same ad. Hey, what if we could add a persistent banner to the ad that said something like “Don’t worry, no BART terminals were hacked in the making of this ad.”?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5tOrn7XL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/uvhwgosi5e7dq6zwc1px.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5tOrn7XL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/uvhwgosi5e7dq6zwc1px.jpg" alt="No hack banner"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or what if we stylized the terminal so it looked more illustrated and cartoon-y?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SyM6DyUd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/qbing997fzrdw4323v9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SyM6DyUd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/qbing997fzrdw4323v9e.png" alt="Cartoon code ad"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Alas, BART held firm and said, in no uncertain terms, &lt;strong&gt;“Nothing involving coding.”&lt;/strong&gt; Since we couldn’t come up with a brand new design in 48 hours, our plans for a BART ad had to be put on hold.&lt;/p&gt;

&lt;h2&gt;
  
  
  Silver lining
&lt;/h2&gt;

&lt;p&gt;All is not lost! We used the final video for &lt;a href="https://mux.com/"&gt;our homepage&lt;/a&gt; and are genuinely excited at how it came out.&lt;/p&gt;

&lt;p&gt;Although the BART approval process is still a bit of a black box, we're excited to continue to work with the same ad agency and pursue our out-of-home ad dreams. We’re looking forward to iterating on our design and hopefully making a public appearance at Caltrain in the very near future. And if you see our ad, you’ll know the journey it took to get that little video up on those screens.&lt;/p&gt;

</description>
      <category>devrel</category>
    </item>
    <item>
      <title>In defense of 'flicks' (or how I learned to stop worrying and love 705600000)</title>
      <dc:creator>Dylan Jhaveri</dc:creator>
      <pubDate>Tue, 26 Nov 2019 18:26:56 +0000</pubDate>
      <link>https://dev.to/mux/in-defense-of-flicks-or-how-i-learned-to-stop-worrying-and-love-705600000-dk6</link>
      <guid>https://dev.to/mux/in-defense-of-flicks-or-how-i-learned-to-stop-worrying-and-love-705600000-dk6</guid>
      <description>&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; originally posted by my colleague on &lt;a href="https://mux.com/blog/in-defense-of-flicks-or-how-i-learned-to-stop-worrying-and-love-705600000/"&gt;the mux blog&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;About 2 years ago, the Oculus VR division of THE FACEBOOK created a &lt;a href="https://github.com/OculusVR/Flicks"&gt;project they called 'flicks'&lt;/a&gt;. Essentially, a flick is just a really big number, specifically the number 705,600,000. This project was picked up by some news outlets like &lt;a href="https://techcrunch.com/2018/01/22/facebook-invented-a-new-time-unit-called-the-flick-and-its-truly-amazing/"&gt;TechCrunch&lt;/a&gt;, &lt;a href="https://www.theverge.com/tldr/2018/1/22/16920740/facebook-unit-of-time-flicks-frame-rate-ticks-github-nanosecond-second"&gt;The Verge&lt;/a&gt;, and the &lt;a href="https://www.bbc.com/news/technology-42787529"&gt;BBC&lt;/a&gt;, and seemed to cause some confusion and even some ridicule. To be fair, a news article about a number is a bit odd. If you’re not an engineer working with digital media, the idea behind this number is difficult to grasp. And to those who do work in digital media, the number seems to not offer anything new. It purports to solve a problem that nobody in the industry actually has. So where did it come from, and why does it exist? Let’s back up…&lt;/p&gt;

&lt;p&gt;Time is a surprisingly difficult concept in digital media. For starters, we are dealing with time values that are very small and difficult to imagine. Recently I saw the movie Gemini Man at the AMC Metreon here in San Francisco. It was one of the few theaters capable of playing the 120 frames per second version, whereas most films are 24 frames per second. At 120fps, every frame is projected for just over 0.00833 seconds before flashing the next one–a very short period of time. But compared to digital audio, this is an eternity. Audio recorded at 44100Hz has a sample every 0.000022675 seconds. That is 367.5 times more audio samples than video frames.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PnrqbosK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fvnik5s7n05h8oyoue5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PnrqbosK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fvnik5s7n05h8oyoue5a.png" alt="Example audio visual timeline"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Example audio visual timeline&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These numbers with fractional components (a decimal point) are known as “floating point” numbers in computer science, and computers are surprisingly bad at dealing with them. Above, when I said every video frame was on the screen for 0.00833 seconds, that was not exactly true. When you divide 1 by 120, the result is 0.008333333333 with the 3 repeating forever. For a computer to store a number that repeats forever would require an infinite amount of memory, so the number is approximated. The difference between the approximation and the actual number results in tiny errors in the math. These small errors can add up over time and become big errors, which could result in problems such as audio and video becoming out of sync. Using a unit of time like milliseconds or nanoseconds would help, but would only delay the problem and not solve it.&lt;/p&gt;
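&lt;p&gt;A quick JavaScript sketch makes both halves of this concrete (the values here are illustrative):&lt;/p&gt;

```javascript
// The stored doubles are approximations, so arithmetic on them drifts.
console.log(0.1 + 0.2 === 0.3); // false: the sum is 0.30000000000000004

// Adding an approximated duration over and over accumulates the error.
let clock = 0;
for (let i = 0; i < 10; i += 1) {
  clock += 0.1;
}
console.log(clock === 1); // false: the sum is 0.9999999999999999

// Counting ticks as an integer and dividing once avoids the accumulation.
const ticks = 10;
console.log(ticks / 10 === 1); // true: one division, no accumulated error
```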

&lt;p&gt;The ultimate solution is to not record time in seconds but instead as an integer number of fractional units. For example &lt;code&gt;1000 x 1 ÷ 120&lt;/code&gt; is the 1000th frame of a 120fps video. Converting to seconds we still end up with a floating point number but as long as we count frames as integers the error does not accumulate over time. If you’re following the math closely you may have noticed that while solving this problem we have created another one. &lt;/p&gt;

&lt;p&gt;What frame should we render first? The video frame at &lt;code&gt;1000 x 1 ÷ 120&lt;/code&gt;, or the audio sample at &lt;code&gt;367500 x 1 ÷ 44100&lt;/code&gt;? We need to convert to a common time base to know for sure. We could convert to seconds then compare, but that brings us back again to the floating point problem. By using the “least common multiple”, or LCM, of the two timebases (88,200 in this case), we can convert these fractions to a common time base, at which point we can compare them. &lt;code&gt;88,200 ÷ 44100 x 367500 = 735,000&lt;/code&gt;, and &lt;code&gt;88,200 ÷ 120 x 1000 = 735,000&lt;/code&gt;. These time stamps are exactly the same and should be rendered together to ensure sync. At no time did we need to use floating point math, which may have given us a slightly different answer.&lt;/p&gt;
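&lt;p&gt;That conversion can be done entirely in integer math. Here is a minimal sketch in JavaScript (the helper names are mine, not from any library):&lt;/p&gt;

```javascript
// Greatest common divisor via Euclid's algorithm.
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

// Least common multiple of two timebases.
function lcm(a, b) { return (a / gcd(a, b)) * b; }

// Rescale an integer count from its own timebase into a common one.
function toCommonBase(count, base, commonBase) {
  return count * (commonBase / base);
}

const commonBase = lcm(120, 44100);                    // 88200
const video = toCommonBase(1000, 120, commonBase);     // frame 1000 -> 735000
const audio = toCommonBase(367500, 44100, commonBase); // sample 367500 -> 735000
console.log(commonBase, video === audio); // 88200 true
```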

&lt;p&gt;In the world of digital media there are some time bases that come up very frequently. As stated, film commonly uses 24 fps, European television (and other &lt;a href="https://en.wikipedia.org/wiki/PAL"&gt;PAL&lt;/a&gt; countries) use 25 fps, and for obscure reasons, American television (and other &lt;a href="https://en.wikipedia.org/wiki/NTSC"&gt;NTSC&lt;/a&gt; countries) use 29.97 fps. Wait! Floating point numbers again? Actually no, because it’s not really 29.97fps. It’s actually &lt;code&gt;30000 ÷ 1001&lt;/code&gt; fps. &lt;/p&gt;

&lt;p&gt;Here is where flicks come in. You see, &lt;code&gt;705600000 ÷ 44100 = 16000&lt;/code&gt; EXACTLY, &lt;code&gt;705600000 ÷ 120 = 5,880,000&lt;/code&gt; EXACTLY, and even &lt;code&gt;1001 x 705600000 ÷ 30000 = 23,543,520&lt;/code&gt; EXACTLY. This is why the flick is interesting: it has the special property of being a common multiple of many of the commonly used timebases in digital media.&lt;/p&gt;
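&lt;p&gt;You can verify those divisibility claims directly; the list of rates below is illustrative, drawn from common video frame rates and audio sample rates:&lt;/p&gt;

```javascript
const FLICKS_PER_SECOND = 705600000;

// Common frame rates and sample rates divide one second of flicks evenly.
const rates = [24, 25, 30, 48, 60, 90, 120, 8000, 16000, 22050, 44100, 48000, 88200, 96000, 192000];
for (const rate of rates) {
  console.log(rate, FLICKS_PER_SECOND % rate === 0); // true for every rate
}

// Even NTSC's 30000/1001 fps works out to a whole number of flicks per frame.
console.log((FLICKS_PER_SECOND * 1001) % 30000 === 0); // true
console.log((FLICKS_PER_SECOND * 1001) / 30000);       // 23543520
```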

&lt;p&gt;We now know what the flick is. But why does it exist? We have established that if we record every time stamp as 3 integers (a count, plus a timebase numerator and denominator), we don’t need a common base, since we can convert between them as needed. So why standardize on one? There are two primary reasons. The first is efficiency. If we know we will need to compare a lot of time stamps in different time bases, or compare the same timestamp multiple times, converting them to a common base up front can be faster for a computer: once converted, comparing two integers is about the fastest operation a computer can do, whereas comparing two fractions requires an algorithm to find a common base, convert the values, and then compare. But the motivation cited on the flicks GitHub page is slightly different, and comes from a design decision in the C++ programming language.&lt;/p&gt;

&lt;p&gt;Computers are pretty good at dealing with time, but humans are really bad at it. In most parts of the United States, once a year for daylight saving time, we have a 25-hour day, only to have a 23-hour day a few months later. Every 4 years we get an extra day at the end of February, unless the year is divisible by 100, except when the year is also evenly divisible by 400, in which case it is a leap year after all (there was no leap day in the year 1900). We even have leap seconds, where we add an extra second to a year whenever we notice that the atomic clocks don't quite agree with astronomical observations. Meanwhile a computer's clock needs to keep moving forward one second per second or bad things happen.&lt;/p&gt;
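&lt;p&gt;Those leap-year rules translate directly into code; a quick sketch:&lt;/p&gt;

```javascript
// Gregorian leap year: divisible by 4, except centuries, except every 400th year.
function isLeapYear(year) {
  return (year % 4 === 0 && year % 100 !== 0) || year % 400 === 0;
}

console.log(isLeapYear(2024)); // true  (divisible by 4)
console.log(isLeapYear(1900)); // false (divisible by 100 but not 400)
console.log(isLeapYear(2000)); // true  (divisible by 400)
```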

&lt;p&gt;To help with this human nonsense and standardize how to manage time, C++11 added a new package called chrono to its standard library. Because the language designers were smart, the &lt;a href="https://en.cppreference.com/w/cpp/chrono/duration"&gt;std::chrono::duration&lt;/a&gt; time type included support for the time-as-a-ratio technique we have established. Perfect! Well... not so fast. Because the language designers were unwilling to give up more CPU cycles and slow down programs (C++’s defining feature is speed, after all), it was decided that the time base must be known in advance while writing the program (compile time). This allows for fast running programs because the fractions can be ignored when they are known to be equal, but it sacrifices automatic time base conversions when they are not. Herein lies the problem. When playing back a media file we can’t know the time base in advance; we haven't seen the file yet. What we need is a time base that can support any media we are likely to encounter. Enter flicks. An elegant solution to a problem only a handful of media engineers will ever encounter that happened to be announced on a slow news day.&lt;/p&gt;

&lt;p&gt;Flicks does not seem to be in wide use. I used it at Mux in one specific place in our transcoding pipeline, for its intended purpose: a C++11 program where standardizing on std::chrono made things a bit easier. But searching GitHub and Google, I could only find a handful of places where it is used in the wild, and I don't really expect that to change.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using Netlify Functions to Create Signing Tokens</title>
      <dc:creator>Dylan Jhaveri</dc:creator>
      <pubDate>Thu, 07 Nov 2019 20:38:31 +0000</pubDate>
      <link>https://dev.to/mux/using-netlify-functions-to-create-signing-tokens-25i6</link>
      <guid>https://dev.to/mux/using-netlify-functions-to-create-signing-tokens-25i6</guid>
      <description>&lt;p&gt;Have you used cloud functions yet? They come in many flavors: Amazon Lambda, Cloudflare Workers, Zeit Serverless Functions, and the one we’re using here: Netlify Functions.&lt;/p&gt;

&lt;p&gt;Cloud functions are an essential component of the JAMstack. I had not heard the name JAMstack until recently. For the uninitiated (like me) it stands for JavaScript, APIs, and Markup. You may have seen JAMstack technologies like Gatsby, Next.js and tools of this nature that focus on performance, new developer tooling, and leveraging CDNs to serve pre-compiled HTML pages. I will be at &lt;a href="https://jamstackconf.com/sf/"&gt;JAMstack Conf 2019 in SF&lt;/a&gt;, if you will be there too, then come find me and say hi!&lt;/p&gt;

&lt;p&gt;All the code in this post is open source here on GitHub in our examples repo: &lt;a href="https://github.com/muxinc/examples/tree/master/signed-playback-netlify"&gt;muxinc/examples/signed-playback-netlify&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Main Benefits of Cloud Functions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Run code close to your clients (clients can be browsers, mobile apps, internet-of-things devices, self-driving cars, drones, anything that is talking to your server). Like a CDN, cloud functions are deployed to edge data centers to minimize latency between your clients and the server that runs their code.&lt;/li&gt;
&lt;li&gt;Protect your origin servers from being flooded with traffic. Cloud functions are a good way to cache data, intercept requests, and respond to your users before they reach your origin servers. This means your origin servers have less bandwidth and CPU load to handle.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Cloud functions, like Netlify Functions, might be a good option for you if you are using Mux’s Signed URLs feature.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Little Background About Signed URLs
&lt;/h2&gt;

&lt;p&gt;When you create a video asset via Mux’s &lt;code&gt;POST /video&lt;/code&gt; API you can also create a Playback ID (&lt;a href="https://docs.mux.com/docs/video"&gt;Mux API docs&lt;/a&gt;) and specify the &lt;code&gt;playback_policy&lt;/code&gt; as either &lt;code&gt;"public"&lt;/code&gt; or &lt;code&gt;"signed"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A “public” playback policy can be played back on any site, in any player and does not have an expiration. A “signed” playback policy requires that when the playback URL is requested from a player, it has to be accompanied by a “token” param that is generated and signed on your server.&lt;/p&gt;

&lt;p&gt;This is how it looks:&lt;/p&gt;

&lt;p&gt;public playback URL:&lt;br&gt;
&lt;code&gt;https://stream.mux.com/${playbackId}.m3u8&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;signed playback URL:&lt;br&gt;
&lt;code&gt;https://stream.mux.com/${playbackId}.m3u8?token=${token}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;token&lt;/code&gt; param is what you need to create on your server in order for the playback URL to work.&lt;/p&gt;
&lt;h2&gt;
  
  
  Create a Mux Asset
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Sign up for a &lt;a href="https://mux.com"&gt;mux.com&lt;/a&gt; account (free account comes with $20 credit)&lt;/li&gt;
&lt;li&gt;Go to &lt;a href="https://mux.com/settings/access-tokens"&gt;settings/access-tokens&lt;/a&gt; and click “Generate new token” to create a token you can use for API calls&lt;/li&gt;
&lt;li&gt;Copy your token id (we'll call this &lt;code&gt;MUX_TOKEN_ID&lt;/code&gt;) and secret (&lt;code&gt;MUX_TOKEN_SECRET&lt;/code&gt;). You will need these to make two API calls.&lt;/li&gt;
&lt;li&gt;Create a Mux video asset with a “signed” playback policy
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://api.mux.com/video/v1/assets \
  -X POST \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET} \
  -d '{ "input": "https://storage.googleapis.com/muxdemofiles/mux-video-intro.mp4", "playback_policy": "signed" }' 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Copy the &lt;code&gt;playback_id&lt;/code&gt; from the response, you will need this later.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Create a URL Signing Key
&lt;/h2&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://api.mux.com/video/v1/signing-keys \
  -X POST \
  -H "Content-Type: application/json" \
  -u ${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Copy the &lt;code&gt;id&lt;/code&gt; (&lt;code&gt;MUX_SIGNING_KEY&lt;/code&gt;) and the &lt;code&gt;private_key&lt;/code&gt; (&lt;code&gt;MUX_PRIVATE_KEY&lt;/code&gt;) from the response; you will need these later. These are the keys you will need to create signed URLs for playback.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Setup a Netlify project
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a new directory for your project &lt;code&gt;mkdir netlify-mux-signing &amp;amp;&amp;amp; cd netlify-mux-signing&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Install the Netlify CLI and run &lt;code&gt;netlify init&lt;/code&gt; to create a new project. You can choose to connect Netlify to a GitHub repository.&lt;/li&gt;
&lt;li&gt;If you’re starting from scratch, run &lt;code&gt;yarn init&lt;/code&gt; to create an empty &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;git init&lt;/code&gt; to make this a git repository.&lt;/li&gt;
&lt;li&gt;Now you have a barebones project that is connected to Netlify, but nothing is in it yet (you can see there is a hidden and gitignored directory called &lt;code&gt;.netlify&lt;/code&gt; which Netlify uses to handle deploys and Netlify commands).&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;yarn add netlify-lambda&lt;/code&gt; to install the netlify-lambda package into your project (it’s recommended to install this locally instead of globally).&lt;/li&gt;
&lt;li&gt;Run  &lt;code&gt;yarn add @mux/mux-node&lt;/code&gt; to add the Mux node SDK to your project&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 1: Create a module to generate a signing token
&lt;/h3&gt;

&lt;p&gt;Create a &lt;code&gt;src/&lt;/code&gt; folder in your project and let’s create a small module called &lt;code&gt;mux_signatures.js&lt;/code&gt;. It will export one function called &lt;code&gt;signPlaybackId&lt;/code&gt;, which takes a playback ID and returns a token generated with &lt;code&gt;Mux.JWT.sign&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ./src/mux_signatures&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Mux&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@mux/mux-node&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;signPlaybackId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;playbackId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;Mux&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;JWT&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;playbackId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;keyId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MUX_SIGNING_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;keySecret&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MUX_PRIVATE_KEY&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our lambda function is going to use this module in Step 2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create a &lt;code&gt;sign_playback_id&lt;/code&gt; cloud function
&lt;/h3&gt;

&lt;p&gt;Create a Netlify Function entry point: a single function that handles the request. The idiomatic pattern for creating cloud functions is one file and one JavaScript function per route. We will create a directory called &lt;code&gt;functions/&lt;/code&gt; and add a file called &lt;code&gt;sign_playback_id.js&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ./functions/sign_playback_ids.js&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;keySecret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MUX_PRIVATE_KEY&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;signPlaybackId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./src/mux_signatures&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;queryStringParameters&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;playbackId&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;queryStringParameters&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;playbackId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Missing playbackId in query string&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}]})&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;signPlaybackId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;playbackId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`https://stream.mux.com/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;playbackId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.m3u8?token=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;errors&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Server Error&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Add netlify.toml
&lt;/h3&gt;

&lt;p&gt;Add a &lt;code&gt;netlify.toml&lt;/code&gt; file to the root directory and tell Netlify where your functions will live. This tells Netlify that before we deploy we are going to build our functions into the &lt;code&gt;./.netlify/functions&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    [build]
      functions = "./.netlify/functions"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Connect to git
&lt;/h3&gt;

&lt;p&gt;In order to use Netlify Functions you will now need to commit your code and push it up to a git repository like GitHub. Do that next, then in Netlify’s dashboard connect your git repository to the Netlify project that you created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Set your environment variables
&lt;/h3&gt;

&lt;p&gt;In your Netlify project dashboard, navigate to "Settings" &amp;gt; "Deploys" &amp;gt; “Environment” to set your environment variables. Enter the &lt;code&gt;MUX_SIGNING_KEY&lt;/code&gt; and &lt;code&gt;MUX_PRIVATE_KEY&lt;/code&gt; from the Create a URL Signing Key step above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Test in development
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open one terminal and run &lt;code&gt;netlify dev&lt;/code&gt;; this will start a local Netlify dev server.&lt;/li&gt;
&lt;li&gt;Open another terminal window and run &lt;code&gt;netlify-lambda serve ./functions&lt;/code&gt;; this will build your &lt;code&gt;functions/&lt;/code&gt;, get them ready to handle requests, and watch the filesystem for changes.&lt;/li&gt;
&lt;li&gt;In a third terminal window, curl your endpoint to test out the function (replace &lt;code&gt;&amp;lt;netlify-port&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;playback-id&amp;gt;&lt;/code&gt; with your values).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -I 'http://localhost:&amp;lt;netlify-port&amp;gt;/.netlify/functions/sign_playback_id?playbackId=&amp;lt;playback-id&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You should see a 302 (redirect) response with a &lt;code&gt;location&lt;/code&gt; header for the signed url.&lt;/p&gt;

&lt;p&gt;When you make any changes to your source files, &lt;code&gt;netlify-lambda serve&lt;/code&gt; will pick up on the changes and recompile the functions into &lt;code&gt;./.netlify/functions&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy
&lt;/h2&gt;

&lt;p&gt;When you’re ready to deploy, you can deploy from the command line with &lt;code&gt;netlify-lambda build ./functions &amp;amp;&amp;amp; netlify deploy --prod&lt;/code&gt;. This will build the functions and then push up the changes to Netlify.&lt;/p&gt;

&lt;p&gt;Try making a request to your cloud function on Netlify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -I 'https://&amp;lt;your-netlify-app&amp;gt;.netlify.com/.netlify/functions/sign_playback_id?playbackId=&amp;lt;playback-id&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Just like in dev, you should get back a 302 response with a &lt;code&gt;location&lt;/code&gt; header that points to the signed playback URL.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://stream.mux.com/${playbackId}.m3u8?token=${token}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This is what your response should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;HTTP/2 302
access-control-allow-origin: *
cache-control: no-cache
location: https://stream.mux.com/jqi1UtiO3gccQ019UcYjGJTLO9Ee00TLMY.m3u8?token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IkFsVFZncktBVTYzVldIdVplcDEwMVhZUk5mbHozeDIxRiJ9.eyJleHAiOjE1NzE3NjE0NzMsImF1ZCI6InYiLCJzdWIiOiJqcWkxVXRpTzNnY2NRMDE5VWNZakdKVExPOUVlMDBUTE1ZIn0.i7oANZ6inmwmGVQjon4WEv_gKcqQ2v8GuQA8xuCBdT0Reegkm6WyTdU-VloZvAt7duaRR3-T8dt147vUQjM1n70CLi0996pwMejYWIbRHUMqrDBtsENHG8T9jtz-EJcBGONSzgs7fBQIVQx8xJvPuX4YqpylDK_lNX0-RDqfhz5THAfuyxzePJod709msD8kbHAqnIke5lHzbQNHuO2ecNFVCb2ZozW7XkIEctyLxrDAK1ITtQV8iHek3whwO9S05kM-5bQzomJEliN3mXBqCwMBmyIp8l88YKl59tVXDdU-l-cZvZjt1GYKv0J7shO-oBYcr00NmVKkP7bie_w50w
date: Tue, 15 Oct 2019 16:24:33 GMT
age: 0
server: Netlify
x-nf-request-id: 80484951-e7ff-46f3-b78e-1349b8514bec-1426623
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
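&lt;p&gt;The &lt;code&gt;token&lt;/code&gt; in that &lt;code&gt;location&lt;/code&gt; header is a JWT. If you are curious what is inside one, you can decode its middle (payload) segment without verifying the signature; a Node sketch, using the payload segment of the token above:&lt;/p&gt;

```javascript
// Decode a JWT's payload (the middle base64url segment) to inspect its claims.
// This does NOT verify the signature; use it for debugging only.
function decodeJwtPayload(token) {
  const payloadSegment = token.split('.')[1];
  return JSON.parse(Buffer.from(payloadSegment, 'base64url').toString('utf8'));
}

// The header and signature segments are placeholders here; only the payload
// (copied from the token in the response above) is decoded.
const claims = decodeJwtPayload(
  'x.eyJleHAiOjE1NzE3NjE0NzMsImF1ZCI6InYiLCJzdWIiOiJqcWkxVXRpTzNnY2NRMDE5VWNZakdKVExPOUVlMDBUTE1ZIn0.y'
);
console.log(claims); // { exp: 1571761473, aud: 'v', sub: 'jqi1UtiO3gccQ019UcYjGJTLO9Ee00TLMY' }
```

The &lt;code&gt;sub&lt;/code&gt; claim is the playback ID, &lt;code&gt;aud&lt;/code&gt; is the audience, and &lt;code&gt;exp&lt;/code&gt; is the expiration time.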



&lt;p&gt;Now, in your player, you can use your Netlify function as the URL src. Here's an example for a web player (note that in order to get HLS in a &lt;code&gt;&amp;lt;video&amp;gt;&lt;/code&gt; tag to work outside of Safari you will need to use another library like &lt;a href="https://videojs.com"&gt;Video.js&lt;/a&gt; or &lt;a href="https://github.com/video-dev/hls.js/"&gt;HLS.js&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;video src="https://&amp;lt;your-netlify-project&amp;gt;.netlify.com/.netlify/functions/sign_playback_id?playbackId=&amp;lt;playback-id&amp;gt;"&amp;gt;&amp;lt;/video&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And here's an example on iOS with AVPlayer in Swift:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let url = URL(string: "https://&amp;lt;your-netlify-project&amp;gt;.netlify.com/.netlify/functions/sign_playback_id?playbackId=&amp;lt;playback-id&amp;gt;")

player = AVPlayer(url: url!)
player!.play()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The player will load the Netlify URL, get the 302 redirect to the signed Mux URL, and load the HLS manifest from stream.mux.com.&lt;/p&gt;

&lt;h2&gt;
  
  
  Restrict who can access your function
&lt;/h2&gt;

&lt;p&gt;Now that your cloud function is working, you can add some security around it to make sure you only allow authorized users to generate signed URLs.&lt;/p&gt;

&lt;p&gt;For web players you will want to change this line in the &lt;code&gt;sign_playback_id.js&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'Access-Control-Allow-Origin': '*',
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You can use the Access-Control-Allow-Origin header to control the CORS rules for the resource.&lt;/p&gt;
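&lt;p&gt;One common pattern is to echo back the request's &lt;code&gt;Origin&lt;/code&gt; header only when it matches an allowlist. A sketch (the allowlist and helper name here are hypothetical, not part of any library):&lt;/p&gt;

```javascript
// A hypothetical allowlist; replace with the origins your players run on.
const ALLOWED_ORIGINS = ['https://example.com', 'https://app.example.com'];

// Return the origin to echo back in Access-Control-Allow-Origin, or null to deny.
function corsOriginFor(requestOrigin) {
  return ALLOWED_ORIGINS.includes(requestOrigin) ? requestOrigin : null;
}

console.log(corsOriginFor('https://example.com'));  // 'https://example.com'
console.log(corsOriginFor('https://evil.example')); // null
```

In the handler you would then build the header from the request, for example &lt;code&gt;corsOriginFor(event.headers.origin)&lt;/code&gt;, instead of hard-coding &lt;code&gt;'*'&lt;/code&gt;.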

</description>
      <category>serverless</category>
    </item>
    <item>
      <title>Phoenix LiveView: Build Twitch Without Writing JavaScript</title>
      <dc:creator>Dylan Jhaveri</dc:creator>
      <pubDate>Thu, 31 Oct 2019 20:21:14 +0000</pubDate>
      <link>https://dev.to/mux/phoenix-liveview-build-twitch-without-writing-javascript-436j</link>
      <guid>https://dev.to/mux/phoenix-liveview-build-twitch-without-writing-javascript-436j</guid>
      <description>&lt;p&gt;Phoenix LiveView is a new experiment that allows developers to build rich, real-time user experiences with server-rendered HTML. If you’re not familiar with Phoenix, it’s the fully-featured web framework for the Elixir programming language. At Mux we use Phoenix and Elixir to power our API. I decided to start playing around with LiveView to see what it’s capable of. The idea I had for an example app is Snitch: it’s like “Twitch,” but for snitches (put away your checkbooks, potential investors). Under the hood, of course, we’re using &lt;a href="https://mux.com/live/"&gt;Mux Live Streaming&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;From the user’s perspective, first you create a “channel”. When that channel is created, Snitch will give you RTMP streaming credentials (just like Twitch does). As the user, you enter those streaming credentials into your mobile app or broadcast software and start streaming.&lt;/p&gt;

&lt;p&gt;Right here we have the perfect test case for LiveView. In the UI we show the user the streaming credentials and are now waiting for them to start streaming. Mux is going to &lt;a href="https://docs.mux.com/docs/webhooks#section-live-stream-events"&gt;send webhooks&lt;/a&gt; to our server when relevant events happen. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;video.live_stream.connected&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;video.live_stream.recording&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;video.live_stream.active&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;video.live_stream.disconnected&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In a typical web application, without LiveView, common solutions are to either use websockets to push new data to client applications or have those applications poll the server. About every second or so the browser would send a request to the server to get the updated data. &lt;strong&gt;But now with LiveView when a webhook hits the server we can re-render on the server-side and push those changes to the client.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Using LiveView to Handle Webhooks
&lt;/h2&gt;

&lt;p&gt;The first step is to follow the instructions on the Installation page to add LiveView to your Phoenix application. This includes adding the dependency, exposing the WebSocket route and the &lt;code&gt;phoenix_live_view&lt;/code&gt; JavaScript package for the client-side.&lt;/p&gt;

&lt;p&gt;After following the installation instructions, let’s add a route for the Mux Webhooks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight elixir"&gt;&lt;code&gt;    &lt;span class="n"&gt;scope&lt;/span&gt; &lt;span class="s2"&gt;"/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;SnitchWeb&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
      &lt;span class="n"&gt;pipe_through&lt;/span&gt; &lt;span class="ss"&gt;:api&lt;/span&gt;
      &lt;span class="n"&gt;post&lt;/span&gt; &lt;span class="s2"&gt;"/webhooks/mux"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;WebhookController&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:mux&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Then, in the Mux UI we can add this as our webhooks route. For local development I’m using ngrok to receive webhooks on my localhost server.&lt;/p&gt;

&lt;p&gt;The webhook controller is going to receive the payload and update the “channel” in our database by calling &lt;code&gt;Snitch.Channels.update_channel&lt;/code&gt;. Let’s look at the &lt;code&gt;update_channel/2&lt;/code&gt; function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="c1"&gt;# lib/snitch/channels.ex &lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;update_channel&lt;/span&gt;&lt;span class="p"&gt;(%&lt;/span&gt;&lt;span class="no"&gt;Channel&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;channel&lt;/span&gt;
  &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;Channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;changeset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attrs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="no"&gt;Repo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;notify_subs&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;code&gt;notify_subs/1&lt;/code&gt; is the new function we are going to call when a channel gets updated. This is where the LiveView magic happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="c1"&gt;# lib/snitch/channels.ex &lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;notify_subs&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="no"&gt;Phoenix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;PubSub&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;broadcast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Snitch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;PubSub&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"channel-updated:&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This function is going to broadcast a message so that subscribers can react to this change. More on that shortly.&lt;/p&gt;
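One thing to note: &lt;code&gt;notify_subs/1&lt;/code&gt; as written only matches the &lt;code&gt;{:ok, channel}&lt;/code&gt; tuple, so a failed &lt;code&gt;Repo.update/1&lt;/code&gt; would raise a &lt;code&gt;FunctionClauseError&lt;/code&gt;. A pass-through clause for the error tuple keeps the pipeline intact. This is a sketch of that idea (an assumption about the desired behavior, with a private &lt;code&gt;broadcast/1&lt;/code&gt; standing in for the &lt;code&gt;Phoenix.PubSub.broadcast/3&lt;/code&gt; call, which isn't available outside a running Phoenix app):

```elixir
defmodule Snitch.NotifySketch do
  # Success: broadcast the updated channel, then return the tuple
  # unchanged so callers still receive {:ok, channel}.
  def notify_subs({:ok, channel} = ok) do
    broadcast(channel)
    ok
  end

  # Failure: pass the {:error, changeset} tuple through untouched
  # instead of crashing with a FunctionClauseError.
  def notify_subs({:error, _changeset} = error), do: error

  # Stand-in for Phoenix.PubSub.broadcast/3.
  defp broadcast(_channel), do: :ok
end
```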

&lt;p&gt;Now let’s update the controller and tell the controller to render with LiveView:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="c1"&gt;# lib/snitch_web/controllers/channel_controller.ex &lt;/span&gt;
&lt;span class="no"&gt;Phoenix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;LiveView&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Controller&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;live_render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;conn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;SnitchWeb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;LiveChannelView&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="ss"&gt;session:&lt;/span&gt; &lt;span class="p"&gt;%{&lt;/span&gt;&lt;span class="ss"&gt;channel:&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And let’s create &lt;code&gt;SnitchWeb.LiveChannelView&lt;/code&gt;. When &lt;code&gt;notify_subs()&lt;/code&gt; broadcasts up above, this &lt;code&gt;LiveChannelView&lt;/code&gt; is the code that needs to be subscribed so it can push an update to the client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight elixir"&gt;&lt;code&gt;&lt;span class="k"&gt;defmodule&lt;/span&gt; &lt;span class="no"&gt;SnitchWeb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;LiveChannelView&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="no"&gt;Phoenix&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;LiveView&lt;/span&gt;

  &lt;span class="c1"&gt;#&lt;/span&gt;
  &lt;span class="c1"&gt;# When the controller calls live_render/3 this mount/2 function will get called&lt;/span&gt;
  &lt;span class="c1"&gt;# after the mount/2 function finishes then the render/1 function will get called&lt;/span&gt;
  &lt;span class="c1"&gt;# with the assigns&lt;/span&gt;
  &lt;span class="c1"&gt;#&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;mount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;channel&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:channel&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;connected?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;SnitchWeb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Endpoint&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"channel-updated:&lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="ss"&gt;:ok&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="n"&gt;set_assigns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(%{&lt;/span&gt;&lt;span class="ss"&gt;playback_url:&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;assigns&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; 
    &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;SnitchWeb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;ChannelView&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"show.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;assigns&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;assigns&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;SnitchWeb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;ChannelView&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"show_active.html"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;assigns&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="c1"&gt;#&lt;/span&gt;
  &lt;span class="c1"&gt;# Since the mount/2 function called "subscribe" to with the identifier&lt;/span&gt;
  &lt;span class="c1"&gt;# "channel-updated:#{channel.id}" then anytime data is broadcast this&lt;/span&gt;
  &lt;span class="c1"&gt;# handle_info/2 function will run and we have the power to set new values&lt;/span&gt;
  &lt;span class="c1"&gt;# with set_assigns/2&lt;/span&gt;
  &lt;span class="c1"&gt;#&lt;/span&gt;
  &lt;span class="c1"&gt;# After we assign new values, the render/1 function will get called with the&lt;/span&gt;
  &lt;span class="c1"&gt;# new assigns&lt;/span&gt;
  &lt;span class="c1"&gt;#&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;handle_info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="ss"&gt;:noreply&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="n"&gt;set_assigns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="n"&gt;set_assigns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;socket&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="n"&gt;playback_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Snitch&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="no"&gt;Channels&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;playback_url_for_channel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;socket&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;name:&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;status:&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mux_resource&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;connected:&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;mux_resource&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"connected"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;stream_key:&lt;/span&gt; &lt;span class="n"&gt;channel&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stream_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="o"&gt;|&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;assign&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;playback_url:&lt;/span&gt; &lt;span class="n"&gt;playback_url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To summarize what’s happening above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;live_render/3&lt;/code&gt; will invoke &lt;code&gt;mount/2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mount/2&lt;/code&gt; will subscribe using an identifier (&lt;code&gt;"channel-updated:#{channel.id}"&lt;/code&gt;) and set_assigns for the view&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;render/1&lt;/code&gt; will get called with the assigns&lt;/li&gt;
&lt;li&gt;any time something else in the app broadcasts to &lt;code&gt;"channel-updated:#{channel.id}"&lt;/code&gt;, this view calls &lt;code&gt;handle_info/2&lt;/code&gt;, which gives us the opportunity to call &lt;code&gt;set_assigns&lt;/code&gt; again, updating the assigns and re-rendering the template&lt;/li&gt;
&lt;li&gt;re-renders auto-magically get pushed to the client over a WebSocket, and the client updates the DOM&lt;/li&gt;
&lt;/ul&gt;
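That subscribe/broadcast/&lt;code&gt;handle_info&lt;/code&gt; loop can be exercised outside Phoenix, too. &lt;code&gt;Phoenix.PubSub&lt;/code&gt; needs a running Phoenix app, but Elixir’s built-in &lt;code&gt;Registry&lt;/code&gt; (with duplicate keys) can sketch the same topic mechanics; everything below is a stand-alone illustration, not code from the Snitch repo:

```elixir
# A duplicate-key Registry acts like a topic registry: many processes can
# register under the same key ("topic") and each one receives the message.
{:ok, _} = Registry.start_link(keys: :duplicate, name: DemoPubSub)

channel_id = 42
topic = "channel-updated:#{channel_id}"

# "Subscribe": register the current process under the topic, the way
# mount/2 subscribes via SnitchWeb.Endpoint.subscribe/1.
{:ok, _} = Registry.register(DemoPubSub, topic, [])

# "Broadcast": deliver the updated channel to every subscriber, the way
# notify_subs/1 broadcasts via Phoenix.PubSub.
Registry.dispatch(DemoPubSub, topic, fn entries ->
  for {pid, _value} <- entries do
    send(pid, {:channel_updated, %{id: channel_id}})
  end
end)

# "handle_info": the subscriber receives the message and can update state.
received =
  receive do
    {:channel_updated, channel} -> channel
  after
    1_000 -> raise "no broadcast received"
  end
```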

&lt;p&gt;The &lt;code&gt;show&lt;/code&gt; and &lt;code&gt;show_active&lt;/code&gt; templates that we use in LiveChannelView are ordinary templates, except that instead of the &lt;code&gt;eex&lt;/code&gt; extension they use the &lt;code&gt;leex&lt;/code&gt; extension, which stands for Live Embedded Elixir.&lt;/p&gt;
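Both extensions compile through EEx, which ships with Elixir, so the assigns-driven rendering can be seen in a one-liner. The template string below is a made-up fragment, not one of Snitch’s actual templates:

```elixir
# EEx's default SmartEngine expands @status to a lookup in the `assigns`
# binding, which is how values set by set_assigns/2 reach the template.
template = ~s(<p>Status: <%= @status %></p>)

html = EEx.eval_string(template, assigns: [status: "active"])
# html == "<p>Status: active</p>"
```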

&lt;p&gt;This is currently deployed at &lt;a href="https://snitch.world"&gt;snitch.world&lt;/a&gt;, and the full code is &lt;a href="https://github.com/dylanjha/snitch"&gt;on GitHub&lt;/a&gt;. You can clone it and run it yourself; you’ll also need to sign up for a free account with Mux to get an API key. Feel free to reach out if you have any questions!&lt;/p&gt;

&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/3IQlN_Ax6mg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>elixir</category>
    </item>
  </channel>
</rss>
