<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Randy Findley</title>
    <description>The latest articles on DEV Community by Randy Findley (@rgfindl).</description>
    <link>https://dev.to/rgfindl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F98172%2Fbf21237d-a7fe-45d5-a7d3-8036d444eb59.jpeg</url>
      <title>DEV Community: Randy Findley</title>
      <link>https://dev.to/rgfindl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rgfindl"/>
    <language>en</language>
    <item>
      <title>Live Streaming Origin</title>
      <dc:creator>Randy Findley</dc:creator>
      <pubDate>Tue, 16 Jun 2020 11:53:08 +0000</pubDate>
      <link>https://dev.to/rgfindl/live-streaming-origin-11ka</link>
      <guid>https://dev.to/rgfindl/live-streaming-origin-11ka</guid>
      <description>&lt;p&gt;If you're just joining us, please take a look at part 1 in this 3-part series.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part 1 - &lt;a href="https://dev.to/rgfindl/live-streaming-server-395j"&gt;Server&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Part 2 - &lt;a href="https://dev.to/rgfindl/live-streaming-proxy-225b"&gt;Proxy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Part 3 - Origin&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QgElOKw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://finbits.io/images/blog/live-streaming-origin.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QgElOKw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://finbits.io/images/blog/live-streaming-origin.jpg" alt="" title="Origin"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog post is about the Origin.  Remember, we have 3 services in this architecture: Proxy -&amp;gt; Server &amp;lt;- Origin.&lt;/p&gt;

&lt;p&gt;How do we route &amp;amp; cache HTTP traffic to a fleet of RTMP Servers to serve the HLS files?&lt;/p&gt;

&lt;p&gt;We need a single endpoint like &lt;code&gt;live.finbits.io&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A user will play the video using a URL like &lt;code&gt;live.finbits.io/&amp;lt;stream key&amp;gt;/live.m3u8&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;HTTP is &lt;strong&gt;stateless&lt;/strong&gt;, so we can use an AWS Application Load Balancer (ALB).  Yay!&lt;/p&gt;

&lt;p&gt;We also use AWS CloudFront as the CDN.  It looks like this.&lt;/p&gt;

&lt;p&gt;Route 53 -&amp;gt; CloudFront -&amp;gt; ALB -&amp;gt; Origin(s) -&amp;gt; Server(s).&lt;/p&gt;

&lt;p&gt;But how does the Origin know which Server to fetch the HLS files from?&lt;/p&gt;

&lt;p&gt;We use a Redis cache to store the mapping between "stream key" and Server "IP:PORT".  &lt;/p&gt;

&lt;p&gt;Our Origin is simply NGINX with a small backend that performs the Redis cache lookup.&lt;/p&gt;

&lt;p&gt;Let's take a look at our NGINX config.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worker_processes  auto;

error_log /dev/stdout info;

events {
  worker_connections  1024;
}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /dev/stdout main;

  sendfile        on;

  keepalive_timeout  65;

  gzip on;

  proxy_cache_path /tmp/cache/ levels=1:2 keys_zone=CONTENTCACHE:10m max_size=15g inactive=10m use_temp_path=off;

  ignore_invalid_headers off;

  upstream node-backend {
    server localhost:3000 max_fails=0;
  }

  &amp;lt;% servers.forEach(function(server, index) { %&amp;gt;
  upstream media&amp;lt;%= index %&amp;gt;-backend {
    server &amp;lt;%= server %&amp;gt; max_fails=0;
  }
  &amp;lt;% }); %&amp;gt;

  server {
    listen 80;
    server_name localhost;
    sendfile off;

    &amp;lt;% servers.forEach(function(server, index) { %&amp;gt;
    location ~ ^/&amp;lt;%= server %&amp;gt;/(.*)$ {
      internal;
      proxy_pass http://media&amp;lt;%= index %&amp;gt;-backend/$1$is_args$args;
    }
    &amp;lt;% }); %&amp;gt;

    location ~ ^/(.*live\.m3u8)$ {
      #
      # Cache results on local disc
      #
      proxy_cache CONTENTCACHE;
      proxy_cache_lock on;
      proxy_cache_key $scheme$proxy_host$uri;
      proxy_cache_valid 1m;
      proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

      #
      # CORS
      #
      include /etc/nginx/nginx.cors.conf;

      #
      # Proxy Pass
      #
      proxy_pass http://node-backend/$1$is_args$args;
    }

    location ~ ^/(.*index\.m3u8)$ {
      #
      # Cache results on local disc
      #
      proxy_cache CONTENTCACHE;
      proxy_cache_lock on;
      proxy_cache_key $scheme$proxy_host$uri;
      proxy_cache_valid 1s;
      proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

      #
      # CORS
      #
      include /etc/nginx/nginx.cors.conf;

      #
      # Proxy Pass
      #
      proxy_pass http://node-backend/$1$is_args$args;
    }

    location ~ ^/(.*\.ts)$ {
      #
      # Cache results on local disc
      #
      proxy_cache CONTENTCACHE;
      proxy_cache_lock on;
      proxy_cache_key $scheme$proxy_host$uri;
      proxy_cache_valid 60s;
      proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;

      #
      # CORS
      #
      include /etc/nginx/nginx.cors.conf;

      #
      # Proxy Pass
      #
      proxy_pass http://node-backend/$1$is_args$args;
    }

    location /healthcheck {
      proxy_pass http://node-backend/healthcheck$is_args$args;
    }

    location /nginx_status {
      stub_status on;

      access_log off;
      allow 127.0.0.1;
      deny all;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;When a request comes in, like &lt;code&gt;live.finbits.io/&amp;lt;stream key&amp;gt;/live.m3u8&lt;/code&gt;, it first hits the &lt;code&gt;node-backend&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;node-backend&lt;/code&gt; performs the Redis cache lookup to get the Server IP:PORT, then responds with an internal NGINX redirect to the corresponding &lt;code&gt;media-backend&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;media-backend&lt;/code&gt; performs a proxy_pass to the Server to fetch the HLS.&lt;/p&gt;
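The lookup-and-redirect step can be sketched in Node. This is illustrative only: `lookupServer`, `internalRedirectPath`, and the Express-style handler are assumptions, not the repo's actual code; the real backend reads the mapping from Redis.

```javascript
// Illustrative sketch of the node-backend's job: look up the Server that
// owns a stream key and answer with an internal NGINX redirect.
// The X-Accel-Redirect header makes NGINX re-route the request against
// the internal `location` blocks generated for each media backend.

// streamKey -> "IP:PORT" mapping; the real service fetches this from Redis.
const lookupServer = (streams, streamKey) => streams[streamKey] || null;

// Build the internal path NGINX expects: /IP:PORT/streamKey/live.m3u8
const internalRedirectPath = (serverAddr, requestPath) =>
  `/${serverAddr}${requestPath}`;

// Express-style handler shape (illustrative):
const makeHandler = (streams) => (req, res) => {
  const streamKey = req.path.split('/')[1];
  const server = lookupServer(streams, streamKey);
  if (!server) return res.status(404).end();
  res.set('X-Accel-Redirect', internalRedirectPath(server, req.path));
  res.end();
};
```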

&lt;p&gt;What about caching?  &lt;/p&gt;

&lt;p&gt;The cache headers are added by the Server. &lt;/p&gt;

&lt;p&gt;The Origin has an internal NGINX cache.  The goal is to reduce the load on the Servers as much as possible.  &lt;/p&gt;

&lt;p&gt;We use the following NGINX directive to prevent a thundering herd on the Servers: &lt;code&gt;proxy_cache_lock on;&lt;/code&gt;.  If many simultaneous requests arrive before NGINX has populated the cache, NGINX blocks all but one of them until the cache is filled.  This keeps our Servers safe.&lt;/p&gt;
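The idea behind that directive can be sketched as request coalescing. This is a rough Node analogy, not NGINX's implementation: concurrent requests for the same key share one upstream fetch.

```javascript
// Rough analogy to `proxy_cache_lock on;`: if a fetch for this key is
// already in flight, later callers wait on the same promise instead of
// issuing another request to the upstream Server.
const inflight = new Map();

const coalesced = (key, fetchFn) => {
  if (inflight.has(key)) return inflight.get(key);
  const p = Promise.resolve()
    .then(fetchFn)
    .finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
};
```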

&lt;p&gt;We finally propagate our cache headers all the way back to the CloudFront CDN, which is our primary cache point.&lt;/p&gt;

&lt;p&gt;What about CORS headers?&lt;/p&gt;

&lt;p&gt;We set the following CORS headers on every request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if ($request_method = 'OPTIONS') {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    #
    # Custom headers and headers various browsers *should* be OK with but aren't
    #
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    #
    # Tell client that this pre-flight info is valid for 20 days
    #
    add_header 'Access-Control-Max-Age' 1728000;
    add_header 'Content-Type' 'text/plain; charset=utf-8';
    add_header 'Content-Length' 0;
    return 204;
}
if ($request_method = 'POST') {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
if ($request_method = 'GET') {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As with the Proxy, we have to keep the list of Servers in sync.&lt;/p&gt;

&lt;p&gt;When the service starts we fetch the list of Servers (IP:PORT) and add them to the NGINX configuration.  Now we're ready to route traffic.&lt;/p&gt;

&lt;p&gt;We then run a cron job to perform this action again, to pick up any new Servers.  NGINX is reloaded with zero downtime.&lt;/p&gt;
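The refresh loop can be sketched like this; the collaborator functions are hypothetical stand-ins for the real ECS lookup, EJS render, config write, and `nginx -s reload` shell-out.

```javascript
// Illustrative sketch of the cron refresh: fetch the current Server list,
// regenerate the NGINX config, and reload only when the list changed.
// `nginx -s reload` starts new workers with the new config and drains the
// old ones, which is what gives us the zero-downtime reload.
let lastServers = '';

const refresh = ({ fetchServers, renderConfig, writeConfig, reloadNginx }) => {
  const servers = fetchServers();
  const key = servers.join(',');
  if (key === lastServers) return false;   // unchanged: skip the reload
  writeConfig(renderConfig(servers));
  reloadNginx();                           // e.g. shell out to `nginx -s reload`
  lastServers = key;
  return true;
};
```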

&lt;h2&gt;
  
  
  Service
&lt;/h2&gt;

&lt;p&gt;Now it's time to deploy our Origin.&lt;/p&gt;

&lt;p&gt;The Origin service has 2 stacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-origin/blob/master/origin/stacks/ecr.stack.yml"&gt;ecr&lt;/a&gt; - Docker image registry&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-origin/blob/master/origin/stacks/service.stack.yml"&gt;service&lt;/a&gt; - Fargate service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First create the Docker ECR registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh ./stack-up.sh ecr
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can build, tag, and push the Docker image to the registry.  &lt;/p&gt;

&lt;p&gt;First update the &lt;a href="https://github.com/rgfindl/live-streaming-origin/blob/master/origin/package.json#L9-L13"&gt;package.json&lt;/a&gt; scripts to include your AWS account id.&lt;/p&gt;

&lt;p&gt;To build, tag, and push the Docker image to the registry, run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn run deploy &amp;lt;version&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can deploy the service stack which will deploy our new image to Fargate.&lt;/p&gt;

&lt;p&gt;First update the &lt;code&gt;Version&lt;/code&gt; &lt;a href="https://github.com/rgfindl/live-streaming-origin/blob/master/origin/stacks/stack-up.sh#L21"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh ./stack-up.sh service
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Your Origin should now be running in your ECS cluster as a Fargate task.  &lt;/p&gt;

&lt;p&gt;In conclusion... I think this architecture and implementation work pretty well, but they haven't been battle-tested.  Here are some things I'd still like to do and some areas for improvement.&lt;/p&gt;

&lt;p&gt;1.) Test with more clients &amp;amp; videos.&lt;/p&gt;

&lt;p&gt;I tested with VLC, FFMPEG, and my phone.  I pushed live video, screen recordings, and VODs in a loop.  It always worked... but it would be good to test with many different clients before going to production.&lt;/p&gt;

&lt;p&gt;2.) Load testing.&lt;/p&gt;

&lt;p&gt;It would be a good idea to see how this architecture does under load.&lt;/p&gt;

&lt;p&gt;RTMP load testing?  The limiting factor is the number of Servers.  We can only push one stream per server.  Not much to test here.  Just need to make sure that one Server can handle a large video stream.&lt;/p&gt;

&lt;p&gt;HTTP load testing?  This could be done pretty easily using something like &lt;code&gt;wrk&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;3.) Auto scaling.&lt;/p&gt;

&lt;p&gt;This is the big one.  If we were to offer a service like Twitch, how would we scale up the number of Servers to meet a growing number of streamers?  How would we scale down to save costs without terminating a user's stream?&lt;/p&gt;

&lt;p&gt;I think we'd need a custom Controller to perform the scaling and coordinate all the services so they update their configurations in real time instead of every minute via cron.&lt;/p&gt;

&lt;p&gt;That's it.  I hope you enjoyed learning about RTMP-to-HLS streaming.&lt;/p&gt;

&lt;p&gt;Originally published here: &lt;a href="https://finbits.io/blog/live-streaming-origin/"&gt;https://finbits.io/blog/live-streaming-origin/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>rtmp</category>
      <category>hls</category>
      <category>ffmpeg</category>
    </item>
    <item>
      <title>Live Streaming Proxy</title>
      <dc:creator>Randy Findley</dc:creator>
      <pubDate>Tue, 16 Jun 2020 11:48:18 +0000</pubDate>
      <link>https://dev.to/rgfindl/live-streaming-proxy-225b</link>
      <guid>https://dev.to/rgfindl/live-streaming-proxy-225b</guid>
      <description>&lt;p&gt;If you're just joining us, please take a look at part 1 in this 3-part series.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part 1 - &lt;a href="https://dev.to/rgfindl/live-streaming-server-395j"&gt;Server&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Part 2 - Proxy&lt;/li&gt;
&lt;li&gt;Part 3 - &lt;a href="https://dev.to/rgfindl/live-streaming-origin-11ka"&gt;Origin&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--slcl7a7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://finbits.io/images/blog/live-streaming-proxy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--slcl7a7k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://finbits.io/images/blog/live-streaming-proxy.jpg" alt="" title="Proxy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This blog post is about the Proxy.  Remember, we have 3 services in this architecture: Proxy -&amp;gt; Server &amp;lt;- Origin.&lt;/p&gt;

&lt;p&gt;How do we route &lt;strong&gt;stateful&lt;/strong&gt; RTMP traffic to a fleet of RTMP Servers?&lt;/p&gt;

&lt;p&gt;We need a single endpoint like &lt;code&gt;rtmp.finbits.io&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;A user will push RTMP using a URL like &lt;code&gt;rtmp.finbits.io/stream/&amp;lt;stream key&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;RTMP is stateful which makes load balancing and scaling much more challenging.  &lt;/p&gt;

&lt;p&gt;If this were a stateless application we'd use AWS ALB with an Auto Scaling Group.&lt;/p&gt;

&lt;p&gt;Can we use AWS ALB?  Nope, ALB doesn't support RTMP.&lt;/p&gt;

&lt;p&gt;An Auto Scaling Group might work but scaling down will be a challenge.  You wouldn't want to terminate a Server that a user is actively streaming to.&lt;/p&gt;

&lt;p&gt;Why not just redirect RTMP directly to the Servers?  This would be great, but RTMP redirects are fairly new and not all clients support them.&lt;/p&gt;

&lt;p&gt;So, how do we route &lt;strong&gt;stateful&lt;/strong&gt; RTMP traffic to a fleet of RTMP Servers?&lt;/p&gt;

&lt;p&gt;We can use HAProxy and weighted Route 53 DNS routing.  We could run any number of Proxy (HAProxy) instances, all with public IPs, and route traffic across them via Route 53 weighted record sets.&lt;/p&gt;

&lt;p&gt;The only gotcha with this service is making sure it picks up any new Servers that are added to handle load.&lt;/p&gt;

&lt;p&gt;When the service starts we fetch the list of Servers (IP:PORT) and add them to the HAProxy configuration.  Now we're ready to route traffic.&lt;/p&gt;

&lt;p&gt;We then run a cron job to perform this action again, to pick up any new Servers.  HAProxy is reloaded with zero downtime.&lt;/p&gt;

&lt;p&gt;Let's take a look at the HAProxy config.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global
 pidfile /var/run/haproxy.pid
 maxconn &amp;lt;%= servers.length %&amp;gt;

defaults
 log global
 timeout connect 10s
 timeout client 30s
 timeout server 30s

frontend ft_rtmp
 bind *:1935 name rtmp
 mode tcp
 maxconn &amp;lt;%= servers.length %&amp;gt;
 default_backend bk_rtmp

frontend ft_http
 bind *:8000 name http
 mode http
 maxconn 600
 default_backend bk_http

backend bk_http
 mode http
 errorfile 503 /usr/local/etc/haproxy/healthcheck.http

backend bk_rtmp 
 mode tcp
 balance roundrobin
 &amp;lt;% servers.forEach(function(server, index) { %&amp;gt;
 server media&amp;lt;%= index %&amp;gt; &amp;lt;%= server %&amp;gt; check maxconn 1 weight 10
 &amp;lt;% }); %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, we're using the EJS template engine to generate the config.&lt;/p&gt;
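For illustration, here is what the EJS loop in `backend bk_rtmp` expands to, written as a plain function (the real service renders the EJS template instead):

```javascript
// One `server` line per RTMP Server; `check` enables health checks and
// `maxconn 1` caps each Server at a single RTMP connection.
const renderServerLines = (servers) =>
  servers
    .map((server, index) =>
      ` server media${index} ${server} check maxconn 1 weight 10`)
    .join('\n');
```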

&lt;p&gt;&lt;strong&gt;global&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global
 pidfile /var/run/haproxy.pid
 maxconn &amp;lt;%= servers.length %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We store the pidfile so we can use it to restart the service when our cron job runs to update the Server list.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;maxconn&lt;/code&gt; is set to the number of Servers.  In our design each Server can only accept 1 connection: FFMPEG uses a lot of CPU, and we're running Fargate tasks with modest CPU allocations.&lt;/p&gt;

&lt;p&gt;You could use EC2 instances instead of Fargate, with more powerful instance types, and then handle more connections per Server.&lt;/p&gt;

&lt;p&gt;It might also be cool to use NVIDIA hardware acceleration with FFMPEG, but I didn't get that far; it was getting complicated with Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;frontend&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;frontend ft_rtpm
 bind *:1935 name rtmp
 mode tcp
 maxconn &amp;lt;%= servers.length %&amp;gt;
 default_backend bk_rtmp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Here we declare the RTMP frontend on port 1935.  It uses the RTMP backend below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;backend&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;backend bk_rtmp 
 mode tcp
 balance roundrobin
 &amp;lt;% servers.forEach(function(server, index) { %&amp;gt;
 server media&amp;lt;%= index %&amp;gt; &amp;lt;%= server %&amp;gt; check maxconn 1 weight 10
 &amp;lt;% }); %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The backend does the routing magic.  In our case it uses a simple &lt;code&gt;roundrobin&lt;/code&gt; load balancing algorithm.&lt;/p&gt;
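Round-robin with `maxconn 1` effectively means: hand each new stream to the next free Server. A toy sketch of that selection logic (illustrative, not HAProxy's actual algorithm):

```javascript
// Toy round-robin picker honoring a one-connection-per-Server cap.
const makeBalancer = (servers) => {
  let next = 0;
  const busy = new Set();
  return {
    acquire() {
      for (let i = 0; i !== servers.length; i++) {
        const idx = (next + i) % servers.length;
        if (!busy.has(idx)) {
          next = (idx + 1) % servers.length;
          busy.add(idx);
          return servers[idx];
        }
      }
      return null; // every Server already holds its one stream
    },
    release(server) {
      busy.delete(servers.indexOf(server));
    },
  };
};
```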

&lt;p&gt;&lt;strong&gt;http&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The http frontend and backend serve a static http response for our Route 53 healthcheck.&lt;/p&gt;

&lt;h2&gt;
  
  
  Service
&lt;/h2&gt;

&lt;p&gt;Now it's time to deploy our Proxy.&lt;/p&gt;

&lt;p&gt;The Proxy service has 2 stacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-proxy/blob/master/proxy/stacks/ecr.stack.yml"&gt;ecr&lt;/a&gt; - Docker image registry&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-proxy/blob/master/proxy/stacks/service.stack.yml"&gt;service&lt;/a&gt; - Fargate service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First create the Docker ECR registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh ./stack-up.sh ecr
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can build, tag, and push the Docker image to the registry.  &lt;/p&gt;

&lt;p&gt;First update the &lt;a href="https://github.com/rgfindl/live-streaming-proxy/blob/master/proxy/package.json#L9-L13"&gt;package.json&lt;/a&gt; scripts to include your AWS account id.&lt;/p&gt;

&lt;p&gt;To build, tag, and push the Docker image to the registry, run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn run deploy &amp;lt;version&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now we can deploy the service stack which will deploy our new image to Fargate.&lt;/p&gt;

&lt;p&gt;First update the &lt;code&gt;Version&lt;/code&gt; &lt;a href="https://github.com/rgfindl/live-streaming-proxy/blob/master/proxy/stacks/stack-up.sh#L21"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh ./stack-up.sh service
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Your Proxy should now be running in your ECS cluster as a Fargate task.  &lt;/p&gt;

&lt;p&gt;Now on to Part 3, the &lt;a href="https://dev.to/rgfindl/live-streaming-origin-11ka"&gt;Origin&lt;/a&gt;, to learn how we route HTTP traffic to the appropriate HLS Server.&lt;/p&gt;

&lt;p&gt;Originally published here: &lt;a href="https://finbits.io/blog/live-streaming-proxy/"&gt;https://finbits.io/blog/live-streaming-proxy/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>rtmp</category>
      <category>hls</category>
      <category>ffmpeg</category>
    </item>
    <item>
      <title>Live Streaming Server</title>
      <dc:creator>Randy Findley</dc:creator>
      <pubDate>Tue, 16 Jun 2020 11:34:58 +0000</pubDate>
      <link>https://dev.to/rgfindl/live-streaming-server-395j</link>
      <guid>https://dev.to/rgfindl/live-streaming-server-395j</guid>
      <description>&lt;p&gt;I decided to build a live streaming server that accepts RTMP input and outputs Adaptive Bitrate (ABR) HLS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rgfindl/live-streaming-server" rel="noopener noreferrer"&gt;https://github.com/rgfindl/live-streaming-server&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wanted users to be able to stream anytime using their private stream key.  Much like how Twitch, Facebook, and YouTube do it.&lt;/p&gt;

&lt;p&gt;I also wanted the live stream recorded and I wanted the user to be able to relay their live stream to other destinations like Twitch, Facebook, and YouTube.&lt;/p&gt;

&lt;p&gt;Here is a screenshot of my stream playing in the browser, Facebook, Twitch, and YouTube.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffinbits.io%2Fimages%2Fblog%2Flive-streaming-server-example.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffinbits.io%2Fimages%2Fblog%2Flive-streaming-server-example.png" title="Example" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final architecture is actually 3 services: Proxy -&amp;gt; Server &amp;lt;- Origin&lt;/p&gt;

&lt;p&gt;I will cover the Proxy and the Origin in parts &lt;a href="https://dev.to/rgfindl/live-streaming-proxy-225b"&gt;2&lt;/a&gt; and &lt;a href="https://dev.to/rgfindl/live-streaming-origin-11ka"&gt;3&lt;/a&gt; of this series.&lt;/p&gt;

&lt;p&gt;Take a look at the architecture:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffinbits.io%2Fimages%2Fblog%2Flive-streaming-server-full.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffinbits.io%2Fimages%2Fblog%2Flive-streaming-server-full.jpg" title="Architecture" width="781" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All 3 services are running as Docker containers on AWS Fargate.&lt;/p&gt;

&lt;p&gt;RTMP is sent to the Proxy at &lt;code&gt;rtmp.finbits.io&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;HLS is served by the Origin at &lt;code&gt;live.finbits.io&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The Redis cache stores the stream key to Server mapping so the Origin knows which Server to fetch the HLS from.  We could have many Servers to meet demand.&lt;/p&gt;

&lt;p&gt;S3 is used to store the recordings.  The recordings are single-bitrate HLS: the largest bitrate from the ABR set.&lt;/p&gt;

&lt;p&gt;All 3 services scale independently to meet demand.  The Server would scale the most.  Transcoding RTMP into ABR HLS is very CPU intensive.&lt;/p&gt;
&lt;h2&gt;
  
  
  Node Media Server
&lt;/h2&gt;

&lt;p&gt;For the RTMP Server I decided to use a fork of &lt;a href="https://github.com/illuspas/Node-Media-Server" rel="noopener noreferrer"&gt;Node Media Server&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rgfindl/Node-Media-Server" rel="noopener noreferrer"&gt;https://github.com/rgfindl/Node-Media-Server&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Node Media Server accepts RTMP on port 1935.  FFMPEG is then used to transcode the RTMP input into HLS.  FFMPEG is also used to relay to social media destinations.&lt;/p&gt;

&lt;p&gt;Why Node Media Server?&lt;/p&gt;

&lt;p&gt;It is actively maintained, has a lot of GitHub stars, and I like Node.js.&lt;/p&gt;

&lt;p&gt;I first tried &lt;a href="https://github.com/arut/nginx-rtmp-module" rel="noopener noreferrer"&gt;nginx-rtmp-module&lt;/a&gt;, because nginx is great.  But I couldn't get the social relay working the way I wanted.  Also, this project is no longer maintained and is pretty old.&lt;/p&gt;

&lt;p&gt;I also looked at &lt;a href="https://github.com/ossrs/srs" rel="noopener noreferrer"&gt;ossrs/srs&lt;/a&gt;, which seems to be based on nginx-rtmp-module.  It didn't seem as flexible, maybe because I'm not a C/C++ developer.&lt;/p&gt;

&lt;p&gt;Why fork Node Media Server?  What did I change?&lt;/p&gt;

&lt;p&gt;I added a few more options to the Node Media Server &lt;code&gt;config&lt;/code&gt; to get the HLS working.  Specifically the &lt;code&gt;config.trans.tasks&lt;/code&gt; object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const config = {
  ...
  trans: {
    tasks: [
      raw: [...], # FFMPEG command
      ouPaths: [...], # HLS output paths
      cleanup: false, # Don't delete the ouPaths, we'll do it later
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'll talk about each of these in more detail below.&lt;/p&gt;

&lt;h2&gt;
  
  
  FFMPEG
&lt;/h2&gt;

&lt;p&gt;FFMPEG is used to transcode the RTMP input into 3 HLS outputs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;640 x 360&lt;/li&gt;
&lt;li&gt;842 x 480&lt;/li&gt;
&lt;li&gt;1280 x 720
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ffmpeg -hide_banner -y -fflags nobuffer -i rtmp://127.0.0.1:1935/stream/test \
  -vf scale=w=640:h=360:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v libx264 -preset veryfast -profile:v main -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_list_size 6 -hls_flags delete_segments -max_muxing_queue_size 1024 -start_number 100 -b:v 800k -maxrate 856k -bufsize 1200k -b:a 96k -hls_segment_filename media/test/360p/%03d.ts media/test/360p.m3u8 \
  -vf scale=w=842:h=480:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v libx264 -preset veryfast -profile:v main -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_list_size 6 -hls_flags delete_segments -max_muxing_queue_size 1024 -start_number 100 -b:v 1400k -maxrate 1498k -bufsize 2100k -b:a 128k -hls_segment_filename media/test/480p/%03d.ts media/test/480p.m3u8 \
  -vf scale=w=1280:h=720:force_original_aspect_ratio=decrease -c:a aac -ar 48000 -c:v libx264 -preset veryfast -profile:v main -crf 20 -sc_threshold 0 -g 48 -keyint_min 48 -hls_time 4 -hls_list_size 6 -hls_flags delete_segments -max_muxing_queue_size 1024 -start_number 100 -b:v 2800k -maxrate 2996k -bufsize 4200k -b:a 128k -hls_segment_filename media/test/720p/%03d.ts media/test/720p.m3u8

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What about the ABR playlist file? &lt;/p&gt;

&lt;p&gt;We create that once the first HLS playlist file is created.  It looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
480p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/index.m3u8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
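Generating that master playlist from the three renditions is a small string-building job. Here is a sketch; the function name and variant shape are illustrative, not the repo's actual code.

```javascript
// Build the ABR master playlist from a list of variant renditions.
// Each variant contributes an EXT-X-STREAM-INF line plus the path to
// its own media playlist.
const masterPlaylist = (variants) =>
  ['#EXTM3U', '#EXT-X-VERSION:3']
    .concat(variants.flatMap((v) => [
      `#EXT-X-STREAM-INF:BANDWIDTH=${v.bandwidth},RESOLUTION=${v.resolution}`,
      `${v.name}/index.m3u8`,
    ]))
    .join('\n');
```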



&lt;h2&gt;
  
  
  Server
&lt;/h2&gt;

&lt;p&gt;Our Server does the following things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Takes RTMP input and converts it to HLS&lt;/li&gt;
&lt;li&gt;Creates an ABR HLS playlist&lt;/li&gt;
&lt;li&gt;Copies the highest bitrate HLS to S3&lt;/li&gt;
&lt;li&gt;Relays RTMP to social destinations based on query parameters&lt;/li&gt;
&lt;li&gt;Exposes a hook for stream key validation&lt;/li&gt;
&lt;li&gt;Serves HLS via NGINX reverse proxy with cache headers and CORS&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  app.js
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/server/app.js" rel="noopener noreferrer"&gt;app.js&lt;/a&gt; does most of the work.  Let's take a look at that file in its entirety.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const NodeMediaServer = require('node-media-server');
const _ = require('lodash');
const { join } = require('path');
const querystring = require('querystring');
const fs = require('./lib/fs');
const hls = require('./lib/hls');
const abr = require('./lib/abr');
const ecs = require('./lib/ecs');
const cache = require('./lib/cache');
const logger = require('./lib/logger');
const utils = require('./lib/utils');

const LOG_TYPE = 4;
logger.setLogType(LOG_TYPE);

// init RTMP server
const init = async () =&amp;gt; {
  try {
    // Fetch the container server address (IP:PORT)
    // The IP is from the EC2 server.  The PORT is from the container.
    const SERVER_ADDRESS = process.env.NODE_ENV === 'production' ? await ecs.getServer() : '';

    // Set the Node-Media-Server config.
    const config = {
      logType: LOG_TYPE,
      rtmp: {
        port: 1935,
        chunk_size: 60000,
        gop_cache: true,
        ping: 30,
        ping_timeout: 60
      },
      http: {
        port: 8080,
        mediaroot: process.env.MEDIA_ROOT || 'media',
        allow_origin: '*',
        api: true
      },
      auth: {
        api: false
      },
      relay: {
        ffmpeg: process.env.FFMPEG_PATH || '/usr/local/bin/ffmpeg',
        tasks: [
          {
            app: 'stream',
            mode: 'push',
            edge: 'rtmp://127.0.0.1/hls',
          },
        ],
      },
      trans: {
        ffmpeg: process.env.FFMPEG_PATH || '/usr/local/bin/ffmpeg',
        tasks: [
          {
            app: 'hls',
            hls: true,
            raw: [
              '-vf',
              'scale=w=640:h=360:force_original_aspect_ratio=decrease',
              '-c:a',
              'aac',
              '-ar',
              '48000',
              '-c:v',
              'libx264',
              '-preset',
              'veryfast',
              '-profile:v',
              'main',
              '-crf',
              '20',
              '-sc_threshold',
              '0',
              '-g',
              '48',
              '-keyint_min',
              '48',
              '-hls_time',
              '6',
              '-hls_list_size',
              '10',
              '-hls_flags',
              'delete_segments',
              '-max_muxing_queue_size',
              '1024',
              '-start_number',
              '${timeInMilliseconds}',
              '-b:v',
              '800k',
              '-maxrate',
              '856k',
              '-bufsize',
              '1200k',
              '-b:a',
              '96k',
              '-hls_segment_filename',
              '${mediaroot}/${streamName}/360p/%03d.ts',
              '${mediaroot}/${streamName}/360p/index.m3u8',
              '-vf',
              'scale=w=842:h=480:force_original_aspect_ratio=decrease',
              '-c:a',
              'aac',
              '-ar',
              '48000',
              '-c:v',
              'libx264',
              '-preset',
              'veryfast',
              '-profile:v',
              'main',
              '-crf',
              '20',
              '-sc_threshold',
              '0',
              '-g',
              '48',
              '-keyint_min',
              '48',
              '-hls_time',
              '6',
              '-hls_list_size',
              '10',
              '-hls_flags',
              'delete_segments',
              '-max_muxing_queue_size',
              '1024',
              '-start_number',
              '${timeInMilliseconds}',
              '-b:v',
              '1400k',
              '-maxrate',
              '1498k',
              '-bufsize',
              '2100k',
              '-b:a',
              '128k',
              '-hls_segment_filename',
              '${mediaroot}/${streamName}/480p/%03d.ts',
              '${mediaroot}/${streamName}/480p/index.m3u8',
              '-vf',
              'scale=w=1280:h=720:force_original_aspect_ratio=decrease',
              '-c:a',
              'aac',
              '-ar',
              '48000',
              '-c:v',
              'libx264',
              '-preset',
              'veryfast',
              '-profile:v',
              'main',
              '-crf',
              '20',
              '-sc_threshold',
              '0',
              '-g',
              '48',
              '-keyint_min',
              '48',
              '-hls_time',
              '6',
              '-hls_list_size',
              '10',
              '-hls_flags',
              'delete_segments',
              '-max_muxing_queue_size',
              '1024',
              '-start_number',
              '${timeInMilliseconds}',
              '-b:v',
              '2800k',
              '-maxrate',
              '2996k',
              '-bufsize',
              '4200k',
              '-b:a',
              '128k',
              '-hls_segment_filename',
              '${mediaroot}/${streamName}/720p/%03d.ts',
              '${mediaroot}/${streamName}/720p/index.m3u8'
            ],
            ouPaths: [
              '${mediaroot}/${streamName}/360p',
              '${mediaroot}/${streamName}/480p',
              '${mediaroot}/${streamName}/720p'
            ],
            hlsFlags: '',
            cleanup: false,
          },
        ]
      },
    };

    // Construct the NodeMediaServer
    const nms = new NodeMediaServer(config);

    // Create the maps we'll need to track the current streams.
    this.dynamicSessions = new Map();
    this.streams = new Map();

    // Start the VOD S3 file watcher and sync.
    hls.recordHls(config, this.streams);

    //
    // HLS callbacks
    //
    hls.on('newHlsStream', async (name) =&amp;gt; {
      // Create the ABR HLS playlist file.
      await abr.createPlaylist(config.http.mediaroot, name);
      // Send the "stream key" &amp;lt;-&amp;gt; "IP:PORT" mapping to Redis
      // This tells the Origin which Server has the HLS files
      await cache.set(name, SERVER_ADDRESS);
    });

    //
    // RTMP callbacks
    //
    nms.on('preConnect', (id, args) =&amp;gt; {
      logger.log('[NodeEvent on preConnect]', `id=${id} args=${JSON.stringify(args)}`);
      // Pre connect authorization
      // let session = nms.getSession(id);
      // session.reject();
    });

    nms.on('postConnect', (id, args) =&amp;gt; {
      logger.log('[NodeEvent on postConnect]', `id=${id} args=${JSON.stringify(args)}`);
    });

    nms.on('doneConnect', (id, args) =&amp;gt; {
      logger.log('[NodeEvent on doneConnect]', `id=${id} args=${JSON.stringify(args)}`);
    });

    nms.on('prePublish', (id, StreamPath, args) =&amp;gt; {
      logger.log('[NodeEvent on prePublish]', `id=${id} StreamPath=${StreamPath} args=${JSON.stringify(args)}`);
      // Pre publish authorization
      // let session = nms.getSession(id);
      // session.reject();
    });

    nms.on('postPublish', async (id, StreamPath, args) =&amp;gt; {
      logger.log('[NodeEvent on postPublish]', `id=${id} StreamPath=${StreamPath} args=${JSON.stringify(args)}`);
      if (StreamPath.indexOf('/hls/') != -1) {
        // Set the "stream key" &amp;lt;-&amp;gt; "id" mapping for this RTMP/HLS session
        // We use this when creating the DVR HLS playlist name on S3.
        const name = StreamPath.split('/').pop();
        this.streams.set(name, id);
      } else if (StreamPath.indexOf('/stream/') != -1) {
        //
        // Start Relay to youtube, facebook, and/or twitch
        //
        if (args.youtube) {
          const params = utils.getParams(args, 'youtube_');
          const query = _.isEmpty(params) ? '' : `?${querystring.stringify(params)}`;
          const url = `rtmp://a.rtmp.youtube.com/live2/${args.youtube}${query}`;
          const session = nms.nodeRelaySession({
            ffmpeg: config.relay.ffmpeg,
            inPath: `rtmp://127.0.0.1:${config.rtmp.port}${StreamPath}`,
            ouPath: url
          });
          session.id = `youtube-${id}`;
          session.on('end', (id) =&amp;gt; {
            this.dynamicSessions.delete(id);
          });
          this.dynamicSessions.set(session.id, session);
          session.run();
        }
        if (args.facebook) {
          const params = utils.getParams(args, 'facebook_');
          const query = _.isEmpty(params) ? '' : `?${querystring.stringify(params)}`;
          const url = `rtmps://live-api-s.facebook.com:443/rtmp/${args.facebook}${query}`;
          const session = nms.nodeRelaySession({
            ffmpeg: config.relay.ffmpeg,
            inPath: `rtmp://127.0.0.1:${config.rtmp.port}${StreamPath}`,
            ouPath: url
          });
          session.id = `facebook-${id}`;
          session.on('end', (id) =&amp;gt; {
            this.dynamicSessions.delete(id);
          });
          this.dynamicSessions.set(session.id, session);
          session.run();
        }
        if (args.twitch) {
          const params = utils.getParams(args, 'twitch_');
          const query = _.isEmpty(params) ? '' : `?${querystring.stringify(params)}`;
          const url = `rtmp://live-jfk.twitch.tv/app/${args.twitch}${query}`;
          const session = nms.nodeRelaySession({
            ffmpeg: config.relay.ffmpeg,
            inPath: `rtmp://127.0.0.1:${config.rtmp.port}${StreamPath}`,
            ouPath: url,
            raw: [
              '-c:v',
              'libx264',
              '-preset',
              'veryfast',
              '-c:a',
              'copy',
              '-b:v',
              '3500k',
              '-maxrate',
              '3750k',
              '-bufsize',
              '4200k',
              '-s',
              '1280x720',
              '-r',
              '30',
              '-f',
              'flv',
              '-max_muxing_queue_size',
              '1024',
            ]
          });
          session.id = `twitch-${id}`;
          session.on('end', (id) =&amp;gt; {
            this.dynamicSessions.delete(id);
          });
          this.dynamicSessions.set(session.id, session);
          session.run();
        }
      }
    });

    nms.on('donePublish', async (id, StreamPath, args) =&amp;gt; {
      logger.log('[NodeEvent on donePublish]', `id=${id} StreamPath=${StreamPath} args=${JSON.stringify(args)}`);
      if (StreamPath.indexOf('/hls/') != -1) {
        const name = StreamPath.split('/').pop();
        // Delete the Redis cache key for this stream
        await cache.del(name);
        // Wait a few minutes before deleting the HLS files on this Server
        // for this session
        const timeoutMs = _.isEqual(process.env.NODE_ENV, 'development') ?
          1000 : 
          2 * 60 * 1000;
        await utils.timeout(timeoutMs);
        if (!_.isEqual(await cache.get(name), SERVER_ADDRESS)) {
          // Only clean up if the stream isn't running.  
          // The user could have terminated then started again.
          try {
            // Cleanup directory
            logger.log('[Delete HLS Directory]', `dir=${join(config.http.mediaroot, name)}`);
            this.streams.delete(name);
            fs.rmdirSync(join(config.http.mediaroot, name), { recursive: true });
          } catch (err) {
            logger.error(err);
          }
        }
      } else if (StreamPath.indexOf('/stream/') != -1) {
        //
        // Stop the Relay's
        //
        if (args.youtube) {
          let session = this.dynamicSessions.get(`youtube-${id}`);
          if (session) {
            session.end();
            this.dynamicSessions.delete(`youtube-${id}`);
          }
        }
        if (args.facebook) {
          let session = this.dynamicSessions.get(`facebook-${id}`);
          if (session) {
            session.end();
            this.dynamicSessions.delete(`facebook-${id}`);
          }
        }
        if (args.twitch) {
          let session = this.dynamicSessions.get(`twitch-${id}`);
          if (session) {
            session.end();
            this.dynamicSessions.delete(`twitch-${id}`);
          }
        }
      }
    });

    nms.on('prePlay', (id, StreamPath, args) =&amp;gt; {
      logger.log('[NodeEvent on prePlay]', `id=${id} StreamPath=${StreamPath} args=${JSON.stringify(args)}`);
      // Pre play authorization
      // let session = nms.getSession(id);
      // session.reject();
    });

    nms.on('postPlay', (id, StreamPath, args) =&amp;gt; {
      logger.log('[NodeEvent on postPlay]', `id=${id} StreamPath=${StreamPath} args=${JSON.stringify(args)}`);
    });

    nms.on('donePlay', (id, StreamPath, args) =&amp;gt; {
      logger.log('[NodeEvent on donePlay]', `id=${id} StreamPath=${StreamPath} args=${JSON.stringify(args)}`);
    });

    // Run the NodeMediaServer
    nms.run();
  } catch (err) {
    logger.log('Can\'t start app', err);
    process.exit();
  }
};
init();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Url Structure
&lt;/h3&gt;

&lt;p&gt;When publishing to the application, your RTMP URL will look something like this.&lt;/p&gt;

&lt;p&gt;The social query params are optional.  When present, the stream is relayed to the corresponding social destination.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rtmp://rtmp.finbits.io:1935/stream/testkeyd?twitch=&amp;lt;your twitch key&amp;gt;&amp;amp;youtube=&amp;lt;your youtube key&amp;gt;&amp;amp;facebook=&amp;lt;your facebook key&amp;gt;&amp;amp;facebook_s_bl=&amp;lt;your facebook bl&amp;gt;&amp;amp;facebook_s_sc=&amp;lt;your facebook s_sc&amp;gt;&amp;amp;facebook_s_sw=&amp;lt;your facebook sw&amp;gt;&amp;amp;facebook_s_vt=&amp;lt;your facebook vt&amp;gt;&amp;amp;facebook_a=&amp;lt;your facebook a&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
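&lt;p&gt;The &lt;code&gt;utils.getParams&lt;/code&gt; helper used by the relay callbacks collects these prefixed params (e.g. &lt;code&gt;facebook_s_bl&lt;/code&gt;) and forwards them to the destination URL.  Its implementation isn't shown here; a minimal sketch of the idea:&lt;br&gt;
&lt;/p&gt;

```javascript
// Collect args whose keys start with the given prefix, stripping the prefix.
// E.g. { facebook_s_bl: 'x', youtube: 'k' } with prefix 'facebook_'
// yields { s_bl: 'x' }.  (A sketch; the real utils module may differ.)
const getParams = (args, prefix) => {
  const params = {};
  for (const [key, value] of Object.entries(args)) {
    if (key.startsWith(prefix)) {
      params[key.slice(prefix.length)] = value;
    }
  }
  return params;
};
```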



&lt;h3&gt;
  
  
  DVR
&lt;/h3&gt;

&lt;p&gt;The highest-bitrate HLS rendition (720p in this configuration) is copied to S3.&lt;/p&gt;

&lt;p&gt;The bucket path looks like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;bucket&amp;gt;/&amp;lt;stream key&amp;gt;/vod-&amp;lt;stream id&amp;gt;.m3u8&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:1590665387025
#EXTINF:6.000000,
720p/1590665387025.ts
#EXTINF:6.000000,
720p/1590665387026.ts
#EXT-X-ENDLIST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
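&lt;p&gt;The playlist above is assembled by the &lt;code&gt;hls.recordHls&lt;/code&gt; watcher as segments land on disk.  A hedged sketch of how such a playlist can be generated (&lt;code&gt;buildVodPlaylist&lt;/code&gt; is a hypothetical helper; the real &lt;code&gt;hls&lt;/code&gt; module also syncs the files to S3):&lt;br&gt;
&lt;/p&gt;

```javascript
// Build a VOD playlist for one rendition.  Segment filenames are the
// millisecond start_number values produced by ffmpeg above.
const buildVodPlaylist = (rendition, startNumber, segmentCount, targetDuration = 6) => {
  const lines = [
    '#EXTM3U',
    '#EXT-X-VERSION:3',
    `#EXT-X-TARGETDURATION:${targetDuration}`,
    `#EXT-X-MEDIA-SEQUENCE:${startNumber}`,
  ];
  for (let i = 0; i !== segmentCount; i += 1) {
    lines.push(`#EXTINF:${targetDuration.toFixed(6)},`);
    lines.push(`${rendition}/${startNumber + i}.ts`);
  }
  lines.push('#EXT-X-ENDLIST');
  return lines.join('\n');
};
```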



&lt;h3&gt;
  
  
  Stream Key Validation
&lt;/h3&gt;

&lt;p&gt;You can perform stream key validation on either the &lt;code&gt;preConnect&lt;/code&gt; or &lt;code&gt;prePublish&lt;/code&gt; RTMP events.  Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nms.on('preConnect', (id, args) =&amp;gt; {
  logger.log('[NodeEvent on preConnect]', `id=${id} args=${JSON.stringify(args)}`);
  // Pre connect authorization
  if (isInvalid) {
    let session = nms.getSession(id);
    session.reject();
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
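&lt;p&gt;&lt;code&gt;isInvalid&lt;/code&gt; above is a placeholder.  In practice you might check the stream key on &lt;code&gt;prePublish&lt;/code&gt;, where the &lt;code&gt;StreamPath&lt;/code&gt; contains the key.  A hedged sketch, assuming the set of valid keys comes from Redis or a database (&lt;code&gt;validKeys&lt;/code&gt; is a stand-in):&lt;br&gt;
&lt;/p&gt;

```javascript
// Reject a publish attempt whose stream key isn't known.
// validKeys stands in for a Redis lookup (cache.get) or a database check.
const authorizePublish = async (nms, id, StreamPath, validKeys) => {
  const key = StreamPath.split('/').pop();
  const ok = await Promise.resolve(validKeys.has(key));
  if (!ok) {
    const session = nms.getSession(id);
    session.reject();
  }
  return ok;
};
```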



&lt;h3&gt;
  
  
  NGINX
&lt;/h3&gt;

&lt;p&gt;We use NGINX as a reverse proxy and to serve the static HLS files; it serves static content with better performance than Express.js.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;worker_processes  auto;

error_log /dev/stdout info;


events {
  worker_connections  1024;
}

http {
  include       /etc/nginx/mime.types;
  default_type  application/octet-stream;

  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /dev/stdout main;

  sendfile        on;

  keepalive_timeout  65;

  gzip on;

  ignore_invalid_headers off;

  upstream node-backend {
    server localhost:8080 max_fails=0;
  }

  server {
    listen 8000;
    server_name localhost;
    sendfile off;

    location ~ live\.m3u8 {
      add_header Cache-Control "max-age=60";
      root /usr/src/app/media;
    }

    location ~ index\.m3u8 {
      add_header Cache-Control "no-cache";
      root /usr/src/app/media;
    }

    location ~ \.ts {
      add_header Cache-Control "max-age=600";
      root /usr/src/app/media;
    }

    location /nginx_status {
      stub_status on;

      access_log off;
      allow 127.0.0.1;
      deny all;
    }

    location / {
      add_header Cache-Control "no-cache";
      proxy_pass http://node-backend/;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we don't cache the per-rendition HLS playlists (&lt;code&gt;index.m3u8&lt;/code&gt;).  We do cache the ABR master playlist (&lt;code&gt;live.m3u8&lt;/code&gt;, for 60 seconds) and the &lt;code&gt;*.ts&lt;/code&gt; media segments (for 10 minutes).&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;p&gt;This entire application runs on AWS.  Before we can spin up the Proxy, Server, and Origin Fargate services we have to create some shared infrastructure.  Here is a list of the shared infrastructure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/stacks/assets.stack.yml" rel="noopener noreferrer"&gt;assets&lt;/a&gt; - S3 Bucket&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/stacks/vpc.stack.yml" rel="noopener noreferrer"&gt;vpc&lt;/a&gt; - VPC for our Fargate services&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/stacks/ecs.stack.yml" rel="noopener noreferrer"&gt;ecs&lt;/a&gt; - ECS cluster for our Fargate services&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/stacks/security.stack.yml" rel="noopener noreferrer"&gt;security&lt;/a&gt; - Security Group's for our Fargate services&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/stacks/redis.stack.yml" rel="noopener noreferrer"&gt;redis&lt;/a&gt; - A Redis cache to store the "stream key" to "IP:PORT" mapping&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/stacks/proxy-dns.stack.yml" rel="noopener noreferrer"&gt;proxy dns&lt;/a&gt; - rtmp.finbits.io DNS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can use the &lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/stacks/stack-up.sh" rel="noopener noreferrer"&gt;stack-up.sh&lt;/a&gt; script to deploy each of these stacks to your AWS account.  You'll have to change the &lt;code&gt;PROFILE="--profile bluefin"&lt;/code&gt; to match your credentials file.&lt;/p&gt;

&lt;p&gt;Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh ./stack-up.sh vpc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Service
&lt;/h2&gt;

&lt;p&gt;Now that the shared infrastructure is up, let's get the Server deployed.&lt;/p&gt;

&lt;p&gt;The Server service has 2 stacks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/server/stacks/ecr.stack.yml" rel="noopener noreferrer"&gt;ecr&lt;/a&gt; - Docker image registry&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/server/stacks/service.stack.yml" rel="noopener noreferrer"&gt;service&lt;/a&gt; - Fargate service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, create the Docker ECR registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh ./stack-up.sh ecr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can build, tag, and push the Docker image to the registry.  &lt;/p&gt;

&lt;p&gt;First update the &lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/server/package.json#L9-L13" rel="noopener noreferrer"&gt;package.json&lt;/a&gt; scripts to include your AWS account id.&lt;/p&gt;

&lt;p&gt;To build, tag, and push the Docker image to the registry, run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn run deploy &amp;lt;version&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can deploy the service stack which will deploy our new image to Fargate.&lt;/p&gt;

&lt;p&gt;First update the &lt;code&gt;Version&lt;/code&gt; &lt;a href="https://github.com/rgfindl/live-streaming-server/blob/master/server/stacks/stack-up.sh#L21" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sh ./stack-up.sh service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your Server should now be running in your ECS cluster as a Fargate task.  &lt;/p&gt;

&lt;p&gt;But... you can't access it directly.  :( &lt;/p&gt;

&lt;p&gt;We need a &lt;a href="https://dev.to/rgfindl/live-streaming-proxy-225b"&gt;Proxy&lt;/a&gt; to route RTMP publishing traffic to our fleet of Servers.  &lt;/p&gt;

&lt;p&gt;We also need an &lt;a href="https://dev.to/rgfindl/live-streaming-origin-11ka"&gt;Origin&lt;/a&gt; to route HTTP traffic to our fleet of Servers.&lt;/p&gt;

&lt;p&gt;Take a look at the next blog post in this 3-part series:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part 1 - Server&lt;/li&gt;
&lt;li&gt;Part 2 - &lt;a href="https://dev.to/rgfindl/live-streaming-proxy-225b"&gt;Proxy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Part 3 - &lt;a href="https://dev.to/rgfindl/live-streaming-origin-11ka"&gt;Origin&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Original published here: &lt;a href="https://finbits.io/blog/live-streaming-server/" rel="noopener noreferrer"&gt;https://finbits.io/blog/live-streaming-server/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>livestreaming</category>
      <category>rtmp</category>
      <category>hls</category>
      <category>ffmpeg</category>
    </item>
    <item>
      <title>Headless Wordpress CMS on AWS</title>
      <dc:creator>Randy Findley</dc:creator>
      <pubDate>Sun, 04 Nov 2018 17:26:56 +0000</pubDate>
      <link>https://dev.to/rgfindl/headless-wordpress-cms-on-aws-1nol</link>
      <guid>https://dev.to/rgfindl/headless-wordpress-cms-on-aws-1nol</guid>
      <description>&lt;h2&gt;
  
  
  Quickly spin up a headless Wordpress CMS on AWS using Docker
&lt;/h2&gt;

&lt;p&gt;Headless CMS is very popular at the moment. But what is a headless CMS and why should I start using one?&lt;/p&gt;

&lt;p&gt;A headless CMS is a backend that is &lt;strong&gt;decoupled&lt;/strong&gt; from its frontends. The backend is where the content is created and published, whereas the frontends are where the content is displayed (web, mobile apps, set-top box, Alexa, etc…).&lt;/p&gt;

&lt;p&gt;For example, a traditional CMS is a single website. The same website is used to add the content as it is to display the content. The backend and frontend are &lt;strong&gt;coupled&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A headless CMS is only used to create and publish content. That content is then available through an API. The website or mobile apps that display the content are separate.&lt;/p&gt;

&lt;p&gt;But why is decoupled better? Here are some of the reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster and more flexible content delivery than traditional CMS&lt;/li&gt;
&lt;li&gt;Resiliency in the face of changes on the user interface side (future-proof)&lt;/li&gt;
&lt;li&gt;Rapid design iterations&lt;/li&gt;
&lt;li&gt;Enhanced security&lt;/li&gt;
&lt;li&gt;Fewer publisher and developer dependencies&lt;/li&gt;
&lt;li&gt;Simpler deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;*** Thanks for the list &lt;a href="https://www.brightspot.com/blog/decoupled-cms-and-headless-cms-platforms" rel="noopener noreferrer"&gt;Brightspot&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now that we all emphatically agree that headless CMS is the way to go, let’s take a look at these awesome &lt;a href="https://github.com/rgfindl/headless-wordpress" rel="noopener noreferrer"&gt;CloudFormation templates&lt;/a&gt; I created to help you spin up and manage your headless (Wordpress) CMS.&lt;/p&gt;

&lt;p&gt;To be clear, this is a Wordpress installation running on AWS using Infrastructure as Code and Docker.&lt;/p&gt;

&lt;p&gt;Wait… Why use Wordpress and not one of these cool new headless CMS services like &lt;a href="https://contently.com/" rel="noopener noreferrer"&gt;Contently&lt;/a&gt; or &lt;a href="https://cosmicjs.com/" rel="noopener noreferrer"&gt;Cosmic JS&lt;/a&gt;? Well, these services are really great but they cost a lot of money, and I usually like to run everything myself, if I can help it. And… Wordpress is really good at managing content.&lt;/p&gt;

&lt;p&gt;But how can Wordpress be a headless CMS? Easy, it has an API.&lt;/p&gt;

&lt;p&gt;The trick is making Wordpress stateless so that it can autoscale, and run on AWS, with zero-downtime deployments.&lt;/p&gt;

&lt;p&gt;Ok, headless CMS is cool, Wordpress is pretty cool, let's continue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rgfindl/headless-wordpress" rel="noopener noreferrer"&gt;https://github.com/rgfindl/headless-wordpress&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Architecture
&lt;/h1&gt;

&lt;p&gt;Our Wordpress instance is running as an &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;Elastic Container Service&lt;/a&gt; using Docker.&lt;/p&gt;

&lt;p&gt;First, we have the &lt;a href="https://aws.amazon.com/vpc/" rel="noopener noreferrer"&gt;VPC&lt;/a&gt;, then an &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;ECS&lt;/a&gt; cluster, then an EC2 instance, and finally an ECS service with tasks. Our Wordpress service is exposed to the world via an &lt;a href="https://aws.amazon.com/elasticloadbalancing/" rel="noopener noreferrer"&gt;Elastic Load Balancer&lt;/a&gt;. We’re using &lt;a href="https://aws.amazon.com/rds/" rel="noopener noreferrer"&gt;RDS&lt;/a&gt; as our MySQL database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rgfindl/headless-wordpress/raw/master/diagram1.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4uw8pzh875ifhizf99x.png" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Wordpress service is stateless, which means we can’t rely on the file system to store content, like Wordpress Media or Plugins. Every time an instance of our Wordpress service is spawned it will only have the files that are baked into our Docker image.&lt;/p&gt;

&lt;p&gt;Let’s take a look at how we handle Wordpress Media first.&lt;/p&gt;

&lt;p&gt;We use a plugin called &lt;a href="https://wordpress.org/plugins/amazon-s3-and-cloudfront/" rel="noopener noreferrer"&gt;WP Offload Media&lt;/a&gt;. This plugin allows us to store the Media in &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;S3&lt;/a&gt; and use &lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;CloudFront&lt;/a&gt; as the CDN. Take a look at the diagram below. We also use the same CDN to cache the Wordpress API…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/rgfindl/headless-wordpress/raw/master/diagram2.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0q6hdr227fdxsu6nj37d.png" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now how do we handle the Plugins? (We can ignore Templates because this is headless 😃)&lt;/p&gt;

&lt;p&gt;Remember when I talked about baking things into our Docker image? That’s it… We have to include the Plugins in our Docker image. Let’s take a look at that Dockerfile and go through it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM wordpress:latest

# Install unzip
RUN apt-get update; \
    apt-get install -y --no-install-recommends unzip

# Install WP plugins
RUN curl -L https://downloads.wordpress.org/plugin/amazon-s3-and-cloudfront.2.0.zip -o /tmp/amazon-s3-and-cloudfront.2.0.zip
RUN unzip /tmp/amazon-s3-and-cloudfront.2.0.zip -d /usr/src/wordpress/wp-content/plugins
RUN rm /tmp/amazon-s3-and-cloudfront.2.0.zip

RUN curl -L https://downloads.wordpress.org/plugin/advanced-custom-fields.5.7.7.zip -o /tmp/advanced-custom-fields.5.7.7.zip
RUN unzip /tmp/advanced-custom-fields.5.7.7.zip -d /usr/src/wordpress/wp-content/plugins
RUN rm /tmp/advanced-custom-fields.5.7.7.zip

RUN curl -L https://downloads.wordpress.org/plugin/custom-post-type-ui.1.5.8.zip -o /tmp/custom-post-type-ui.1.5.8.zip
RUN unzip /tmp/custom-post-type-ui.1.5.8.zip -d /usr/src/wordpress/wp-content/plugins
RUN rm /tmp/custom-post-type-ui.1.5.8.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see our Dockerfile is really simple. It extends the latest Wordpress image and then installs 3 Plugins. It downloads, unzips, and copies each plugin to the &lt;code&gt;wordpress/wp-content/plugins&lt;/code&gt; directory. When you launch your Wordpress site for the first time you’ll have to activate these plugins. The activation status is stored in MySQL, so you won’t have to do it every time your ECS tasks recycle.&lt;/p&gt;

&lt;h1&gt;
  
  
  Installation
&lt;/h1&gt;

&lt;p&gt;Alright, let’s get this architecture installed. First a few prerequisites.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Install the following prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://console.aws.amazon.com/ec2/v2/home" rel="noopener noreferrer"&gt;EC2 Key Pair&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;cim - (&lt;code&gt;npm install -g cim&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/installing.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Stacks
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;ECS&lt;/li&gt;
&lt;li&gt;RDS&lt;/li&gt;
&lt;li&gt;ECR&lt;/li&gt;
&lt;li&gt;Wordpress&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  VPC
&lt;/h3&gt;

&lt;p&gt;This creates the &lt;a href="https://aws.amazon.com/vpc/" rel="noopener noreferrer"&gt;Amazon Virtual Private Cloud&lt;/a&gt; that our ECS cluster and RDS database will run in.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd vpc
cim stack-up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ECS
&lt;/h3&gt;

&lt;p&gt;This creates an &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;Elastic Container Service&lt;/a&gt; that our EC2's will run in.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd vpc
cim stack-up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  RDS
&lt;/h3&gt;

&lt;p&gt;This creates a &lt;a href="https://aws.amazon.com/rds/" rel="noopener noreferrer"&gt;Relational Database Service&lt;/a&gt; database cluster that our Wordpress application will use.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd rds
export DatabaseUsername="???"; export DatabasePassword="???"; cim stack-up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ECR
&lt;/h3&gt;

&lt;p&gt;This creates an &lt;a href="https://aws.amazon.com/ecr/" rel="noopener noreferrer"&gt;Elastic Container Registry&lt;/a&gt; that will hold the docker images of our wordpress service.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Elastic Container Registry (ECR) is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ecr
cim stack-up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Wordpress
&lt;/h3&gt;

&lt;p&gt;Before we can launch this CloudFormation stack, we need to push our service image to ECR.&lt;/p&gt;

&lt;h4&gt;
  
  
  Push Image
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd wordpress/src
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;a href="http://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html#registry_auth" rel="noopener noreferrer"&gt;Registry Authentication&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;aws ecr get-login --registry-ids &amp;lt;account-id&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;copy/paste the output to perform the docker login; also append &lt;code&gt;/headless-wp&lt;/code&gt; to the repository url.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Build Image

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker build -t headless-wp:&amp;lt;version&amp;gt; .&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;a href="http://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html" rel="noopener noreferrer"&gt;Push Image&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;docker tag headless-wp:&amp;lt;version&amp;gt; &amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/headless-wp:latest&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker tag headless-wp:&amp;lt;version&amp;gt; &amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/headless-wp:&amp;lt;version&amp;gt;&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker push &amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/headless-wp&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
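&lt;p&gt;Putting the steps above together, the naming scheme looks like this. The account id, region, and version here are placeholders; substitute your own values. The docker commands are shown (echoed) rather than executed so you can review them first.&lt;/p&gt;

```shell
# Sketch of the ECR tag/push naming scheme; values are placeholders.
ACCOUNT_ID="123456789012"
REGION="us-east-1"
VERSION="1.0.0"

# Repository URI = <account-id>.dkr.ecr.<region>.amazonaws.com/<repo-name>
REPO="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/headless-wp"

# The commands you would actually run (shown, not executed here):
echo "docker build -t headless-wp:${VERSION} ."
echo "docker tag headless-wp:${VERSION} ${REPO}:latest"
echo "docker tag headless-wp:${VERSION} ${REPO}:${VERSION}"
echo "docker push ${REPO}"
```

&lt;p&gt;Tagging both &lt;code&gt;latest&lt;/code&gt; and the explicit version means the Task Definition can pin a specific build while &lt;code&gt;latest&lt;/code&gt; always points at the newest push.&lt;/p&gt;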

&lt;h4&gt;
  
  
  Update Version
&lt;/h4&gt;

&lt;p&gt;Make sure the &lt;code&gt;Version&lt;/code&gt; parameter in _cim.yml matches the &lt;code&gt;version&lt;/code&gt; tag from above. The ECS Task Definition will pull that image from ECR.&lt;/p&gt;
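&lt;p&gt;As a rough illustration only (check your own file for the exact layout; these field names are assumptions about the CIM stack config, not copied from the repo), the relevant part of &lt;code&gt;_cim.yml&lt;/code&gt; would look something like:&lt;/p&gt;

```yaml
# Illustrative _cim.yml fragment -- layout and names are assumptions.
stack:
  name: wordpress
  template:
    file: wp.stack.yml
  parameters:
    Version: '1.0.0'   # must match the version tag pushed to ECR above
```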

&lt;h4&gt;
  
  
  Stack up
&lt;/h4&gt;

&lt;p&gt;Once the &lt;code&gt;Version&lt;/code&gt; is set you can use &lt;code&gt;cim stack-up&lt;/code&gt; to update the stack with the new version.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd wordpress
cim stack-up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations, your new Wordpress site is now available.  &lt;/p&gt;

&lt;p&gt;First run through the Wordpress setup wizard.&lt;/p&gt;

&lt;p&gt;Next enable some of the plugins we added.&lt;/p&gt;

&lt;p&gt;Add a few blog posts and pages.&lt;/p&gt;

&lt;p&gt;Then check out the API. Ex: &lt;code&gt;https://&amp;lt;cdn-url&amp;gt;/wp-json/wp/v2/posts&lt;/code&gt;&lt;/p&gt;
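&lt;p&gt;A quick hypothetical smoke test of the REST API; &lt;code&gt;CDN_URL&lt;/code&gt; below is a placeholder, so substitute your actual CloudFront distribution or custom domain:&lt;/p&gt;

```shell
# CDN_URL is a placeholder -- replace with your CloudFront URL.
CDN_URL="https://example.cloudfront.net"

# Standard WP REST API v2 endpoints:
POSTS_ENDPOINT="${CDN_URL}/wp-json/wp/v2/posts"
PAGES_ENDPOINT="${CDN_URL}/wp-json/wp/v2/pages"

echo "${POSTS_ENDPOINT}"
# e.g.: curl -s "${POSTS_ENDPOINT}" | jq '.[].title.rendered'
```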

&lt;h1&gt;
  
  
  Environment Variables
&lt;/h1&gt;

&lt;p&gt;Congratulations on getting your Headless Wordpress installed. If you got stuck anywhere along the way please don’t hesitate to reach out to me for help.&lt;/p&gt;

&lt;p&gt;One thing I want to explain is the Wordpress environment variables because they really tie everything together. They tell our Wordpress installation about the RDS database, the Media S3 bucket, and CloudFront CDN URL. Let’s take a look. These can be found in the Wordpress stack’s &lt;a href="https://github.com/rgfindl/headless-wordpress/blob/master/wordpress/wp.stack.yml#L233" rel="noopener noreferrer"&gt;AWS::ECS::TaskDefinition&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Environment:
  - Name: AWS_REGION
    Value: !Ref AWS::Region
  - Name: AWS_ACCOUNT_ID
    Value: !Ref AWS::AccountId
  - Name: WORDPRESS_DB_HOST
    Value:
      Fn::ImportValue:
        !Sub "${RDSStack}-RDSClusterEndpoint"
  - Name: WORDPRESS_DB_USER
    Value:
      Fn::ImportValue:
        !Sub "${RDSStack}-DatabaseUsername"
  - Name: WORDPRESS_DB_PASSWORD
    Value:
      Fn::ImportValue:
        !Sub "${RDSStack}-DatabasePassword"
  - Name: WORDPRESS_CONFIG_EXTRA
    Value: !Sub
      - |
        define( 'AS3CF_AWS_USE_EC2_IAM_ROLE', true );
        define( 'AS3CF_SETTINGS', serialize( array(
          'bucket' =&amp;gt; '${MediaBucket}',
          'copy-to-s3' =&amp;gt; true,
          'serve-from-s3' =&amp;gt; true,
          'domain' =&amp;gt; 'cloudfront',
          'cloudfront' =&amp;gt; '${DomainName}',
          'enable-object-prefix' =&amp;gt; true,
          'object-prefix' =&amp;gt; 'wp-content/uploads/',
          'force-https' =&amp;gt; ${ForceHttps},
          'remove-local-file' =&amp;gt; true
        ) ) );
        define( 'WP_HOME', '${CMSUrl}' );
        define( 'WP_SITEURL', '${CMSUrl}' );
      - {
        MediaBucket: !Ref MediaBucket,
        DomainName: !If [UseCustomDomain, !Ref Domain, !GetAtt CDN.DomainName],
        ForceHttps: !If [UseCustomDomain, 'true', 'false'],
        CMSUrl: !If [UseCustomDomain, { "Fn::ImportValue" : {"Fn::Sub": "${ECSStack}-CustomDomainUrl" } }, {"Fn::Sub": ["http://${Url}", {"Url": { "Fn::ImportValue" : {"Fn::Sub": "${ECSStack}-LoadBalancerUrl" } }}]}]
        }
  - Name: WORDPRESS_DEBUG
    Value: '1'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The WORDPRESS_DB_* variables come straight from the RDS stack. CloudFormation lets you export Output variables, which can then be imported by other stacks.&lt;/p&gt;
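&lt;p&gt;As a sketch of that Export/ImportValue pattern (the resource name here is illustrative, not copied from the repo), the RDS stack exports a value and the Wordpress stack imports it by the exported name:&lt;/p&gt;

```yaml
# Illustrative Export/ImportValue pair; resource names are assumptions.

# In the RDS stack's Outputs section:
Outputs:
  RDSClusterEndpoint:
    Value: !GetAtt RDSCluster.Endpoint.Address
    Export:
      Name: !Sub "${AWS::StackName}-RDSClusterEndpoint"

# In the Wordpress stack, reference it by the exported name,
# as the Task Definition above does:
#   Fn::ImportValue: !Sub "${RDSStack}-RDSClusterEndpoint"
```

&lt;p&gt;Note that an export name must be unique per region, and a stack whose exports are in use cannot be deleted until the importing stacks stop referencing them.&lt;/p&gt;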

&lt;p&gt;The WORDPRESS_CONFIG_EXTRA variable is where we configure the &lt;a href="https://wordpress.org/plugins/amazon-s3-and-cloudfront/" rel="noopener noreferrer"&gt;WP Offload Media&lt;/a&gt; plugin. First we tell it to use our Task Role, &lt;a href="https://github.com/rgfindl/headless-wordpress/blob/master/wordpress/wp.stack.yml#L208" rel="noopener noreferrer"&gt;AWS::IAM::Role&lt;/a&gt;, via the AS3CF_AWS_USE_EC2_IAM_ROLE var. Then we use the AS3CF_SETTINGS var to set up the plugin.&lt;/p&gt;

&lt;p&gt;Thanks for reading. I hope you enjoyed it!&lt;/p&gt;

</description>
      <category>cms</category>
      <category>aws</category>
      <category>docker</category>
      <category>wordpress</category>
    </item>
  </channel>
</rss>
