<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alex Vallejo</title>
    <description>The latest articles on DEV Community by Alex Vallejo (@seojeek).</description>
    <link>https://dev.to/seojeek</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F32077%2F065121d1-e1ef-4de2-8249-b6a187546876.jpeg</url>
      <title>DEV Community: Alex Vallejo</title>
      <link>https://dev.to/seojeek</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/seojeek"/>
    <language>en</language>
    <item>
      <title>Google Managed SSL Certificates on Kubernetes</title>
      <dc:creator>Alex Vallejo</dc:creator>
      <pubDate>Mon, 07 Mar 2022 22:53:48 +0000</pubDate>
      <link>https://dev.to/seojeek/google-managed-ssl-certificates-on-kubernetes-11po</link>
      <guid>https://dev.to/seojeek/google-managed-ssl-certificates-on-kubernetes-11po</guid>
      <description>&lt;h1&gt;
  
  
  Google Managed SSL Certificates on Kubernetes
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;: This overview provides a straightforward path for installing Google-managed SSL certificates on your GKE-hosted application. It assumes you've created a &lt;a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app#step_5_deploy_your_application" rel="noopener noreferrer"&gt;Deployment&lt;/a&gt; which runs your uploaded Docker image. It also assumes you have the &lt;a href="https://cloud.google.com/pubsub/docs/quickstart-cli" rel="noopener noreferrer"&gt;gcloud&lt;/a&gt; command-line tool installed, as we'll use it to perform our network configuration right from the terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs78%2Fsh%2F85843c49-cc6e-464a-9538-33eefa7d3710%2F5c85ed86b443773886445630eb8e8c8b%2Fres%2F7ba04be3-4aaf-4cfb-b849-ffeb582a7343" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.evernote.com%2Fshard%2Fs78%2Fsh%2F85843c49-cc6e-464a-9538-33eefa7d3710%2F5c85ed86b443773886445630eb8e8c8b%2Fres%2F7ba04be3-4aaf-4cfb-b849-ffeb582a7343" alt="Alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SSL certificate deployments can range from a simple &lt;a href="https://letsencrypt.org/getting-started/" rel="noopener noreferrer"&gt;certbot&lt;/a&gt; setup to a managed wildcard certificate with manual installation. For a Google Cloud-hosted application on Kubernetes, you can certainly install and manage your own certificates through the platform, or you can use a Google-managed SSL certificate, which handles provisioning and auto-renewal for you. It's actually extremely easy to do:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;managed-cert.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;networking.gke.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ManagedCertificate&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myCert&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;domains&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;myDomain.com&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the cert&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f managed-cert.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note that the only downside is that each Google-managed SSL certificate permits a single domain name (no wildcards).&lt;/p&gt;

&lt;p&gt;That's it, your SSL certificate is now registered with a domain in Google Cloud. Next we'll attach the certificate to an Ingress, which will route traffic for our domain. We use an Ingress object to define rules for routing HTTP and HTTPS traffic; it essentially creates an HTTPS load balancer that routes all our traffic to the appropriate services.&lt;/p&gt;

&lt;p&gt;We'll eventually want our domain hitting a static IP address so we'll reserve one and name it something we can reference:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gcloud compute addresses create myApp-ip --global&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can reference this IP through &lt;code&gt;gcloud compute addresses describe myApp-ip --global&lt;/code&gt; or you can navigate to &lt;code&gt;VPC Console / External IP addresses&lt;/code&gt; and find the IP listed as Static. You can now point your DNS A record to this IP address; however, we'll still need to create an Ingress object to map our HTTP and HTTPS traffic.&lt;/p&gt;

&lt;p&gt;Before we create our Ingress, we'll create a NodePort, which provides a gateway port between our public-facing Ingress controller and our cluster's application. A NodePort is a Kubernetes &lt;em&gt;Service&lt;/em&gt; type that exposes our application's pods on a port reachable from within the cluster. Depending on what port our application is listening on, we can map it to our Ingress via a NodePort. Our NodePort can map directly to our Workload and the cluster will autoscale accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;nodeport.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myApp-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
    &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myApp-workload&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myApp-port&lt;/span&gt;
          &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
          &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
          &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the NodePort&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f nodeport.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;targetPort&lt;/code&gt; is whatever port our application is listening on. Because our Ingress will route traffic on port 80, we map port 80 to &lt;code&gt;targetPort&lt;/code&gt; 5000 accordingly. Lastly, we'll configure the Ingress object which will tie this all together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ingress.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;extensions/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Ingress&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myApp-ingress&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/ingress.global-static-ip-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;showcase-mde-static&lt;/span&gt;
    &lt;span class="na"&gt;networking.gke.io/managed-certificates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;moviedecisionengine&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myDomain.com&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
            &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myApp-service&lt;/span&gt;
              &lt;span class="na"&gt;servicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myApp-port&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;kubectl apply -f ingress.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This is exciting. We've deployed our Ingress object and we're ready to check the provisioning status of our SSL certificate. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl describe managedcertificate myCert&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It may take up to 15 minutes for our SSL certificate to be provisioned on the server. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>NGINX Ingress on Kubernetes</title>
      <dc:creator>Alex Vallejo</dc:creator>
      <pubDate>Mon, 07 Mar 2022 22:41:21 +0000</pubDate>
      <link>https://dev.to/seojeek/nginx-ingress-on-kubernetes-34ac</link>
      <guid>https://dev.to/seojeek/nginx-ingress-on-kubernetes-34ac</guid>
      <description>&lt;h1&gt;
  
  
  NGINX Ingress on Kubernetes
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.nginx.com/products/nginx/kubernetes-ingress-controller/"&gt;https://www.nginx.com/products/nginx/kubernetes-ingress-controller/&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By default, pods of Kubernetes services are not accessible from the external network, but only by other pods within the Kubernetes cluster. Kubernetes has a built‑in configuration for HTTP load balancing, called Ingress, that defines rules for external connectivity to Kubernetes services. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Ingress &lt;strong&gt;controller&lt;/strong&gt; can then automatically program a frontend load balancer to enable Ingress configuration. The NGINX Ingress Controller for Kubernetes is what enables Kubernetes to configure NGINX and NGINX Plus for load balancing Kubernetes services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy Ingress NGINX
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.github.io/ingress-nginx/deploy/"&gt;https://kubernetes.github.io/ingress-nginx/deploy/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, we'll need to initialize our user as a cluster-admin with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can deploy to our cluster via:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Running a check on the service might show a Pending status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alexvallejo:showcase-mde$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-7b887b65b7-d99kh   0/1     Pending   0          22m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A quick Google search turns up a potential fix that should address that:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubernetes/ingress-nginx/issues/4775#issuecomment-558381452"&gt;https://github.com/kubernetes/ingress-nginx/issues/4775#issuecomment-558381452&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl label node --all kubernetes.io/os=linux&lt;/code&gt;&lt;/p&gt;
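&lt;p&gt;A quick way to verify the fix took effect is to list the node labels and re-check the controller pod; these are generic kubectl checks, not from the original post:&lt;/p&gt;

```shell
# Confirm the os label is now present on every node
kubectl get nodes --show-labels
# The controller pod should move out of Pending shortly afterwards
kubectl get pods -n ingress-nginx
```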

&lt;p&gt;&lt;a href="https://blog.rcmmd.com/www-to-non-www-with-kubernetes/"&gt;https://blog.rcmmd.com/www-to-non-www-with-kubernetes/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To redirect www traffic to the bare domain, add the following annotations to the Ingress service yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;nginx.ingress.kubernetes.io/ssl-redirect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;

      &lt;span class="s"&gt;// this is the required part&lt;/span&gt;
      &lt;span class="na"&gt;nginx.ingress.kubernetes.io/from-to-www-redirect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More Resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://devopscube.com/setup-ingress-kubernetes-nginx-controller/"&gt;https://devopscube.com/setup-ingress-kubernetes-nginx-controller/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke"&gt;https://cloud.google.com/community/tutorials/nginx-ingress-gke&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/google-cloud/setting-up-google-cloud-with-kubernetes-nginx-ingress-and-lets-encrypt-certmanager-bf134b7e406e"&gt;https://medium.com/google-cloud/setting-up-google-cloud-with-kubernetes-nginx-ingress-and-lets-encrypt-certmanager-bf134b7e406e&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/48763805/does-gke-support-nginx-ingress-with-static-ip"&gt;https://stackoverflow.com/questions/48763805/does-gke-support-nginx-ingress-with-static-ip&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>ingress</category>
      <category>nginx</category>
    </item>
    <item>
      <title>Phoenix deploys with Elixir 1.9 with systemd (no Docker)</title>
      <dc:creator>Alex Vallejo</dc:creator>
      <pubDate>Thu, 21 May 2020 16:50:05 +0000</pubDate>
      <link>https://dev.to/seojeek/phoenix-deploys-with-elixir-1-9-with-systemd-no-docker-1od0</link>
      <guid>https://dev.to/seojeek/phoenix-deploys-with-elixir-1-9-with-systemd-no-docker-1od0</guid>
      <description>&lt;p&gt;Elixir 1.9 was released in June 2019 and with it came the exciting feature of &lt;strong&gt;releases&lt;/strong&gt;, which allows for the compilation of your Phoenix application into a releases directory that you can run your application on through a executable command. &lt;/p&gt;

&lt;p&gt;The purpose of this article is to deploy a Phoenix app to a server without relying on Docker containers or third-party hosting platforms (e.g. Heroku, Gigalixir). Prerequisites: You should be able to ssh into your host server with sudo access, and be comfortable setting environment variables and navigating a Unix environment.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Prior to 1.9, Elixir apps relied on some additional programs to compile the app for deployment. Distillery would be used for the compilation of the application, and could be used in conjunction with &lt;a href="https://github.com/edeliver/edeliver"&gt;edeliver&lt;/a&gt; to ssh into the target host server, run the build, download the build locally, and then deploy and run the app on the same server.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Testing our release locally
&lt;/h4&gt;

&lt;p&gt;The documentation is pretty robust for &lt;a href="https://hexdocs.pm/phoenix/up_and_running.html#content"&gt;bootstrapping&lt;/a&gt; a Phoenix app with Elixir 1.9. The &lt;a href="https://hexdocs.pm/phoenix/deployment.html"&gt;deployment&lt;/a&gt; documentation also sets us up for &lt;a href="https://hexdocs.pm/phoenix/releases.html"&gt;releasing&lt;/a&gt; our app in staging and production environments. These steps are quite important and can allow for the deployment of not just our application but also: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handling environment variables at runtime&lt;/li&gt;
&lt;li&gt;Environment-specific configurations (&lt;code&gt;dev.exs&lt;/code&gt; and &lt;code&gt;prod.exs&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Running database migrations every time we build a release&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm not going to get into the details of writing our config files because these things are pretty well documented. When our configurations are all set up for production, we should be able to run the following and see the normal runtime server log and make sure the app is accessible at &lt;code&gt;localhost:4000&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;MIX_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod mix release
_build/prod/rel/my_app/bin/my_app start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Assuming this successfully runs locally, we can then move on to our remote host setup!&lt;/p&gt;

&lt;h4&gt;
  
  
  Remote Setup
&lt;/h4&gt;

&lt;p&gt;We'll want to perform the following steps on our remote server to make sure we can start pulling code from our git repo and build the executable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install postgres if not installed and create a database for your application (if not stateless).&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;deploy&lt;/code&gt; user (&lt;code&gt;adduser deploy&lt;/code&gt;) that handles both ssh connections from our local machine and access to our git repository. Add ssh keys to &lt;code&gt;/home/deploy/.ssh/authorized_keys&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Establish a directory that our build will live in, let's say &lt;code&gt;/home/deploy/myApp&lt;/code&gt; and pull down the latest code to that directory.&lt;/li&gt;
&lt;li&gt;Install Elixir 1.9 and Erlang on our remote server; the erlang-solutions package only registers the repo, so we also need to install from it:
&lt;code&gt;wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb &amp;amp;&amp;amp; sudo dpkg -i erlang-solutions_1.0_all.deb &amp;amp;&amp;amp; sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y esl-erlang elixir&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;mix&lt;/code&gt; to make sure Elixir is installed properly&lt;/li&gt;
&lt;li&gt;Navigate to our application directory, git pull our phoenix app, and build a release:
&lt;code&gt;MIX_ENV=prod mix release&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test the release out by running &lt;code&gt;_build/prod/rel/myApp/bin/myApp start&lt;/code&gt;. Is it running as normal? Can you navigate to &lt;code&gt;yourDomain:4000&lt;/code&gt; and see your app? If yes, great! If not, try to work out what is wrong with the configuration. Is the port open on your server? There could be a host (pun) of reasons why our server isn't serving our app correctly, but that's outside the scope of this tutorial.&lt;/li&gt;
&lt;/ul&gt;
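&lt;p&gt;Taken together, the remote setup above can be sketched as a single shell session; this is a minimal sketch assuming a Debian/Ubuntu host and the &lt;code&gt;myApp&lt;/code&gt; paths used in this article:&lt;/p&gt;

```shell
# Install Erlang and Elixir from the erlang-solutions package repo
wget https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb
sudo dpkg -i erlang-solutions_1.0_all.deb
sudo apt-get update
sudo apt-get install -y esl-erlang elixir

# Pull the app and build a release as the deploy user
cd /home/deploy/myApp
git pull
MIX_ENV=prod mix release

# Smoke-test the release in the foreground
_build/prod/rel/myApp/bin/myApp start
```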

&lt;p&gt;Now that our application can successfully run, we need a way to make it run &lt;strong&gt;in the background&lt;/strong&gt;!! The documentation assumes we can just run a Docker container which is suitable for a lot of use cases, particularly when it comes to scaling, or building clusters via Kubernetes, etc. However, it should be perfectly suitable to build and host a Phoenix app without that isolated environment. You shouldn't &lt;em&gt;need&lt;/em&gt; Docker containers to run your app! So let's continue.&lt;/p&gt;

&lt;h4&gt;
  
  
  Build script
&lt;/h4&gt;

&lt;p&gt;The next step I highly recommend for Phoenix deploys is writing a build script. The documentation briefly covers this through the perspective of a &lt;a href="https://hexdocs.pm/phoenix/releases.html#containers"&gt;Docker file&lt;/a&gt; but we can break this down into a simple bash script that we can run whenever we want to run a release on our remote server.&lt;/p&gt;

&lt;pre&gt;
#!/usr/bin/env bash
# exit on error
set -o errexit

# Initial setup
mix deps.get --only prod
MIX_ENV=prod mix compile

# Compile assets
npm install --prefix ./assets
npm run deploy --prefix ./assets
mix phx.digest

# Build the release and overwrite the existing release directory
MIX_ENV=prod mix release --overwrite

# Perform any migrations necessary
_build/prod/rel/myApp/bin/myApp eval "MyApp.Release.migrate"
&lt;/pre&gt;

&lt;p&gt;This is pretty straightforward. All we're doing is installing our mix dependencies, installing our npm packages, building our release, and running any migrations needed. Depending on our CI integration we can adapt this script to run a git pull beforehand, or adjust it to however we want CI to be handled.&lt;/p&gt;
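&lt;p&gt;The build script's last line calls &lt;code&gt;MyApp.Release.migrate&lt;/code&gt;, which isn't defined anywhere above. A minimal sketch of that module, following the convention in the Phoenix releases guide (the module and app names are assumptions matching the &lt;code&gt;eval&lt;/code&gt; call), might look like:&lt;/p&gt;

```elixir
# lib/my_app/release.ex -- runs pending Ecto migrations from a release,
# where no Mix tooling is available.
defmodule MyApp.Release do
  @app :my_app

  def migrate do
    # Load the application so its config (including :ecto_repos) is available
    Application.load(@app)

    @app
    |> Application.fetch_env!(:ecto_repos)
    |> Enum.each(fn repo ->
      # Start each repo just long enough to run its up-migrations
      {:ok, _, _} =
        Ecto.Migrator.with_repo(repo, fn r -> Ecto.Migrator.run(r, :up, all: true) end)
    end)
  end
end
```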

&lt;h4&gt;
  
  
  systemd
&lt;/h4&gt;

&lt;p&gt;And this brings us to &lt;code&gt;systemd&lt;/code&gt;. While the &lt;a href="https://www.linux.com/tutorials/understanding-and-using-systemd/"&gt;history&lt;/a&gt; of systemd is somewhat controversial, it has been the de facto system and service manager for Linux distributions since 2014 and is super easy to set up and initialize our application with.&lt;/p&gt;

&lt;p&gt;For the basic application I want to set up, it accomplishes three critical things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the application in the background&lt;/li&gt;
&lt;li&gt;Auto initializes the application if the server reboots&lt;/li&gt;
&lt;li&gt;Provides logs for debugging and reference&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm sure the use cases for a more complex configuration are out there, but my goal right now is limited to the above.&lt;/p&gt;

&lt;p&gt;There actually is a &lt;a href="https://github.com/cogini/mix_systemd"&gt;mix library&lt;/a&gt; that allows you to generate a systemd unit file for your Phoenix application but it's simple enough that you can write one from scratch or copy and paste the following to &lt;code&gt;/etc/systemd/system/myapp.service&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;
[Unit]
Description=myApp service
After=local-fs.target network.target

[Service]
Type=simple
User=deploy
Group=deploy
WorkingDirectory=/home/deploy/build/myApp/_build/prod/rel/myApp
ExecStart=/home/deploy/build/myApp/_build/prod/rel/myApp/bin/myApp start
ExecStop=/home/deploy/build/myApp/_build/prod/rel/myApp/bin/myApp stop
EnvironmentFile=/etc/default/myApp.env
Environment=LANG=en_US.utf8
Environment=MIX_ENV=prod


Environment=PORT=4000
LimitNOFILE=65535
UMask=0027
SyslogIdentifier=myApp
Restart=always


[Install]
WantedBy=multi-user.target
&lt;/pre&gt;

&lt;p&gt;We'll want to additionally create a &lt;code&gt;myApp.env&lt;/code&gt; file at &lt;code&gt;/etc/default&lt;/code&gt; so that our service can use runtime environment variables for our application. In my file I simply have the following:&lt;/p&gt;

&lt;pre&gt;
PORT=4000
HOSTNAME="myApp.io"
SECRET_KEY_BASE="[output of mix phx.gen.secret]"
DATABASE_URL="ecto://postgres:password@myApp.io/[dbName]"
&lt;/pre&gt;

&lt;p&gt;Now that our &lt;code&gt;systemd&lt;/code&gt; is configured, we can start our service. Here are some useful commands to play around with:&lt;/p&gt;

&lt;p&gt;After making changes to the systemd unit files:&lt;br&gt;
&lt;code&gt;sudo systemctl daemon-reload&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Start your service:&lt;br&gt;
&lt;code&gt;sudo systemctl start myapp.service&lt;/code&gt;&lt;/p&gt;
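&lt;p&gt;One more command worth knowing: &lt;code&gt;start&lt;/code&gt; alone doesn't survive a reboot, so to get the auto-initialize-on-reboot behavior listed earlier, enable the unit as well:&lt;/p&gt;

```shell
# Register the unit to start automatically at boot
sudo systemctl enable myapp.service
```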

&lt;p&gt;List systemd services:&lt;br&gt;
&lt;code&gt;systemctl list-units --type=service&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Check the status of your service:&lt;br&gt;
&lt;code&gt;systemctl status myapp.service&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This one is key because it will tell you if the service was able to start successfully. If it has, your app should be up and running! If your app isn't running properly, we can check the logs via these commands:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo journalctl -f -u myapp.service&lt;/code&gt;&lt;br&gt;
&lt;code&gt;sudo journalctl -u myapp.service --since today&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;journalctl&lt;/code&gt; seems to be a very flexible logging tool as well: &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs"&gt;https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;h4&gt;
  
  
  Environment Variables
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;systemd&lt;/code&gt; scripts are normally executed at the root level, so the env vars need to be accessible by root. Originally I had my &lt;code&gt;deploy@dayoff.io&lt;/code&gt; env vars set for the &lt;code&gt;deploy&lt;/code&gt; user but my &lt;code&gt;systemd&lt;/code&gt; didn't have access to them. &lt;/p&gt;

&lt;p&gt;To set my env vars for runtime, I placed them in &lt;code&gt;/etc/default/dayoff.env&lt;/code&gt;, which included my &lt;code&gt;PORT&lt;/code&gt;, &lt;code&gt;DATABASE_URL&lt;/code&gt;, and &lt;code&gt;SECRET_KEY_BASE&lt;/code&gt;. I'm still a little bit uncertain if you need your vars accessible at the build directory by the &lt;code&gt;deploy&lt;/code&gt; user so you may additionally need to &lt;code&gt;export&lt;/code&gt; these vars at &lt;code&gt;~/.profile&lt;/code&gt; or &lt;code&gt;~/.bash_profile&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thanks for reading and I hope to get an article out next time around setting up nginx on the server (and maybe Nginx Ingress in Kubernetes).&lt;/p&gt;

&lt;h2&gt;
  
  
  Useful Links
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://hexdocs.pm/phoenix/releases.html#containers"&gt;https://hexdocs.pm/phoenix/releases.html#containers&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/cogini/mix_systemd"&gt;https://github.com/cogini/mix_systemd&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/cogini/mix-deploy-example"&gt;https://github.com/cogini/mix-deploy-example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://render.com/docs/deploy-phoenix"&gt;https://render.com/docs/deploy-phoenix&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/SolarisJapan/lunaris-wiki/wiki/Deploy-Phoenix-Project"&gt;https://github.com/SolarisJapan/lunaris-wiki/wiki/Deploy-Phoenix-Project&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/phoenixframework/phoenix/blob/master/guides/deployment/releases.md#runtime-configuration"&gt;https://github.com/phoenixframework/phoenix/blob/master/guides/deployment/releases.md#runtime-configuration&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/davoclavo/61b9d84f2248f182c95ae7738490ddd1"&gt;https://gist.github.com/davoclavo/61b9d84f2248f182c95ae7738490ddd1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://hexdocs.pm/phoenix/releases.html#ecto-migrations-and-custom-commands"&gt;https://hexdocs.pm/phoenix/releases.html#ecto-migrations-and-custom-commands&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://elixirforum.com/t/elixir-apps-as-systemd-services-info-wiki/2400"&gt;https://elixirforum.com/t/elixir-apps-as-systemd-services-info-wiki/2400&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/@mcsonique/deploying-elixir-phoenix-projects-to-production-44a236c643c"&gt;https://medium.com/@mcsonique/deploying-elixir-phoenix-projects-to-production-44a236c643c&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units"&gt;https://www.digitalocean.com/community/tutorials/how-to-use-systemctl-to-manage-systemd-services-and-units&lt;/a&gt;&lt;/p&gt;

</description>
      <category>elixir</category>
      <category>phoenix</category>
    </item>
  </channel>
</rss>
