<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Scout APM</title>
    <description>The latest articles on DEV Community by Scout APM (@scoutapm).</description>
    <link>https://dev.to/scoutapm</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F610%2Fb818778d-77e1-4a89-9abd-f7f88fb6d189.png</url>
      <title>DEV Community: Scout APM</title>
      <link>https://dev.to/scoutapm</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/scoutapm"/>
    <language>en</language>
    <item>
      <title>Securing Ruby Applications with mTLS</title>
      <dc:creator>Jack Rothrock</dc:creator>
      <pubDate>Tue, 08 Aug 2023 19:04:09 +0000</pubDate>
      <link>https://dev.to/scoutapm/securing-ruby-applications-with-mtls-1gen</link>
      <guid>https://dev.to/scoutapm/securing-ruby-applications-with-mtls-1gen</guid>
      <description>&lt;h2&gt;
  
  
  TLDR
&lt;/h2&gt;

&lt;p&gt;TLDR: Implementing mTLS in Ruby is a pretty straightforward process, and a dockerized local example, using self-signed certificates, can be found at the following link: &lt;a href="https://github.com/scoutapp/mtls_example" rel="noopener noreferrer"&gt;https://github.com/scoutapp/mtls_example&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Additionally, we didn’t need to make any changes to our infrastructure, except for adding the certificate and keys to the entities originating the requests from our private subnets. &lt;/p&gt;

&lt;p&gt;See below for troubleshooting techniques which can be useful when setting up mTLS.&lt;/p&gt;

&lt;h2&gt;
  
  
  About mTLS
&lt;/h2&gt;

&lt;p&gt;Here at &lt;a href="https://scoutapm.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Scout&lt;/a&gt; we recently released the ability to perform mTLS with our webhook alerting. Before jumping into how we implemented mTLS, a quick recap on the what and the why of mTLS would be beneficial. &lt;/p&gt;

&lt;p&gt;Mutual Transport Layer Security (mTLS), unlike plain TLS, verifies identity in both directions: the client and the server each present a certificate and validate the other's. This helps confirm where the traffic is originating from and can prevent man-in-the-middle attacks. &lt;/p&gt;

&lt;p&gt;Since we are utilizing TLS, we also know that these messages are sent in an encrypted manner.&lt;/p&gt;

&lt;p&gt;Most of our APM services are written in Ruby. As such, this article focuses on how we approached the implementation from a Ruby perspective, and lightly goes over how it played out in our systems architecture. &lt;/p&gt;

&lt;p&gt;With that being said, most of this should be applicable to other languages and system designs. &lt;/p&gt;

&lt;p&gt;Additionally, we will lay out some strategies for troubleshooting potential issues that may arise when setting up mTLS.&lt;/p&gt;

&lt;p&gt;Let’s quickly take a look at what goes into performing an mTLS request. To keep things simple, we will be doing this with our own self-signed certs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing mTLS with self-signed certificates:
&lt;/h2&gt;

&lt;p&gt;This section gives a fairly high-level overview of what goes into creating the root CA certificate as well as the leaf client and server certs. Mutual TLS Authentication De-Mystified by John Tucker gives a more in-depth explanation of the subject:&lt;br&gt;
&lt;a href="https://codeburst.io/mutual-tls-authentication-mtls-de-mystified-11fa2a52e9cf" rel="noopener noreferrer"&gt;https://codeburst.io/mutual-tls-authentication-mtls-de-mystified-11fa2a52e9cf&lt;/a&gt;. These commands can also be found in the &lt;code&gt;create_certs.sh&lt;/code&gt; file in the repo above.&lt;/p&gt;

&lt;p&gt;Before we create the client and server certs we will first need to create the root certificate. From this root CA certificate we can sign the leaf certificates, establishing the chain of trust. &lt;/p&gt;

&lt;p&gt;When creating the root certificate, we specify that we are creating a self-signed x509 certificate (-x509), with an unencrypted private key (-nodes), valid for 365 days, with the common name (CN), the identity of the entity, being my-ca.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="nt"&gt;-new&lt;/span&gt;   &lt;span class="nt"&gt;-x509&lt;/span&gt;  &lt;span class="nt"&gt;-nodes&lt;/span&gt;  &lt;span class="nt"&gt;-days&lt;/span&gt; 365  &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=my-ca'&lt;/span&gt;  &lt;span class="nt"&gt;-keyout&lt;/span&gt; ca.key  &lt;span class="nt"&gt;-out&lt;/span&gt; ca.crt &lt;span class="nt"&gt;-sha256&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
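&lt;p&gt;For readers who prefer to stay in Ruby, the standard library’s OpenSSL bindings can produce an equivalent root certificate. Here is a sketch mirroring the command above (same file names and CN):&lt;/p&gt;

```ruby
require 'openssl'

# 2048-bit RSA key for the CA; left unencrypted, like the -nodes flag above
ca_key = OpenSSL::PKey::RSA.new(2048)

ca_cert = OpenSSL::X509::Certificate.new
ca_cert.version = 2                                  # X.509 v3
ca_cert.serial = 1
ca_cert.subject = OpenSSL::X509::Name.parse('/CN=my-ca')
ca_cert.issuer = ca_cert.subject                     # self-signed: issuer == subject
ca_cert.public_key = ca_key.public_key
ca_cert.not_before = Time.now
ca_cert.not_after = Time.now + 365 * 24 * 60 * 60    # valid for 365 days

# Mark it as a CA so it can sign the leaf certificates
ef = OpenSSL::X509::ExtensionFactory.new
ef.subject_certificate = ca_cert
ef.issuer_certificate = ca_cert
ca_cert.add_extension(ef.create_extension('basicConstraints', 'CA:TRUE', true))
ca_cert.add_extension(ef.create_extension('keyUsage', 'keyCertSign, cRLSign', true))

ca_cert.sign(ca_key, OpenSSL::Digest.new('SHA256'))  # same as -sha256

File.write('ca.key', ca_key.to_pem)
File.write('ca.crt', ca_cert.to_pem)
```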



&lt;p&gt;Before we create the server cert, we will need to create a private key as well as a Certificate Signing Request (CSR), and we are going to name the identity ‘localhost’. &lt;/p&gt;

&lt;p&gt;The CSR is used by the CA to confirm identity, such as the organization or domain the certificate is being requested for. Since we are the CA in this case, we can simply declare that it looks legit. Servers that don’t trust our root CA will disagree! More on that in a bit.&lt;/p&gt;

&lt;p&gt;Now let’s generate the server certificate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create the server private key&lt;/span&gt;
openssl genrsa &lt;span class="nt"&gt;-out&lt;/span&gt; server.key 2048

&lt;span class="c"&gt;# Create the server certificate signing request (CSR)&lt;/span&gt;
openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-key&lt;/span&gt; server.key &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=localhost'&lt;/span&gt; &lt;span class="nt"&gt;-out&lt;/span&gt; server.csr &lt;span class="nt"&gt;-sha256&lt;/span&gt;

&lt;span class="c"&gt;# Use the CA key to sign the server CSR and get back the signed certificate. Localhost&lt;/span&gt;
&lt;span class="c"&gt;# looks like a legit identity!&lt;/span&gt;
openssl x509 &lt;span class="nt"&gt;-req&lt;/span&gt; &lt;span class="nt"&gt;-in&lt;/span&gt; server.csr &lt;span class="nt"&gt;-CA&lt;/span&gt; ca.crt &lt;span class="nt"&gt;-CAkey&lt;/span&gt; ca.key &lt;span class="nt"&gt;-CAcreateserial&lt;/span&gt; &lt;span class="nt"&gt;-out&lt;/span&gt; server.crt &lt;span class="nt"&gt;-days&lt;/span&gt; 365 &lt;span class="nt"&gt;-sha256&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s generate the client cert. This is very similar to the creation of the server certs, first creating the private key then the CSR.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create the client private key&lt;/span&gt;
openssl genpkey &lt;span class="nt"&gt;-algorithm&lt;/span&gt; RSA &lt;span class="nt"&gt;-out&lt;/span&gt; client.key

&lt;span class="c"&gt;# Create the client certificate signing request (CSR)&lt;/span&gt;
openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-key&lt;/span&gt; client.key &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=client'&lt;/span&gt; &lt;span class="nt"&gt;-out&lt;/span&gt; client.csr &lt;span class="nt"&gt;-sha256&lt;/span&gt;

&lt;span class="c"&gt;# Use the CA key to sign the client CSR and get back the signed certificate&lt;/span&gt;
openssl x509 &lt;span class="nt"&gt;-req&lt;/span&gt; &lt;span class="nt"&gt;-in&lt;/span&gt; client.csr &lt;span class="nt"&gt;-CA&lt;/span&gt; ca.crt &lt;span class="nt"&gt;-CAkey&lt;/span&gt; ca.key &lt;span class="nt"&gt;-CAcreateserial&lt;/span&gt; &lt;span class="nt"&gt;-out&lt;/span&gt; client.crt &lt;span class="nt"&gt;-days&lt;/span&gt; 365 &lt;span class="nt"&gt;-sha256&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
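&lt;p&gt;The same request-and-sign flow can be expressed with Ruby’s OpenSSL bindings, which makes the role of the CSR explicit. This is a self-contained sketch: it regenerates a throwaway CA inline rather than reading the files created above.&lt;/p&gt;

```ruby
require 'openssl'

# Throwaway CA generated inline so this sketch stands alone
# (the shell equivalent lives in create_certs.sh)
ca_key = OpenSSL::PKey::RSA.new(2048)
ca_cert = OpenSSL::X509::Certificate.new
ca_cert.version = 2
ca_cert.serial = 1
ca_cert.subject = OpenSSL::X509::Name.parse('/CN=my-ca')
ca_cert.issuer = ca_cert.subject
ca_cert.public_key = ca_key.public_key
ca_cert.not_before = Time.now
ca_cert.not_after = Time.now + 365 * 24 * 60 * 60
ef = OpenSSL::X509::ExtensionFactory.new
ef.subject_certificate = ca_cert
ef.issuer_certificate = ca_cert
ca_cert.add_extension(ef.create_extension('basicConstraints', 'CA:TRUE', true))
ca_cert.sign(ca_key, OpenSSL::Digest.new('SHA256'))

# The CSR: the requested identity plus the requester's public key,
# signed with the requester's private key
client_key = OpenSSL::PKey::RSA.new(2048)
csr = OpenSSL::X509::Request.new
csr.subject = OpenSSL::X509::Name.parse('/CN=client')
csr.public_key = client_key.public_key
csr.sign(client_key, OpenSSL::Digest.new('SHA256'))

# The CA honours the CSR by issuing a certificate for its subject and key,
# signed with the CA key; this is what establishes the chain of trust
client_cert = OpenSSL::X509::Certificate.new
client_cert.version = 2
client_cert.serial = 2
client_cert.subject = csr.subject
client_cert.issuer = ca_cert.subject
client_cert.public_key = csr.public_key
client_cert.not_before = Time.now
client_cert.not_after = Time.now + 365 * 24 * 60 * 60
client_cert.sign(ca_key, OpenSSL::Digest.new('SHA256'))

File.write('client.key', client_key.to_pem)
File.write('client.crt', client_cert.to_pem)
```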



&lt;p&gt;There we have it. We have created all the certificates we need. Before moving on, there is one more thing we can do with the generated client certificate and key. We can concatenate them into a single combined PEM file, which has the benefit of only needing to pass/specify one file when making requests (instead of both the client cert and key). Note that this is plain PEM concatenation, not a PKCS#12 (.p12) bundle, which would instead be created with &lt;code&gt;openssl pkcs12&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a combined PEM file which will contain both the client cert and key&lt;/span&gt;
&lt;span class="nb"&gt;cat &lt;/span&gt;client/client.crt client/client.key &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; client/combined.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we are using self-signed certificates, we also need to provide the CA certificate. In most cases, where the certificates are signed by a trusted CA such as DigiCert, we can omit this because the CA certificate is already preinstalled in the system’s or browser’s trust store. &lt;/p&gt;

&lt;p&gt;Here are some examples of making this call from the client’s perspective:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://mtls.example.com &lt;span class="nt"&gt;--cert&lt;/span&gt; ./client.crt &lt;span class="nt"&gt;--key&lt;/span&gt; ./client.key &lt;span class="nt"&gt;--cacert&lt;/span&gt; ./ca.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we are using the combined PEM file we can just do the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://mtls.example.com &lt;span class="nt"&gt;--cert&lt;/span&gt; ./combined.pem  &lt;span class="nt"&gt;--cacert&lt;/span&gt; ./ca.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the server certificate is signed by a trusted CA, we can drop the CA file while still using the combined PEM file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl https://mtls.example.com &lt;span class="nt"&gt;--cert&lt;/span&gt; ./combined.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Ruby, making this call is pretty straightforward. We can make it using only what ships in the Ruby standard library, and mTLS is also supported by several third-party Ruby HTTP libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'net/http'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'net/https'&lt;/span&gt;
&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'uri'&lt;/span&gt;

&lt;span class="n"&gt;client_cert_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'./client.crt'&lt;/span&gt;
&lt;span class="n"&gt;client_key_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'./client.key'&lt;/span&gt;
&lt;span class="n"&gt;server_ca_cert_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'./ca.crt'&lt;/span&gt;

&lt;span class="n"&gt;server_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;URI&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'https://localhost:443/'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;client_cert&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;OpenSSL&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;X509&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Certificate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client_cert_path&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;client_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;OpenSSL&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;PKey&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;RSA&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client_key_path&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="n"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Net&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;HTTP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;server_url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;host&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;server_url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;port&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use_ssl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;true&lt;/span&gt;
&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cert&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client_cert&lt;/span&gt;
&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client_key&lt;/span&gt;
&lt;span class="c1"&gt;# In most cases, we can omit this as the signing CA cert is already in the system's trust store.&lt;/span&gt;
&lt;span class="c1"&gt;# However, since we are self-signing and it currently isn't in the trust store we need to provide it.&lt;/span&gt;
&lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ca_file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server_ca_cert_path&lt;/span&gt;

&lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Net&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;HTTP&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;server_url&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;request_uri&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Response Code: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;code&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="s2"&gt;"Response Body: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;body&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
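&lt;p&gt;To see both sides of the handshake without standing up Nginx, the whole exchange can be run in-process: an &lt;code&gt;OpenSSL::SSL::SSLServer&lt;/code&gt; that requires a client certificate, and a client socket that presents one. This is only a sketch with throwaway certificates; the &lt;code&gt;issue_cert&lt;/code&gt; helper is hypothetical, written just for this example:&lt;/p&gt;

```ruby
require 'openssl'
require 'socket'

# Hypothetical helper: issue a certificate, self-signed when no issuer is given
def issue_cert(subject, key, serial, issuer_cert: nil, issuer_key: nil, ca: false)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2
  cert.serial = serial
  cert.subject = OpenSSL::X509::Name.parse(subject)
  cert.issuer = issuer_cert ? issuer_cert.subject : cert.subject
  cert.public_key = key.public_key
  cert.not_before = Time.now - 60
  cert.not_after = Time.now + 3600
  if ca
    ef = OpenSSL::X509::ExtensionFactory.new
    ef.subject_certificate = cert
    ef.issuer_certificate = cert
    cert.add_extension(ef.create_extension('basicConstraints', 'CA:TRUE', true))
  end
  cert.sign(issuer_key || key, OpenSSL::Digest.new('SHA256'))
  cert
end

ca_key      = OpenSSL::PKey::RSA.new(2048)
ca_cert     = issue_cert('/CN=my-ca', ca_key, 1, ca: true)
server_key  = OpenSSL::PKey::RSA.new(2048)
server_cert = issue_cert('/CN=localhost', server_key, 2,
                         issuer_cert: ca_cert, issuer_key: ca_key)
client_key  = OpenSSL::PKey::RSA.new(2048)
client_cert = issue_cert('/CN=client', client_key, 3,
                         issuer_cert: ca_cert, issuer_key: ca_key)

store = OpenSSL::X509::Store.new
store.add_cert(ca_cert)

# Server side: VERIFY_PEER plus FAIL_IF_NO_PEER_CERT is what makes this mutual
server_ctx = OpenSSL::SSL::SSLContext.new
server_ctx.cert = server_cert
server_ctx.key = server_key
server_ctx.cert_store = store
server_ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER |
                         OpenSSL::SSL::VERIFY_FAIL_IF_NO_PEER_CERT

tcp = TCPServer.new('127.0.0.1', 0)
port = tcp.addr[1]
ssl_server = OpenSSL::SSL::SSLServer.new(tcp, server_ctx)

server_thread = Thread.new do
  conn = ssl_server.accept            # completes the TLS handshake
  peer = conn.peer_cert.subject.to_s  # the verified client identity
  conn.gets
  conn.puts("hello #{peer}")
  conn.close
end

# Client side: presents its own cert and verifies the server against our CA
client_ctx = OpenSSL::SSL::SSLContext.new
client_ctx.cert = client_cert
client_ctx.key = client_key
client_ctx.cert_store = store
client_ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER

ssl = OpenSSL::SSL::SSLSocket.new(TCPSocket.new('127.0.0.1', port), client_ctx)
ssl.connect
ssl.puts('ping')
reply = ssl.gets
ssl.close
server_thread.join

puts reply
```

Because the server verified the client certificate against the CA store, it can report the authenticated identity (&lt;code&gt;/CN=client&lt;/code&gt;) back to the caller, which is exactly the property mTLS buys us.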



&lt;p&gt;From the server’s point of view, most architectures (especially in Rails land, with the semi-exception of Passenger) put a reverse proxy such as Nginx or Apache in front of an application server such as Unicorn or Puma. The mTLS validation can happen at the proxy, and for this example we will use Nginx:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;server &lt;span class="o"&gt;{&lt;/span&gt;
    listen 443 ssl&lt;span class="p"&gt;;&lt;/span&gt;

    ssl_certificate /etc/nginx/ssl/server_certs/server.crt&lt;span class="p"&gt;;&lt;/span&gt;
    ssl_certificate_key /etc/nginx/ssl/server_certs/server.key&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c"&gt;# Enable client certificate authentication&lt;/span&gt;
    ssl_client_certificate /etc/nginx/ssl/ca_certs/ca.crt&lt;span class="p"&gt;;&lt;/span&gt;
    ssl_verify_client on&lt;span class="p"&gt;;&lt;/span&gt;

    location / &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ssl_client_verify&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; SUCCESS&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
             &lt;span class="k"&gt;return &lt;/span&gt;403&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        proxy_pass http://web_server:port&lt;span class="p"&gt;;&lt;/span&gt;
         &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down the parts of this which are responsible for mTLS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    ssl_client_certificate /etc/nginx/ssl/ca_certs/ca.crt&lt;span class="p"&gt;;&lt;/span&gt;
    ssl_verify_client on&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These directives tell Nginx to enable client certificate validation and specify which CA certificate Nginx will use to validate the client cert:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_verify_client" rel="noopener noreferrer"&gt;http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_verify_client&lt;/a&gt;&lt;br&gt;
&lt;a href="http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate" rel="noopener noreferrer"&gt;http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_client_certificate&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ssl_client_verify&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; SUCCESS&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return &lt;/span&gt;403&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This block then returns a 403 response if the client certificate cannot be validated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure:
&lt;/h2&gt;

&lt;p&gt;As mentioned in the TLDR, thankfully this section is pretty short. Ultimately, we did not need to make any changes to our infrastructure/networking/routing, except for adding the client cert and key to the various entities that could make outbound webhook requests. &lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting:
&lt;/h2&gt;

&lt;p&gt;Whatever can go wrong will go wrong, so we need troubleshooting tools.&lt;/p&gt;

&lt;h4&gt;
  
  
  curl
&lt;/h4&gt;

&lt;p&gt;The first tool in our arsenal, the easiest to use but quite powerful, is adding the -vvv flag to the curl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-vvv&lt;/span&gt; https://mtls.example.com &lt;span class="nt"&gt;--cert&lt;/span&gt; ./combined.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flridu9fld9pjyu6fqo7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flridu9fld9pjyu6fqo7l.png" alt="curl output" width="708" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The important parts to look for are the client (out) and server (in) hellos, the certificate messages in both directions, the client and server key exchanges and, most importantly for mTLS, the CERT verify messages. Using this tool, it’s possible to figure out where in the handshake things are going wrong.&lt;/p&gt;

&lt;h4&gt;
  
  
  s_client
&lt;/h4&gt;

&lt;p&gt;The second tool in our arsenal, which gives more information, is s_client. It operates at a lower level and gives more insight into the SSL/TLS process: it can decrypt and display the contents of the actual SSL handshake messages, as opposed to curl showing only which handshake messages occurred. &lt;/p&gt;

&lt;p&gt;Normally, these messages would be encrypted, so if we were to tcpdump all the traffic and load it into Wireshark, we wouldn’t be able to view the contents of the SSL handshake (without the session keys, which can be tricky to obtain). &lt;/p&gt;

&lt;p&gt;A good combination is to use curl to get a high level overview of the handshake process, and dive into the individual parts of the handshake as needed with s_client.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl s_client &lt;span class="nt"&gt;-connect&lt;/span&gt; mtls.example.com:443 &lt;span class="nt"&gt;-key&lt;/span&gt; ./combined.pem &lt;span class="nt"&gt;-CAfile&lt;/span&gt; ./ca.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flof4diwb8r70nm4oo1rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flof4diwb8r70nm4oo1rq.png" alt="s_curl output" width="738" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For example, looking at the screenshot above, the certificate request message (Request Cert) sent by the server shows the “Acceptable client certificate CA names” that it is set up to handle. Note CN = my-ca: this is the identity we assigned when we created the CA certificate. Therefore, we know the server can handle the cert we created.&lt;/p&gt;
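&lt;p&gt;The same failure mode can also be reproduced and inspected from Ruby. The sketch below, using a throwaway self-signed certificate, stands up a server that requires a client certificate and then connects without one; the rejection surfaces as an &lt;code&gt;OpenSSL::SSL::SSLError&lt;/code&gt; (or an abrupt close, depending on the TLS version):&lt;/p&gt;

```ruby
require 'openssl'
require 'socket'

# Throwaway self-signed server certificate, just for this sketch
key = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version = 2
cert.serial = 1
cert.subject = OpenSSL::X509::Name.parse('/CN=localhost')
cert.issuer = cert.subject
cert.public_key = key.public_key
cert.not_before = Time.now - 60
cert.not_after = Time.now + 3600
cert.sign(key, OpenSSL::Digest.new('SHA256'))

# A server that demands a client certificate, as Nginx does with
# ssl_verify_client on
server_ctx = OpenSSL::SSL::SSLContext.new
server_ctx.cert = cert
server_ctx.key = key
server_ctx.verify_mode = OpenSSL::SSL::VERIFY_PEER |
                         OpenSSL::SSL::VERIFY_FAIL_IF_NO_PEER_CERT

tcp = TCPServer.new('127.0.0.1', 0)
port = tcp.addr[1]
ssl_server = OpenSSL::SSL::SSLServer.new(tcp, server_ctx)
server_thread = Thread.new do
  ssl_server.accept # fails server-side when no client cert arrives
rescue OpenSSL::SSL::SSLError
  # e.g. "peer did not return a certificate" (message varies by version)
end

# A client that presents no certificate; VERIFY_NONE so the self-signed
# server cert doesn't fail first, isolating the missing-client-cert error
client_ctx = OpenSSL::SSL::SSLContext.new
client_ctx.verify_mode = OpenSSL::SSL::VERIFY_NONE

failure = nil
begin
  ssl = OpenSSL::SSL::SSLSocket.new(TCPSocket.new('127.0.0.1', port), client_ctx)
  ssl.connect          # TLS 1.2: the server's alert surfaces right here
  ssl.puts('ping')     # TLS 1.3: it may only surface on the first I/O...
  failure = :closed if ssl.gets.nil?  # ...or as an abrupt close
rescue OpenSSL::SSL::SSLError, EOFError, Errno::ECONNRESET, Errno::EPIPE => e
  failure = e
end
server_thread.join

puts failure ? "rejected as expected: #{failure.inspect}" : "unexpectedly succeeded"
```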

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;As we’ve seen, setting up mTLS in Ruby is a pretty straightforward process, and if problems arise, the troubleshooting techniques laid out above can make overcoming these hurdles easier. &lt;/p&gt;

&lt;p&gt;That’s how we feel about our APM and Observability tools here at &lt;a href="https://scoutapm.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Scout&lt;/a&gt;. We help you uncover the issues quicker so you can get back to building. Talk to one of our team members today and understand how we can help you save time, energy and prevent future issues: &lt;a href="mailto:support@scoutapm.com"&gt;support@scoutapm.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mtls</category>
      <category>ruby</category>
      <category>aws</category>
      <category>tls</category>
    </item>
    <item>
      <title>Benchmarking Ruby Code</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Tue, 13 Aug 2019 02:05:44 +0000</pubDate>
      <link>https://dev.to/scoutapm/benchmarking-ruby-code-3gkc</link>
      <guid>https://dev.to/scoutapm/benchmarking-ruby-code-3gkc</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://blog.scoutapm.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One of the joys of using the Ruby language is the many different ways that you can solve the same problem; it’s a very expressive language with a rich set of libraries. But how do we know which is the best, most efficient use of the language? When we are talking about algorithms critical to the performance of your application, understanding the most efficient approach is essential. &lt;a href="https://scoutapm.com/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Perhaps you’ve been using Scout APM&lt;/a&gt; to hunt down issues, and now that you have found one, you want to optimize it. Ruby’s Benchmark module provides a very handy way to compare and contrast possible optimizations, and used in conjunction with a good APM solution it will ensure that you have all bases covered. Let’s take a look at how you can get started with it today!  &lt;/p&gt;

&lt;h2&gt;
  
  
  Ruby’s Benchmarking Module
&lt;/h2&gt;

&lt;p&gt;Ruby provides a nice little module called Benchmark that is sufficient for most people's needs. It allows you to execute blocks of code and report their execution time, letting you see which approach is fastest. All you have to do is require the Benchmark module like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'benchmark'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are several different methods you can use, which we will cover shortly. First, though, regardless of which method you choose, Ruby reports benchmarking results as a Benchmark::Tms object, which looks like this:&lt;br&gt;
&lt;br&gt;
 &lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;user        system        total        real

5.305745    0.000026      5.305771     5.314130
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The first number shows the user CPU time: how long it took to execute the code, ignoring system-level calls. This is followed by the system CPU time, the time spent in the kernel, perhaps executing system calls. The third number is the total of these two values, and the last number shows the real (wall-clock) execution time, as if you timed the operation from start to finish.&lt;/p&gt;
&lt;h2&gt;
  
  
  Measuring a single block of code
&lt;/h2&gt;

&lt;p&gt;If you want to measure the performance of a single block of code, and you are not interested in comparing multiple approaches, then you can use the Benchmark::measure method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;measure&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;

  &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"lorem ipsum"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="nb"&gt;puts&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The measure method (like all the methods in this module) returns the result in the Benchmark::Tms format, which you can print easily. Remember that the block can contain any code, such as a method call.&lt;/p&gt;
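&lt;p&gt;The returned Benchmark::Tms object also exposes each column programmatically, which is handy if you want to log or compare timings rather than just print them:&lt;/p&gt;

```ruby
require 'benchmark'

result = Benchmark.measure { 100_000.times { x = "lorem ipsum" } }

# Each column of the printed output is available as a method
puts result.utime  # user CPU time
puts result.stime  # system CPU time
puts result.total  # utime + stime (plus any child process times)
puts result.real   # wall-clock time
```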

&lt;h2&gt;
  
  
  Comparing multiple blocks of code
&lt;/h2&gt;

&lt;p&gt;When you want to compare two or more different approaches to a problem to determine which is best, you can use the Benchmark::bm method. This method is an easier-to-use version of the Benchmark::benchmark method.&lt;/p&gt;

&lt;p&gt;In the code snippet below, we define two methods with slightly different approaches: one uses a for loop and one uses the times method. By using Benchmark::bm we can call the report method, passing a value for the label and the block of code that we want benchmarked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;old_method&lt;/span&gt;
  &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;..&lt;/span&gt;&lt;span class="mi"&gt;100_000_000&lt;/span&gt;
    &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"lorem ipsum"&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;new_method&lt;/span&gt;
  &lt;span class="mi"&gt;100_000_000&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;times&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"lorem ipsum"&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="no"&gt;Benchmark&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bm&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Old:"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;old_method&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"New:"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;new_method&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the output that is generated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;        user        system        total        real

Old:    5.305745    0.000026      5.305771     &lt;span class="o"&gt;(&lt;/span&gt;  5.314130&lt;span class="o"&gt;)&lt;/span&gt;

New:    5.084740    0.000000      5.084740     &lt;span class="o"&gt;(&lt;/span&gt;  5.092787&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In situations where the results may be skewed by garbage collection overhead, a similar method, Benchmark.bmbm, is provided. This method benchmarks twice: a rehearsal pass followed by the real measurement, with the output displayed for both. As you can see in the example below, execution was faster on the second pass.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Rehearsal &lt;span class="nt"&gt;----------------------------------------&lt;/span&gt;

Old:   5.470750   0.000000 5.470750 &lt;span class="o"&gt;(&lt;/span&gt;  5.479510&lt;span class="o"&gt;)&lt;/span&gt;

New:   5.131210   0.000000 5.131210 &lt;span class="o"&gt;(&lt;/span&gt;  5.139404&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="nt"&gt;------------------------------&lt;/span&gt; total: 10.601960sec

       user       system   total    real

Old:   5.432181   0.000000 5.432181 &lt;span class="o"&gt;(&lt;/span&gt;  5.440627&lt;span class="o"&gt;)&lt;/span&gt;

New:   5.061602   0.000000 5.061602 &lt;span class="o"&gt;(&lt;/span&gt;  5.069408&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
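&lt;p&gt;The rehearsal output above comes from &lt;code&gt;Benchmark.bmbm&lt;/code&gt;, which is a drop-in replacement for &lt;code&gt;Benchmark.bm&lt;/code&gt;. Here is a minimal sketch, with hypothetical method bodies and much smaller iteration counts than the earlier example so it finishes quickly:&lt;/p&gt;

```ruby
require 'benchmark'

# Hypothetical method bodies, scaled down so the run finishes quickly;
# the earlier example loops 100_000_000 times.
def old_method
  200_000.times { "lorem" + " ipsum" }
end

def new_method
  200_000.times { "lorem ipsum" }
end

# bmbm performs a rehearsal pass before the measured pass, reducing
# the impact of garbage collection and other warm-up costs.
Benchmark.bmbm do |x|
  x.report("Old:") { old_method }
  x.report("New:") { new_method }
end
```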



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As you can see, Ruby’s Benchmark module is easy to use, and the insights it provides can be incredibly useful. Once you’ve got your feet wet with the standard Benchmark module, you can take it a step further &lt;a href="https://github.com/evanphx/benchmark-ips" rel="noopener noreferrer"&gt;with the benchmark-ips gem&lt;/a&gt; (which reports iterations per second) or &lt;a href="https://github.com/garybernhardt/readygo" rel="noopener noreferrer"&gt;the Readygo gem&lt;/a&gt; (which takes a more unique approach to benchmarking). Or if you’d rather just reap the rewards of somebody else’s benchmarking of Ruby’s many features, &lt;a href="https://github.com/JuanitoFatas/fast-ruby" rel="noopener noreferrer"&gt;then this page is an interesting read&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
    </item>
    <item>
      <title>Monitoring Apdex with Scout APM</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Fri, 19 Jul 2019 07:15:39 +0000</pubDate>
      <link>https://dev.to/scoutapm/monitoring-apdex-with-scout-apm-49cl</link>
      <guid>https://dev.to/scoutapm/monitoring-apdex-with-scout-apm-49cl</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/monitoring-apdex-with-scout-apm/?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There is no doubt that looking at response times and memory usage is essential to understanding the general health and performance of your application. But as I am sure you are aware, there is more than one way to monitor an application. Approaching monitoring from a different angle can be a powerful way of gaining new insights. If all you did was watch for high response times or areas of memory bloat, then you might overlook something far simpler: the user’s general level of satisfaction. So how can we monitor this rather broad concept of user satisfaction? Well, we can do so with a rather useful metric known as the Apdex score...&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Apdex?
&lt;/h2&gt;

&lt;p&gt;The &lt;em&gt;Application Performance Index&lt;/em&gt;, or &lt;strong&gt;Apdex&lt;/strong&gt;, is a measurement of a user’s general level of satisfaction when using an application. For example, if part of the system takes a long time to respond after a feature update, then the user naturally starts to become frustrated. This frustration is what we are trying to measure when we talk about Apdex: in this scenario we would see the Apdex score fall, which would be a clear indicator that a recent change has introduced a potential issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6fno9plzgrvrzl9s5vj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6fno9plzgrvrzl9s5vj.png" width="700" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are three levels of customer satisfaction with regard to Apdex (summarized in the table above): satisfied, tolerating and frustrated. The &lt;strong&gt;satisfied&lt;/strong&gt; level is when the system response time is equal to or below some chosen value which we know the user would be happy with. We call this value the &lt;em&gt;Threshold&lt;/em&gt;. We can assume that a user will get &lt;strong&gt;frustrated&lt;/strong&gt; with the system when the response time exceeds four times this threshold. The &lt;strong&gt;tolerating&lt;/strong&gt; level, then, is when the response time falls between these two scenarios: the user is not yet frustrated, but there is a risk that they will become frustrated if the situation continues or worsens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tsbc34cpgy1d4wzu7hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tsbc34cpgy1d4wzu7hf.png" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we take all the requests made over a set period of time and classify them as satisfied, tolerating or frustrated, we can compute a ratio known as the &lt;strong&gt;Apdex score&lt;/strong&gt;, which clearly shows us at any given time how &lt;strong&gt;satisfied our customers are&lt;/strong&gt;. An Apdex score of 1 would indicate that all of our customers are fully satisfied, whereas a score of 0 would tell us that none of our customers are happy. The equation shown above is how Scout calculates the Apdex score; note that tolerating requests carry less weight, because we assume that these users are not 100% satisfied. &lt;/p&gt;
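&lt;p&gt;As a sketch of the idea, the commonly published Apdex formula counts each tolerating request as half a satisfied one. The threshold value below is purely illustrative; Scout’s own weighting is the one given by its equation above.&lt;/p&gt;

```ruby
# Sketch of the standard Apdex calculation. With threshold T:
#   satisfied:  response_time <= T
#   tolerating: T < response_time <= 4T
#   frustrated: response_time > 4T
def apdex_score(response_times, threshold)
  satisfied  = response_times.count { |t| t <= threshold }
  tolerating = response_times.count { |t| t > threshold && t <= 4 * threshold }
  (satisfied + tolerating / 2.0) / response_times.size
end

apdex_score([0.2, 0.4, 1.0, 3.0], 0.5)  # => 0.625
```

&lt;p&gt;Here two satisfied requests, one tolerating request and one frustrated request yield (2 + 0.5) / 4 = 0.625.&lt;/p&gt;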

&lt;h2&gt;
  
  
  Using Scout to measure Apdex
&lt;/h2&gt;

&lt;p&gt;The Apdex score of your application is visible throughout Scout on our main charts. You can toggle this metric on or off at any time using the APDEX option shown in the image below. You can then try experimenting by combining the Apdex score with another metric in order to identify related patterns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F662focaogcfhfu9ylm4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F662focaogcfhfu9ylm4k.png" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to improve your Apdex score?
&lt;/h2&gt;

&lt;p&gt;In order to improve your Apdex score, you first need to be aware of it. So you will need to monitor the situation with &lt;a href="https://scoutapm.com/users/sign_up?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;an APM solution like Scout&lt;/a&gt;. Once you start monitoring the Apdex score, you might find that patterns begin to emerge, such as sudden drops after particular deployments (shown by the rocket icon). Another example might be repeated high spikes in response times during certain time periods that rapidly drive down the Apdex score, showing you that perhaps you have an infrastructure limitation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiz8gaopv8dmm928gj01n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiz8gaopv8dmm928gj01n.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As you can see, Apdex is an interesting way to monitor your applications. It allows you to ask questions like: “Did this feature annoy people?” and “Have our users been happy with their experience recently?”. These sorts of questions can be hard to answer by other means, and so Apdex is definitely something that we recommend you keep an eye on.&lt;/p&gt;

</description>
      <category>apdex</category>
      <category>apm</category>
      <category>devops</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>8 Things You Should Know About Docker Containers</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 06:36:28 +0000</pubDate>
      <link>https://dev.to/scoutapm/8-things-you-should-know-about-docker-containers-4bdn</link>
      <guid>https://dev.to/scoutapm/8-things-you-should-know-about-docker-containers-4bdn</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/8-things-you-should-know-about-docker-containers?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;These days Docker is everywhere! Since this popular, open-source container tool first launched in 2013 it has gone on to revolutionize how we think about deploying our applications. But if you missed the boat with containerization and are left feeling confused about what exactly Docker is and how it can benefit you, then we’ve put together this post to help clear up any confusion you might have.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. What is Docker?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt; is a tool that allows you to easily create, run and deploy your applications in containers. These lightweight, virtual containers can be easily deployed on a server without you having to worry about the specifics of the underlying system. This gives developers great power and flexibility and allows us to avoid “dependency hell” issues when deploying in different environments. Docker is an open source technology and it is built directly on top of Linux containers, which makes it much faster and more lightweight when compared to VMs (Virtual Machines).&lt;/p&gt;

&lt;h2&gt;
  
  
  2. What is a Docker Container?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;container&lt;/strong&gt; can best be thought of as a standard unit of software. A self-contained package, if you like. Everything that is needed to run the software, such as code, libraries, tools and dependencies, is packaged up together. This neat, little package can then be replicated and cloned in different environments (staging, live system, experiments etc.), but it will always run in the same way, no matter what the underlying architecture is.&lt;/p&gt;

&lt;p&gt;Now, you might think that this sounds very similar to a VM, and you’re right, it does! But there are some key differences with containers that make them particularly useful for the deployment of software applications. The key distinction between a container and a VM is that containers virtualize the &lt;em&gt;OS&lt;/em&gt;, whereas VMs virtualize the &lt;em&gt;hardware&lt;/em&gt;. This allows containers to share elements with each other, such as the underlying operating system kernel of the machine where the container is running. This gives more efficient performance, much smaller file sizes and faster deployments.&lt;/p&gt;

&lt;p&gt;So how do we create one of these containers? First we start with an image, customize it to our needs, and then run it. That leads us to the next question: what exactly is a &lt;strong&gt;Docker image&lt;/strong&gt;?&lt;/p&gt;

&lt;h2&gt;
  
  
  3. What is a Docker Image?
&lt;/h2&gt;

&lt;p&gt;An &lt;strong&gt;image&lt;/strong&gt; is a shareable chunk of functionality such as a web server, a database engine or a Linux distribution. You can think of an image as a starting point for your containers; they are like blueprints. Each image is an untouched, fresh install of a complete environment. A container, then, is a running instance of an image after it has been customized and set up.&lt;/p&gt;

&lt;p&gt;These images can be hosted on &lt;strong&gt;Docker Hub&lt;/strong&gt; or in your own private repositories, and downloaded for reuse. To run an image inside a container, you use the Docker Engine. This running container is the complete package that we talked about earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. What is Docker Engine?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker Engine&lt;/strong&gt; is the name of the runtime that makes this whole system work. When people refer to Docker, they are usually talking about the Docker Engine. Containers run inside this Docker Engine. The most common way to run and manage containers is by using the Docker CLI (Command Line Interface) application. The Docker CLI communicates with the Docker Engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. What is the Docker CLI?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Docker CLI&lt;/strong&gt; is how we communicate with the &lt;strong&gt;Docker Engine&lt;/strong&gt;, so let’s take a look at how we can create, run and delete containers using this Docker CLI.&lt;/p&gt;

&lt;p&gt;Running this command will list all the images that you have on your machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can download an image from Docker Hub with the ‘pull’ command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker pull image-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then create a container from an image and run it with the ‘run’ command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run image-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we want to see what containers exist (running or stopped) we can use ‘ps’ with the ‘-a’ flag for all:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker ps &lt;span class="nt"&gt;-a&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don’t have any images yet, you can start with a simple hello world example from Docker Hub. The ‘run’ command will start a new container using the image that you specify. If that image doesn’t exist locally, Docker will look for it on Docker Hub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To clean up afterwards, you can find the container’s ID by running ‘docker ps -a’ and then delete it with the ‘docker rm’ command; alternatively, the command below deletes all stopped containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker container prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. What is a Dockerfile?
&lt;/h2&gt;

&lt;p&gt;We’ve seen how to create and run containers from the command line, but in reality creating and running a container with many dependencies and start-up requirements can quickly become quite complex to do in a single command. Therefore, we can instead use a special file called a &lt;strong&gt;Dockerfile&lt;/strong&gt;, which we can place inside our project’s directory and share in source control. Anybody with this Dockerfile can then run the same container in exactly the same way.&lt;/p&gt;

&lt;p&gt;A Dockerfile is a file in which we can describe our Docker container. Here we can define things such as the name of an image to start from, where our application source code is located, and the commands that need to be run to start our application. Using this Dockerfile, Docker has all the information it needs to create and run our application inside a container. For a simple container, this one file is all we need to completely manage it. So let’s take a look at what a sample Dockerfile for a Python project might look like.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start from the official Python image&lt;/span&gt;
FROM python:3

&lt;span class="c"&gt;# Make /code the working directory&lt;/span&gt;
WORKDIR /code

&lt;span class="c"&gt;# Copy everything in the working directory to the /code directory inside the container&lt;/span&gt;
COPY &lt;span class="nb"&gt;.&lt;/span&gt; /code

&lt;span class="c"&gt;# Use the Python installer to install packages defined in requirements.txt&lt;/span&gt;
RUN pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# When the container starts, run the file app.py&lt;/span&gt;
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;, &lt;span class="s2"&gt;"app.py"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we have our Dockerfile, we can create an image from it in the current directory like this (replace image-name with your own name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; image-name &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then we can run it as a container like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker run image-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  7. What is Docker Compose?
&lt;/h2&gt;

&lt;p&gt;In the real world, our applications span multiple processes. For example, perhaps we have a web application, which sits on top of a database engine and interfaces with a REST API. How can we make that system work with containers?&lt;/p&gt;

&lt;p&gt;Well the idea is that each container should do just one task, and so each of these separate parts of our system should be in their own container. This means we need a multi-container environment. So now we need to manage how these separate containers can work together and communicate with each other, and this is where &lt;strong&gt;Docker Compose&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;p&gt;To use Docker Compose we need to create a &lt;strong&gt;docker-compose.yml&lt;/strong&gt; file as well as a Dockerfile. The docker-compose.yml file ties together multiple containers which are referred to as services. In this example file, there is a “db” service and a “web” service. The “db” service uses the official PostgreSQL image and the “web” service uses an image that has been built in this current directory using a Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;version: &lt;span class="s1"&gt;'3'&lt;/span&gt;

services:
  db:
    image: postgres
  web:
    build: &lt;span class="nb"&gt;.&lt;/span&gt;
    &lt;span class="nb"&gt;command&lt;/span&gt;: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - &lt;span class="s2"&gt;"8000:8000"&lt;/span&gt;
    depends_on:
      - db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is also a separate CLI application for Docker Compose, which builds on top of the standard Docker CLI. Once we have a Dockerfile and a docker-compose.yml file set up we can run all our connected containers as one like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  8. What is Docker Hub?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker Hub&lt;/strong&gt; is Docker’s official public repository of container images. Here you can find the official images for Linux distributions or database engines etc. which you can use in your own containers as starting points. For example, if your application uses a PostgreSQL database engine, then you can specify the official PostgreSQL image from Docker Hub in your Dockerfile to instantly use it.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>Building Docker Containers for our Rails Apps</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 06:23:53 +0000</pubDate>
      <link>https://dev.to/scoutapm/building-docker-containers-for-our-rails-apps-178k</link>
      <guid>https://dev.to/scoutapm/building-docker-containers-for-our-rails-apps-178k</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/building-docker-containers-for-our-rails-apps?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In a recent post, we talked about &lt;a href="https://scoutapm.com/blog/8-things-you-should-know-about-docker-containers?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;8 things you should know about Docker containers&lt;/a&gt;. Hopefully we cleared up any confusion you might have had about the Docker ecosystem. Perhaps all that talk got you thinking about trying it out on one of your own applications? Well, in this post we’d like to show you how easy it is to take your existing Ruby on Rails applications and run them inside a container. So, let’s assume you have an existing Rails project with a PostgreSQL database, and let’s walk through the steps it would take to run this in a container instead. It’s a lot easier than you probably think!&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the Dockerfile
&lt;/h2&gt;

&lt;p&gt;The first thing that we need to do to get our application to run in a container is to define our custom image that we will run as a container, and we can do this in a &lt;strong&gt;Dockerfile&lt;/strong&gt;. This Dockerfile is essentially a set of instructions for Docker to use when it builds our container image. The idea is that this file is all that is required to produce an identical container on any system, and so we can add this file to source control so that everybody in our team can utilize it. Let’s create the following file called "Dockerfile" and place it inside our project’s root directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start from the official ruby image, then update and install JS &amp;amp; DB&lt;/span&gt;
FROM ruby:2.6.2
RUN apt-get update &lt;span class="nt"&gt;-qq&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; nodejs postgresql-client

&lt;span class="c"&gt;# Create a directory for the application and use it&lt;/span&gt;
RUN &lt;span class="nb"&gt;mkdir&lt;/span&gt; /myapp
WORKDIR /myapp

&lt;span class="c"&gt;# Gemfile and lock file need to be present, they'll be overwritten immediately&lt;/span&gt;
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock

&lt;span class="c"&gt;# Install gem dependencies&lt;/span&gt;
RUN bundle &lt;span class="nb"&gt;install
&lt;/span&gt;COPY &lt;span class="nb"&gt;.&lt;/span&gt; /myapp

&lt;span class="c"&gt;# This script runs every time the container is created, necessary for rails&lt;/span&gt;
COPY entrypoint.sh /usr/bin/
RUN &lt;span class="nb"&gt;chmod&lt;/span&gt; +x /usr/bin/entrypoint.sh
ENTRYPOINT &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"entrypoint.sh"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
EXPOSE 3000

&lt;span class="c"&gt;# Start rails&lt;/span&gt;
CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"rails"&lt;/span&gt;, &lt;span class="s2"&gt;"server"&lt;/span&gt;, &lt;span class="s2"&gt;"-b"&lt;/span&gt;, &lt;span class="s2"&gt;"0.0.0.0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s take a step-by-step look at this file to understand exactly what we are asking Docker to do. With the ‘FROM’ statement, we start from an official ruby Docker image (hosted on Docker Hub), and copy this into a brand new image. Inside this new image, we call ‘RUN’, which updates the package lists and installs a JavaScript runtime and a PostgreSQL client. With the next ‘RUN’ command, we create a directory inside our image called myapp (note that you should change references to "myapp" to &lt;em&gt;your&lt;/em&gt; application’s directory name) and then we set this as the working directory of our image. The next step is to use ‘COPY’ to get our Gemfile and Gemfile.lock files from our project into this image. These files need to be present so that ‘RUN bundle install’ can install all the gems into the image. We then use ‘COPY’ to copy our entire project into the image. The next four lines are specific to Rails projects, allowing them to run correctly in containers; don’t worry too much if you don’t understand this part, but we will need to add the file shown below to our project’s directory and call it &lt;strong&gt;entrypoint.sh&lt;/strong&gt; for this to work. The final line of the Dockerfile, ‘CMD’, will kick off the Rails server when the image is run in a container. &lt;/p&gt;

&lt;p&gt;This is the &lt;strong&gt;entrypoint.sh&lt;/strong&gt; file that we need to create and add to our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Rails-specific issue, deletes a pre-existing server, if it exists&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /myapp/tmp/pids/server.pid
&lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So using this &lt;strong&gt;Dockerfile&lt;/strong&gt; and the &lt;strong&gt;entrypoint.sh&lt;/strong&gt; script, we can build our image with a single command. But you might have noticed that we haven’t specified our PostgreSQL database details yet. Database engines usually run in their own containers, separate from your web application. The good news is that we don’t have to define a custom image with a Dockerfile for this database container; we can just use a standard PostgreSQL image from Docker Hub as it is. But it does still mean that we will have two separate containers that need to communicate with each other. How do we do that? That’s where &lt;strong&gt;Docker Compose&lt;/strong&gt; comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the docker-compose.yml file
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker Compose&lt;/strong&gt; is a tool that allows us to connect multiple containers together into a multi-container environment that we can think of as a service. For example, in our situation we have a database engine container and a rails environment container, so we could use Docker Compose to combine these into a multi-container environment which we can conceptually view as our complete application. To use Docker Compose, we need to create a &lt;strong&gt;docker-compose.yml&lt;/strong&gt; file, in addition to the Dockerfile that we already have, and place it in our project’s directory. This file will tie together the rails container we previously defined in the Dockerfile (let’s call it "web") and another container for the database, which we will call "db":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;version: &lt;span class="s1"&gt;'3'&lt;/span&gt;
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: &lt;span class="nb"&gt;.&lt;/span&gt;
    &lt;span class="nb"&gt;command&lt;/span&gt;: bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"rm -f tmp/pids/server.pid &amp;amp;&amp;amp; bundle exec rails s -p 3000 -b '0.0.0.0'"&lt;/span&gt;
    volumes:
      - .:/myapp
    ports:
      - &lt;span class="s2"&gt;"3000:3000"&lt;/span&gt;
    depends_on:
      - db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, for the db part, we just specify the ‘postgres’ image from Docker Hub and then mount the location of our database into the container. For the web part, however, we build the image defined in our Dockerfile, run some commands and mount our application code into the container. Note that if you are using a system that enforces SELinux (such as Red Hat, Fedora or CentOS), then you will need to append a special :z flag to the end of your volume paths.&lt;/p&gt;
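&lt;p&gt;For example, on an SELinux-enforcing host, the web service’s volume entry from the file above would become:&lt;/p&gt;

```yaml
    volumes:
      - .:/myapp:z
```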

&lt;h2&gt;
  
  
  Putting it all together
&lt;/h2&gt;

&lt;p&gt;So now that we have a &lt;strong&gt;Dockerfile&lt;/strong&gt;, &lt;strong&gt;docker-compose.yml&lt;/strong&gt; and &lt;strong&gt;entrypoint.sh&lt;/strong&gt; script in our project’s directory, there are just a few more steps we need to do before we build our image and run the application as a container.&lt;/p&gt;

&lt;p&gt;First of all, it is a good idea to clear out the contents of our &lt;strong&gt;Gemfile.lock&lt;/strong&gt; file before we proceed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;rm &lt;/span&gt;Gemfile.lock
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;Gemfile.lock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to update the database settings, as the credentials for the PostgreSQL image differ from the credentials you would use on your local install. The main difference is that here we specify our ‘db’ container as the host. You can make these changes to the relevant part of your &lt;strong&gt;config/database.yml&lt;/strong&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;default: &amp;amp;default                                                               
  adapter: postgresql                                                           
  encoding: unicode                                                             
  host: db                                                                      
  username: postgres                                                            
  password:   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to build the custom image that we defined in the Dockerfile. We can do that like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have an image for our web application, we need to prepare our database (inside the container).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose run web rake db:create
&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose run web rake db:migrate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running in a container
&lt;/h2&gt;

&lt;p&gt;And that’s it! We’re done! All you need to do now to run your entire application (now and in the future) is this one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you are finished, to shut down correctly and remove your containers, you run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4v6dzxzlf4pax8976pc.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4v6dzxzlf4pax8976pc.gif" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rails</category>
      <category>docker</category>
      <category>ruby</category>
      <category>containers</category>
    </item>
    <item>
      <title>Container Orchestration in 2019</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 06:08:07 +0000</pubDate>
      <link>https://dev.to/scoutapm/container-orchestration-in-2019-5emd</link>
      <guid>https://dev.to/scoutapm/container-orchestration-in-2019-5emd</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/container-orchestration-in-2019?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;How are you deploying your applications in 2019? Are you using containers yet? According to recent research, over 80% of you are. If you are within this group, were you initially sold on the idea of containers but found that in reality, the complexity involved with this approach makes it a difficult trade-off to justify? The community is aware of this and has come up with a remedy to ease the pain, and it’s called container orchestration. So whether you are using containers or not, let’s take a closer look at &lt;em&gt;container orchestration&lt;/em&gt; and find out what you need, what it’s used for and who should be using it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77hftoks3686qndpophx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77hftoks3686qndpophx.png" width="774" height="553"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;©Portworx 2018 Container Adoption Survey&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Container Orchestration?
&lt;/h2&gt;

&lt;p&gt;These days the container world is dominated by Docker. In fact, most people use the terms Docker and container synonymously even though containers have been around a lot longer than Docker itself! We covered &lt;a href="https://scoutapm.com/blog/8-things-you-should-know-about-docker-containers?utm_source=dev.to&amp;amp;utm_medium=referral&amp;amp;utm_campaign=dev.to_blog"&gt;the basics of what Docker is&lt;/a&gt; in a previous blog post, and showed you &lt;a href="https://scoutapm.com/blog/building-docker-containers-for-our-rails-apps" rel="noopener noreferrer"&gt;how to get your Rails apps up and running with Docker&lt;/a&gt; in another post.&lt;/p&gt;

&lt;p&gt;Container orchestration, then, is the process of managing the complete lifecycle of containers in an automated fashion. We’re talking about deployment, provisioning of hosts, scaling, health monitoring, resource sharing, load balancing and so on. All these individual tasks associated with managing containers pile up as your project grows, and before you know it, you’ve got quite a complex problem. Perhaps your application is made up of many different interconnected components which each live inside a container. When you have hundreds of containers and multiple nodes, how do you safely deploy and keep everything working nicely together? That is the challenge that container orchestration solutions are trying to solve.&lt;/p&gt;

&lt;p&gt;The orchestrator sits in the middle of this complex environment and is responsible for dynamically assigning work to the nodes it has available. It automates and streamlines many essential tasks associated with containers so that developers get a seamless experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zc6bvz7xp4d5m82scpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zc6bvz7xp4d5m82scpt.png" width="700" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The real beauty of container orchestration is that by using containers you can take care of your application’s concerns in a flexible way that is specific to your project’s needs, but then you can host them in any environment you want, such as Google Cloud Platform, Amazon Web Services, Microsoft Azure, etc. You don’t have to feel constrained by an all-in-one PaaS solution which doesn’t fit your company’s individual needs. You are essentially creating your own custom platform and then pushing it out to a hosting solution. Pretty interesting, right? So let’s take a look at the most popular container orchestration tools around today and see what they can do for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes vs. Docker Swarm vs. Apache Mesos
&lt;/h2&gt;

&lt;p&gt;The three major container orchestration solutions in 2019 are: the popular open-source Kubernetes platform from Google, Docker’s own system called Docker Swarm, and Mesos by Apache.&lt;/p&gt;

&lt;p&gt;Which one is the best and what is the difference? Well, the short answer is that Kubernetes is probably the one to go with for most scenarios, and its popularity does not seem to be halting any time soon. Docker Swarm, on the other hand, is the easiest to get started with, whereas Apache Mesos offers the most flexibility and is a good choice if you have a complex mix of legacy systems and containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6molunmpkd43adj8bgx2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6molunmpkd43adj8bgx2.png" width="700" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gvye45yapqbnsq5szv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gvye45yapqbnsq5szv5.png" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes is by far the most popular of these three technologies, due to its fantastic feature set, its backing from Google and the fact that it is open-source. But with great power comes great complexity, which means that the learning curve is steep for newcomers.&lt;/p&gt;

&lt;p&gt;Kubernetes builds on top of the Docker ecosystem in a seamless way, so that developers can maintain their Dockerfiles in source control and leave the Kubernetes logistics to the DevOps team to handle. Let’s take a look at some of the terminology that Kubernetes uses, which I think will give you a good sense of the design philosophy.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;node&lt;/strong&gt; is a physical server or a virtual machine which your Kubernetes system can use. A &lt;strong&gt;cluster&lt;/strong&gt;, then, is a collection of these nodes which make up your entire system; these nodes are the resources that Kubernetes will automatically manage as it sees fit.&lt;/p&gt;

&lt;p&gt;One or more Docker containers can be grouped into what is known as a &lt;strong&gt;pod&lt;/strong&gt;, which is managed collectively; in other words, the containers in a pod are started and stopped together as if they were one application.&lt;/p&gt;

&lt;p&gt;Then finally we have &lt;strong&gt;services&lt;/strong&gt;, which are conceptually different from pods and nodes (which can go up and down). Instead, a service represents something customer-facing which should always be running, regardless of which resource Kubernetes is physically providing.&lt;/p&gt;
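&lt;p&gt;To make the terminology concrete, here is a minimal, hypothetical pod manifest; the pod name, labels and image are placeholders, not taken from any real project:&lt;/p&gt;

```yaml
# pod.yml - one pod grouping a single container (all names are placeholders)
apiVersion: v1
kind: Pod
metadata:
  name: rails-app
  labels:
    app: rails-app
spec:
  containers:
    - name: web
      image: my-rails-image:1.0   # placeholder image
      ports:
        - containerPort: 3000
```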

&lt;p&gt;Kubernetes is going from strength to strength in the container world and is clearly the most popular container orchestration tool out there. Its tight integration with popular cloud services and its high user base make it a solid choice and investment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Docker Swarm
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqweskr3b4pjgn00opl1l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqweskr3b4pjgn00opl1l.png" width="512" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the complexity of Kubernetes puts you off, or you feel like you won’t leverage all of its features, then maybe Docker Swarm would be more suitable for your project. It’s conceptually a lot simpler. Docker Swarm is a separate product from the Docker team that builds on top of Docker and provides container orchestration.&lt;/p&gt;

&lt;p&gt;There are some different terms used in Docker Swarm which have similar meanings to Kubernetes terms. For example, the alternative term for Kubernetes’ cluster is a &lt;strong&gt;swarm&lt;/strong&gt;. This is a collection of &lt;strong&gt;nodes&lt;/strong&gt;, which can be broken down into &lt;em&gt;manager nodes&lt;/em&gt; and &lt;em&gt;worker nodes&lt;/em&gt; in Docker Swarm. You can think of &lt;strong&gt;services&lt;/strong&gt; like individual components of your application and &lt;strong&gt;tasks&lt;/strong&gt; as the Docker containers that provide a solution to a given service.&lt;/p&gt;

&lt;p&gt;It is unclear what the future holds for Docker Swarm, as Docker seems to have fully embraced Kubernetes, but this could still be a viable option for you if you don’t require the full feature set of Kubernetes and you are looking to get up and running quicker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Apache Mesos
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwp0ufs3yye9mrsk1sgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwp0ufs3yye9mrsk1sgo.png" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final technology that I want to talk about is Apache Mesos, which is a little bit of a different beast from the other two solutions I’ve presented. It’s been around longer and it wasn’t created strictly to manage Docker containers like Kubernetes and Docker Swarm were. Instead, Mesos can be described more generally as a &lt;strong&gt;cluster management tool&lt;/strong&gt; which manages a cluster of physical (or virtual) nodes and assigns &lt;strong&gt;workloads&lt;/strong&gt; to them. But these underlying workloads that it assigns to nodes can be anything. They are not just limited to Docker containers. Containers are just one example of a workload that Mesos can manage. In fact, to use Docker containers with Apache Mesos, you actually need to use an add-on called &lt;strong&gt;Marathon&lt;/strong&gt;; you don’t get this feature out of the box.&lt;/p&gt;

&lt;p&gt;Mesos is definitely complex but it has some great advantages. It’s much more suitable if you are not completely tied into a containerized environment and perhaps have legacy systems and integrated services that you want to manage together with containers.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
    <item>
      <title>Debugging with Rails Logger</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 05:40:05 +0000</pubDate>
      <link>https://dev.to/scoutapm/debugging-with-rails-logger-1739</link>
      <guid>https://dev.to/scoutapm/debugging-with-rails-logger-1739</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/debugging-with-rails-logger" rel="noopener noreferrer"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you’re a Rails developer, then you’ve probably used Rails Logger on at least one occasion or another. Or maybe you have used it without even realizing, like when you run ‘rails server’ and it prints information to the terminal window, for example. Rails Logger provides us with a powerful way of debugging our applications and gives us an insight into understanding errors when they occur. But are you using all of the Rails Logger features? There’s a good chance you are not! So let’s take a more in-depth look at the logging system in Rails, look at some of its more unknown features, and establish some best practices for our log creation in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Rails Logger?
&lt;/h2&gt;

&lt;p&gt;The Rails Logger is part of the &lt;strong&gt;ActiveSupport&lt;/strong&gt; library, which is a utility library and set of extensions to the core Ruby language. It builds on top of Ruby’s &lt;strong&gt;Logger&lt;/strong&gt; class and provides us with a simple mechanism for logging events that occur throughout our application’s lifetime. Logging enables us to record information about our application during runtime, and store it persistently for later analysis and debugging.&lt;/p&gt;

&lt;p&gt;Rails is configured to create separate log files for each of the three default environments: development, test and production. By default it puts these log files in the &lt;strong&gt;log/&lt;/strong&gt; directory of your project. So if you open up that folder you’ll see a &lt;strong&gt;development.log&lt;/strong&gt; and &lt;strong&gt;test.log&lt;/strong&gt; file, and perhaps a &lt;strong&gt;production.log&lt;/strong&gt; file too, depending on how your project is set up.&lt;/p&gt;

&lt;p&gt;Using the Rails Logger is as simple as adding a line of code like the one shown below to one of your Controllers, Models or Mailers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt; &lt;span class="s2"&gt;"User created: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="vi"&gt;@user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inspect&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we use the &lt;strong&gt;debug&lt;/strong&gt; method of the globally accessible &lt;strong&gt;logger&lt;/strong&gt; to write a message with an object’s details to the log. In our development environment, this prints our log message to the terminal as well as to the &lt;strong&gt;development.log&lt;/strong&gt; file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgweyjf2tjakqtyrxh5wq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgweyjf2tjakqtyrxh5wq.png" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The debug level which we specified is just one of six possible levels that we can use, and there is a corresponding method to call for each of these levels. Having a level system for our logs allows us to group logs together and then selectively choose which levels get reported and which do not. For example, we might not pay much attention to low-level debugging information in our production environment; instead, we would want to hear about errors and warnings that are occurring. Let’s take a look at each of these levels and see what they can be used for:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhiig7059i6crk18vagmo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhiig7059i6crk18vagmo.png" width="700" height="345"&gt;&lt;/a&gt;&lt;/p&gt;
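&lt;p&gt;As a quick sketch of the filtering behaviour, using Ruby’s standard Logger (which Rails Logger builds on), messages below the configured level are simply dropped:&lt;/p&gt;

```ruby
require "logger"
require "stringio"

# Collect log output in memory so we can inspect it
buffer = StringIO.new
logger = Logger.new(buffer)

# Report only :warn and above; :debug and :info are suppressed
logger.level = Logger::WARN

logger.debug "low-level detail"   # filtered out
logger.warn  "disk space low"     # logged
logger.error "payment failed"     # logged

puts buffer.string
```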

&lt;h2&gt;
  
  
  Customizing our logs
&lt;/h2&gt;

&lt;p&gt;Perhaps that’s as far as most Rails developers go with Rails Logger; just logging the occasional debug message while they develop. But there is much more that we can do with this powerful tool. Let’s take a look at how we would customize the logging system to work differently from its default settings.&lt;/p&gt;

&lt;p&gt;We can customize our logger settings either collectively in the main &lt;strong&gt;application.rb&lt;/strong&gt; file, or individually for each environment in each of the &lt;strong&gt;config/development.rb&lt;/strong&gt;, &lt;strong&gt;config/test.rb&lt;/strong&gt; and &lt;strong&gt;config/production.rb&lt;/strong&gt; files. Here we can do things like change the logging level that gets reported, define different locations for our logs, or even write to our logs in different formats that we can define ourselves.&lt;/p&gt;

&lt;h3&gt;
  
  
  Changing the log level
&lt;/h3&gt;

&lt;p&gt;If we wanted to prevent developer-specific log messages from filling up our logs in our production environment, we could instead set the logging level to &lt;strong&gt;:error&lt;/strong&gt;. This would report only log messages at the error level and above; in other words, just error and fatal messages would be reported.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/environments/production.rb&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log_level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="ss"&gt;:error&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Outside of these environment initializer files, you can also just temporarily change the log level dynamically, anywhere in your code, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# From anywhere, you can specify a value from 0 to 5&lt;/span&gt;
&lt;span class="no"&gt;Rails&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;level&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This might be useful if you want to turn on a verbose-level of logging around a certain task, and then quickly turn it off again.&lt;/p&gt;
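&lt;p&gt;Those integers come from Ruby’s underlying Logger severity constants, which you can use instead of bare numbers for readability:&lt;/p&gt;

```ruby
require "logger"

# The six severity constants that the 0-5 integers map to
puts Logger::DEBUG    # 0
puts Logger::INFO     # 1
puts Logger::WARN     # 2
puts Logger::ERROR    # 3
puts Logger::FATAL    # 4
puts Logger::UNKNOWN  # 5

# So, for example, a dynamic level change could be written as:
# Rails.logger.level = Logger::ERROR
```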

&lt;h3&gt;
  
  
  Changing the log location
&lt;/h3&gt;

&lt;p&gt;To change the location of where our log file gets saved to somewhere other than &lt;strong&gt;log/&lt;/strong&gt;, we can define a new logger and specify our own path like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/environments/development.rb&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"/path/to/file.log"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Changing the log format
&lt;/h3&gt;

&lt;p&gt;For even more flexibility, we can take this a step further by overriding the Rails’ formatter class with our own custom class. This allows us to fully define how our logs look, and if we wanted to, we could also add our own complex logic to determine what gets logged and how it gets logged.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CustomFormatter&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ActiveSupport&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;SimpleFormatter&lt;/span&gt;                  
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;severity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;progname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;                                       
    &lt;span class="s2"&gt;"[Level] &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;severity&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;                                                  
    &lt;span class="s2"&gt;"[Time] &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;                                                       
    &lt;span class="s2"&gt;"[Message] &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; &lt;/span&gt;&lt;span class="se"&gt;\n\n\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;                                                   
  &lt;span class="k"&gt;end&lt;/span&gt;                                                                           
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After putting that file in our &lt;strong&gt;lib/&lt;/strong&gt; folder, we can tell Rails Logger to use it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/environments/development.rb&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log_formatter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;CustomFormatter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, even this small, simple change makes the log much more readable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp818a1xzr7jcbvce0sgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp818a1xzr7jcbvce0sgk.png" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another good use case for writing your own formatting class might be if you wanted to output the logs in JSON format so that you can integrate your logs with other systems.&lt;/p&gt;
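&lt;p&gt;A minimal sketch of such a JSON formatter, written here against Ruby’s standard Logger::Formatter so it runs standalone; in a Rails app you would subclass ActiveSupport::Logger::SimpleFormatter instead, as shown earlier, since the call signature is the same:&lt;/p&gt;

```ruby
require "json"
require "logger"
require "stringio"
require "time"

# Emit each log entry as a single JSON object per line
class JsonFormatter < Logger::Formatter
  def call(severity, time, progname, msg)
    JSON.generate(level: severity, time: time.utc.iso8601, message: msg) + "\n"
  end
end

buffer = StringIO.new          # stand-in for a log file
logger = Logger.new(buffer)
logger.formatter = JsonFormatter.new

logger.info "User created"
puts buffer.string
```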

&lt;h3&gt;
  
  
  Tagged Logging
&lt;/h3&gt;

&lt;p&gt;Another powerful feature of Rails Logger is the ability to tag our log entries. This could be really useful if your application runs across multiple subdomains, for example. In this scenario, by adding a subdomain tag you would be able to clearly separate log entries for the different subdomains you are using. Another example is that you could add a request ID tag; this would be very useful when debugging so that you could isolate all log entries for a given request.&lt;/p&gt;

&lt;p&gt;To enable tagged logging, you need to create a new logger of type &lt;strong&gt;TaggedLogging&lt;/strong&gt; and assign it to &lt;strong&gt;config.logger&lt;/strong&gt; in your config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/environments/development.rb&lt;/span&gt;
&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logger&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ActiveSupport&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;TaggedLogging&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;Logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;STDOUT&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you’ve done this you can put your logs in blocks of code following a call to the &lt;strong&gt;tagged&lt;/strong&gt; method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# This will log: [my-subdomain] User created: ...&lt;/span&gt;
&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tagged&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"my-subdomain"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;debug&lt;/span&gt; &lt;span class="s2"&gt;"User created: &lt;/span&gt;&lt;span class="si"&gt;#{&lt;/span&gt;&lt;span class="vi"&gt;@user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;inspect&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; }"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;Rails Logger is a very flexible tool that should be a part of every Rails developer’s development process, and as we’ve seen, by making just a couple of small tweaks we can be even more productive and efficient in our debugging. Hopefully we’ve given you some good ideas about how you can start to do that.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>ruby</category>
      <category>logger</category>
      <category>debugging</category>
    </item>
    <item>
      <title>Understanding Heroku Error Codes with Scout APM</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 03:58:21 +0000</pubDate>
      <link>https://dev.to/scoutapm/understanding-heroku-error-codes-with-scout-apm-306c</link>
      <guid>https://dev.to/scoutapm/understanding-heroku-error-codes-with-scout-apm-306c</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/understanding-heroku-error-codes-with-scout-apm" rel="noopener noreferrer"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you are hosting your application with Heroku and find yourself faced with an unexplained error in your live system, what would you do next? Perhaps you don’t have a dedicated DevOps team, so where would you start your investigation? With Scout APM, of course! We are going to show you how you can use Scout to find out exactly where the problem lies within your application code. We are going to walk through two of the most common Heroku error codes and show you how to diagnose the problem with Scout quickly and efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Heroku’s Error Logging System
&lt;/h2&gt;

&lt;p&gt;First of all, let’s take a look at how a typical error occurs for Heroku users, and then we will look at how we can use Scout to debug the problem. Any errors that occur in Heroku result in an &lt;strong&gt;Application error page&lt;/strong&gt; being displayed to the user with an HTTP 503 status code indicating “service unavailable”. You have probably seen this page many times before if you are a frequent Heroku user.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kt4gdy995ypywwv2h71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kt4gdy995ypywwv2h71.png" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a developer, this doesn’t give us much to go on, unfortunately. But we can see the specifics of the error by digging into Heroku’s logging system. We can view these logs on the &lt;strong&gt;Activity&lt;/strong&gt; tab of the &lt;strong&gt;Heroku Dashboard&lt;/strong&gt; or by using the Heroku CLI application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;heroku logs &lt;span class="nt"&gt;--tail&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Heroku uses the letters &lt;strong&gt;H&lt;/strong&gt;, &lt;strong&gt;R&lt;/strong&gt; and &lt;strong&gt;L&lt;/strong&gt; in its error codes to differentiate between &lt;strong&gt;HTTP&lt;/strong&gt; errors, &lt;strong&gt;runtime&lt;/strong&gt; errors, and &lt;strong&gt;logging&lt;/strong&gt; errors respectively. A full list of all the different types of errors that Heroku reports can be found &lt;a href="https://devcenter.heroku.com/articles/error-codes" rel="noopener noreferrer"&gt;here&lt;/a&gt;. As you can see from the screenshot below, the error being shown by Heroku in this case is an H12 error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgjlkl5u19kbrcbyel7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgjlkl5u19kbrcbyel7u.png" width="800" height="58"&gt;&lt;/a&gt;&lt;/p&gt;
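&lt;p&gt;If you are chasing a specific code, you can filter the stream directly from the CLI. The router log line below is illustrative, but the &lt;code&gt;code=&lt;/code&gt; field is how Heroku tags these errors:&lt;/p&gt;

```shell
# In production you would stream live logs and filter them:
#   heroku logs --tail | grep "code=H"
# The same filter, shown against a sample router log line:
printf 'at=error code=H12 desc="Request timeout"\nat=info method=GET\n' | grep "code=H"
```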

&lt;h2&gt;
  
  
  From Error Code to Solution, with Scout
&lt;/h2&gt;

&lt;p&gt;So now that we’ve checked the logs and found out what type of error occurred, we can cross-reference this with Heroku’s documentation and just fix it, right? Well, not really. You see, these error descriptions are a little vague, don’t you think? They clearly tell us what happened, but they don’t really tell us why it happened. This is the job of an Application Performance Monitoring (APM) tool like Scout. When you need to know why and where, Scout should be your go-to tool. &lt;/p&gt;

&lt;p&gt;Fortunately, the Heroku logs do tell us when the error occurred and what type of error occurred, so now let's jump into Scout APM and investigate what happened at the time of the error, using two common example scenarios.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1 - H12: Request timeout
&lt;/h3&gt;

&lt;p&gt;Heroku throws this error when a request takes longer than 30 seconds to complete. In Scout, any request that takes longer than 30 seconds to complete should show up very clearly on the main overview page as a spike in the chart, because this type of response time would be dramatically different from your usual traffic. So that would be the first place to start your investigation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy15fw8a3u01eam7vetbx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy15fw8a3u01eam7vetbx.png" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;
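&lt;p&gt;While you are investigating, it can also help to fail fast inside the app itself, so a slow request raises a catchable exception in your own logs rather than only a bare H12 at the router. A minimal sketch using the &lt;code&gt;rack-timeout&lt;/code&gt; gem (the 25-second value is an illustrative choice; check the gem’s README for the current configuration API):&lt;/p&gt;

```ruby
# Gemfile: gem "rack-timeout"
# config/initializers/rack_timeout.rb
# Time out just under Heroku's 30-second router limit so the failure
# surfaces as a Rack::Timeout::RequestTimeoutError in your own logs.
Rack::Timeout.service_timeout = 25
```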

&lt;p&gt;Once you have found the relevant spike on the chart, drag and drop a box around it, and you will be presented with a handy list of the endpoints involved in that spike. Pick the endpoint you think is the culprit and then pick a trace on that endpoint which occurred at the &lt;strong&gt;same time&lt;/strong&gt; as the error.&lt;/p&gt;

&lt;p&gt;At this point, we can examine the trace and see a breakdown of where time or memory was spent, organized by layer. For example, we might see that a large portion of time was spent on a database query that originated from a line of code in one of our models, and if that is the case then we can see a backtrace of that query and the exact line of application code. &lt;/p&gt;

&lt;p&gt;But in this specific example below, we can see that a large proportion of the time is actually being spent in the Edit action of the Articles controller. In fact, we can see that it took 40 seconds to process this controller action, and this is the reason that Heroku timed out with an H12 error code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqt7w559yyh3u0ouyffz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqt7w559yyh3u0ouyffz.png" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can debug this particular example even further &lt;a href="https://github.com/scoutapp/scout_apm_ruby/pull/247" rel="noopener noreferrer"&gt;by enabling the AutoInstrument feature&lt;/a&gt;, which further &lt;strong&gt;breaks down uninstrumented controller time&lt;/strong&gt;, line-by-line. Now we can see an exact line of code within our controller and a backtrace which shows where our problem lies. Here we can see that the reason that this application ran for 40 seconds was because of a call to sleep(40) on line 22 of the edit action. Now this line of code was obviously put there just for demonstrative purposes, but it gives you an idea of the level of detail that you can get to when given very little information to go on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcpc0qlos96s9rhohskw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcpc0qlos96s9rhohskw.png" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;
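&lt;p&gt;For reference, the demo code behind this trace is nothing more exotic than a handler that sleeps past the router limit. A self-contained stand-in (the names here are illustrative, not the actual demo app):&lt;/p&gt;

```ruby
# Any handler that blocks longer than Heroku's 30-second router
# timeout will produce an H12 every time it is called.
def slow_edit(seconds: 40)
  sleep(seconds) # stands in for genuinely slow work, like the demo's sleep(40)
  "rendered edit page"
end
```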

&lt;h4&gt;
  
  
  Other ideas for H12 errors
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Have you ever experienced somebody telling you that they have had a timeout error on an endpoint but after accessing the endpoint yourself, everything looks fine? Perhaps you keep seeing this error message intermittently, at certain times of the day or only from certain users, but no matter what you do, you just can’t recreate the problem yourself. Maybe Scout’s &lt;strong&gt;traces by context&lt;/strong&gt; can help you diagnose the problem.&lt;/li&gt;
&lt;li&gt;The Trace index page allows you to view traces by different criteria, and even by custom context criteria that you can define yourself. For example, you could view all the traces associated with a particular user, or maybe even all users on large pricing plans. Once you have found the offending trace, you can follow similar steps to what we described earlier to get to the root cause.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example 2 - R14: Memory quota exceeded
&lt;/h3&gt;

&lt;p&gt;Probably the most common Heroku error code you will come across is the &lt;strong&gt;R14 “Memory quota exceeded”&lt;/strong&gt; error. This error (and its older brother, &lt;strong&gt;R15 “Memory quota vastly exceeded”&lt;/strong&gt;) occurs when your application exceeds its server’s allocated amount of memory. The amount of memory your server is allocated depends on which Heroku plan you are on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6n7mrjtm003b2r4411l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6n7mrjtm003b2r4411l.png" width="800" height="45"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When your application runs out of memory and the R14 error occurs, the server will start paging to swap space, and you will see a performance slowdown across your entire application. If memory usage continues to increase and reaches 2x your quota, then an R15 will occur and your server will be shut down.&lt;/p&gt;

&lt;p&gt;This is a great example of an error message that you might come across but are unable to debug without an APM tool. But fear not because &lt;strong&gt;analysing memory bloat&lt;/strong&gt; is an area where Scout really shines in comparison to its competitors.&lt;/p&gt;

&lt;p&gt;The first place to look for memory anomalies is the main overview page of Scout, where you can choose to show memory as a sparkline on the graph. In our example shown below, you can see how the memory usage shot upwards rapidly at a certain point in time. Furthermore, our &lt;strong&gt;Memory Bloat Insights&lt;/strong&gt; feature (towards the bottom of the screenshot) identified ArticlesController#index as a potential issue to investigate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8yha7qxea4p2eqdp6no.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8yha7qxea4p2eqdp6no.png" width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we take a look at the trace that occurred at this time (shown below), we can clearly see which layer is using all this memory (the &lt;strong&gt;View&lt;/strong&gt; layer). We can also determine from the backtrace that the cause is a query with no filter or pagination, so all records are being loaded onto a single page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy6lnk0zs1mitrlrwcr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy6lnk0zs1mitrlrwcr6.png" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;
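&lt;p&gt;The shape of the fix is framework-agnostic: bound how many records a single page can render. A plain-Ruby sketch of the idea (the method and parameter names are illustrative; in Rails you would more likely reach for &lt;code&gt;limit&lt;/code&gt;/&lt;code&gt;offset&lt;/code&gt; or a pagination gem):&lt;/p&gt;

```ruby
# Return one page of records instead of the whole collection,
# keeping per-request memory roughly constant as the table grows.
def paginate(records, page: 1, per_page: 50)
  offset = (page - 1) * per_page
  records[offset, per_page] || [] # nil when offset is past the end
end
```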

&lt;h4&gt;
  
  
  Other ideas for R14 errors
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;A great example of a hard-to-track-down memory issue is when only large users are generating R14 errors. In this case, viewing by context on the Traces page will help you find the issue.&lt;/li&gt;
&lt;li&gt;Looking at the Web Endpoints index page, you can filter by Max Allocations to find endpoints using a lot of memory. This gives you a different approach to finding endpoints to investigate rather than using the main overview page’s timeline.&lt;/li&gt;
&lt;li&gt;If you want to read more about the causes of memory bloat, and how you can use Scout to fix memory bloat issues, &lt;a href="https://book.scoutapm.com/memory-bloat.html" rel="noopener noreferrer"&gt;take a look at this article&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;Are you using Heroku and coming across a different sort of Heroku error that we didn’t cover here? Contact us today at &lt;a href="mailto:support@scoutapm.com"&gt;support&lt;/a&gt; and we’ll show you how to debug the problem with Scout APM!&lt;/p&gt;

&lt;p&gt;Heroku is a fantastic platform for hosting your applications; that’s why it is so popular amongst developers. But it is also clear to see how useful an APM tool is, both for debugging critical production issues when they arise and for monitoring overall application performance and health. So definitely &lt;a href="https://scoutapm.com/users/sign_up" rel="noopener noreferrer"&gt;sign up for a free trial today&lt;/a&gt; if you are not currently using Scout!&lt;/p&gt;

</description>
      <category>heroku</category>
      <category>apm</category>
      <category>debugging</category>
      <category>rails</category>
    </item>
    <item>
      <title>2019 PHP Monitoring Options</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 03:30:26 +0000</pubDate>
      <link>https://dev.to/scoutapm/2019-php-monitoring-options-2c76</link>
      <guid>https://dev.to/scoutapm/2019-php-monitoring-options-2c76</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/2019-php-monitoring-options" rel="noopener noreferrer"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There is no denying the popularity of PHP. It has been a constant force in the web development world since its release way back in 1995. And now in 2019, thanks to Laravel, it is still going as strong as ever! Here at Scout, we have recently been working hard on providing a &lt;strong&gt;PHP performance monitoring agent&lt;/strong&gt; to sit alongside our existing &lt;a href="https://docs.scoutapm.com/#ruby-agent" rel="noopener noreferrer"&gt;ruby&lt;/a&gt;, &lt;a href="https://docs.scoutapm.com/#python-agent" rel="noopener noreferrer"&gt;python&lt;/a&gt; and &lt;a href="https://docs.scoutapm.com/#elixir-agent" rel="noopener noreferrer"&gt;elixir&lt;/a&gt; agents. Before we release this PHP agent, let’s take a look at the PHP ecosystem to see how Scout can complement the existing monitoring landscape.&lt;/p&gt;

&lt;h2&gt;
  
  
  Categorizing Monitoring Tools
&lt;/h2&gt;

&lt;p&gt;There are many different ways to address the challenge of how to successfully monitor your PHP applications. You can approach this task from a multitude of different perspectives, depending on what it is you want to see. But we can roughly divide these approaches into two broad categories: &lt;em&gt;blackbox monitoring&lt;/em&gt; and &lt;em&gt;whitebox monitoring&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjbahfzzqk8beje8baxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkjbahfzzqk8beje8baxp.png" width="700" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blackbox monitoring&lt;/strong&gt; is concerned with monitoring a system from the outside; from a user-oriented perspective if you will. It is called blackbox because this type of monitoring has no information about the underlying system, just what is exposed to the user on the outside. Examples of blackbox monitoring include uptime monitoring or polling.&lt;/p&gt;

&lt;p&gt;In contrast to this approach, &lt;strong&gt;whitebox monitoring&lt;/strong&gt; describes approaches that aim to monitor a system from the inside, with privileged information. This can be achieved with logging, metrics and tracing etc. Typically these approaches come with an overhead but they allow us to glimpse inside a system, to the heart of the problem.&lt;/p&gt;

&lt;p&gt;Let’s take a closer look at the four most popular monitoring techniques in the PHP ecosystem: uptime monitoring, logging, metrics and tracing.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Uptime Monitoring
&lt;/h2&gt;

&lt;p&gt;Perhaps the simplest type of monitoring that we can do is &lt;strong&gt;uptime monitoring&lt;/strong&gt;. This blackbox approach allows us to monitor the uptime of our websites and to receive alerts the instant a disruption occurs. Some of the most popular uptime monitoring services around in 2019 are &lt;a href="https://www.pingdom.com/product/uptime-monitoring" rel="noopener noreferrer"&gt;Pingdom&lt;/a&gt;, &lt;a href="https://uptimerobot.com/" rel="noopener noreferrer"&gt;UptimeRobot&lt;/a&gt; and &lt;a href="https://www.site24x7.com/" rel="noopener noreferrer"&gt;Site24x7&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Uptime monitoring services are very useful and you can think of them as being on the frontline, so to speak. In this sense, they will alert you that a problem occurred, but in order to diagnose that problem, you will probably need a different type of monitoring solution which provides more specific information.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Logging
&lt;/h2&gt;

&lt;p&gt;For more information, the next logical step from uptime monitoring, is &lt;strong&gt;logging&lt;/strong&gt;. As developers we are constantly logging information in our applications, either directly or indirectly by the framework we are using. These logs report everything from routine status information to exceptions and error reports. This means that when something does go wrong, there is a wealth of privileged information available with which we can investigate. As we mentioned earlier, this is known as whitebox monitoring.&lt;/p&gt;

&lt;p&gt;Logging to files works fine initially, but as your application grows, the need for a log monitoring service becomes apparent. For example, when an error occurs in your live system, you want to be notified about that and then immediately be able to see the stack trace so that the debugging process can start right away. &lt;/p&gt;

&lt;p&gt;There are many great services in the PHP space for this task; here are some of our favourites:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rollbar.com/" rel="noopener noreferrer"&gt;Rollbar&lt;/a&gt; (integrates with Scout)&lt;br&gt;
&lt;a href="https://sentry.io/" rel="noopener noreferrer"&gt;Sentry&lt;/a&gt; (integrates with Scout)&lt;br&gt;
&lt;a href="https://www.bugsnag.com/" rel="noopener noreferrer"&gt;Bugsnag&lt;/a&gt; (integrates with Scout)&lt;br&gt;
&lt;a href="https://www.honeybadger.io/" rel="noopener noreferrer"&gt;Honeybadger&lt;/a&gt; (integrates with Scout)&lt;/p&gt;

&lt;p&gt;Error logging systems have another great advantage in that they can also be integrated very easily into APMs like Scout for smarter, more efficient monitoring. In the image below, you can see how the errors logged from Sentry show up in a handy list on the main overview page of Scout.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwf4ker2k9xv6wvbcqhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwf4ker2k9xv6wvbcqhm.png" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Metrics
&lt;/h2&gt;

&lt;p&gt;Logging can be really great for reporting detailed information, which can be easily filtered and used for diagnosing problems. But what if you need a higher level of information? In other words, what if you need to zoom out and see the bigger picture of your system’s performance? That’s where &lt;strong&gt;metrics&lt;/strong&gt; come in.&lt;/p&gt;

&lt;p&gt;Metrics are a measurement of some variable over a period of time (called a time series), rather than a single snapshot of an occurrence like a log. Metrics complement logs, and when used together they offer a powerful monitoring solution. Three of the most popular services that we hear about often are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://graphiteapp.org/" rel="noopener noreferrer"&gt;Graphite&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.influxdata.com/" rel="noopener noreferrer"&gt;InfluxDB&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.hostedgraphite.com/" rel="noopener noreferrer"&gt;Hosted Graphite&lt;/a&gt; is an alternative to ‘standard Graphite’ (shown below), and it’s a service that we’ve heard some great things about recently, as it seems to be gaining a lot of popularity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbnxedna4fvas6acr1c5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbnxedna4fvas6acr1c5.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These solutions are often paired together and visualized in tools like &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;. If you are interested in learning more about Prometheus and Grafana, then checkout Erik’s fantastic blog post about &lt;a href="https://scoutapm.com/blog/prometheus-and-docker-monitoring-your-environment" rel="noopener noreferrer"&gt;how to set up Prometheus with Grafana inside a Docker container&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Tracing
&lt;/h2&gt;

&lt;p&gt;So far we have talked about various types of monitoring solutions which allow you to be notified if a service is down, present error logs for diagnosing problems, and show you useful charts about your system’s performance. But how about the scenario where there is no error in your system and the services are all up and running, yet you are getting indications that the performance of your application in certain areas is not what you had hoped? That’s where the final piece of the monitoring puzzle comes in: &lt;strong&gt;tracing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq802n3t67e9j01rq5azt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq802n3t67e9j01rq5azt.png" width="800" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tracing allows you to see a complete picture of the lifecycle of a single PHP Laravel request from start to finish, enabling you to see how much memory is being used at each layer of the request and exactly where time is being spent. These traces can then be aggregated together to form part of a high-level metric which allows you to view the performance and health of individual endpoints in your system. This powerful snapshot of an application's performance over time is the job of an &lt;strong&gt;Application Performance Monitoring&lt;/strong&gt; tool, or APM, such as Scout.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;PHP developers rejoice! It’s no longer just the Ruby, Python and Elixir developers who get to have all the fun, you too will soon be able to use your favourite Application Performance Monitoring (APM) tool, Scout, to monitor your PHP and Laravel applications!&lt;/p&gt;

&lt;p&gt;If you are interested in using Scout to monitor your PHP applications, then please get in touch today to &lt;a href="mailto:support@scoutapm.com"&gt;register your interest&lt;/a&gt;. We would love to hear from you about your requirements and help you get started. If you are not currently a Scout customer, then make sure you are ready by &lt;a href="https://scoutapm.com/users/sign_up" rel="noopener noreferrer"&gt;signing up for a free trial today&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>php</category>
      <category>monitoring</category>
      <category>apm</category>
      <category>laravel</category>
    </item>
    <item>
      <title>Continuous Deployment Tools</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 03:07:12 +0000</pubDate>
      <link>https://dev.to/scoutapm/continuous-deployment-tools-10mn</link>
      <guid>https://dev.to/scoutapm/continuous-deployment-tools-10mn</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/continuous-deployment-tools" rel="noopener noreferrer"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Software development has changed rapidly over the last ten years. Many companies have moved away from the traditional waterfall development model to an agile methodology, and this has meant embracing continuous integration and continuous delivery practices. But how about taking it one step further with continuous deployment? Are you deploying to production automatically, without any human intervention? Some of the major products we rely on every day are. We take a look at some of the best continuous deployment tools and put them head-to-head. &lt;/p&gt;

&lt;h2&gt;
  
  
  CI, CD &amp;amp; CD: What’s the difference?
&lt;/h2&gt;

&lt;p&gt;Before we get started with discussing continuous deployment tools, let’s take a moment to clear up any confusion that you might have in regards to these three similar and related acronyms: CI, CD and, well, CD again!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous integration (CI)&lt;/strong&gt; encourages all developers to frequently check their code changes into a single master branch. It aims to avoid the challenges that large teams face when trying to integrate code across multiple versions and branches. The idea is that when a developer opens a pull request, an automated system triggers a build process that runs all the system’s tests and then deploys a production-like system for inspection. These systems are kept as similar to production as possible, so developers avoid wasting time in an ‘integration hell’ situation caused by trying to merge too much code at once.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuous delivery (CD)&lt;/strong&gt; is a software development methodology born out of the agile software development movement. The aim of continuous delivery is that your software is always in a deployable state, ready to be released at any given moment. Small, simple releases are pushed out to the user rapidly, feedback on these changes is received, and then the cycle repeats. This is in stark contrast to the traditional waterfall model of software development where the development flows from one team to the next in a slow, gradual fashion. With continuous delivery, automation is used to streamline this process, but deployment of the production system is done &lt;strong&gt;manually&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyctu1ihlx66lrg0ilx3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyctu1ihlx66lrg0ilx3y.png" width="700" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;continuous deployment (also CD!!)&lt;/strong&gt; takes the process one step further by &lt;strong&gt;automatically&lt;/strong&gt; deploying to production when a certain event, such as a merge to the master branch, occurs. This all happens without any human interaction. Many teams use continuous delivery and continuous integration but fall short of adopting continuous deployment, instead preferring to push the deploy button manually. That too is changing, though, as these days many companies go that extra step and use their customers as the testers for their newest features by adopting continuous deployment in their workflow!&lt;/p&gt;

&lt;p&gt;Let’s round-up the most popular continuous deployment tools around today and see what they can offer you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins
&lt;/h2&gt;

&lt;p&gt;Let’s start with the most popular CI/CD tool on the market, Jenkins. Jenkins styles itself as an open-source continuous integration server with over 1000 plugins which allow it to be tailored to many different use cases. For example, if you wanted to embrace the continuous deployment methodology, then one popular route is the Blue Ocean plugin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12wfun93eprvm8vn7lpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12wfun93eprvm8vn7lpt.png" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Being free and open-source, Jenkins is a great choice if you are concerned about the cost of some of the other solutions; plus it’s very widely used and so there are a plethora of plugins and documentation available.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bamboo (Atlassian)
&lt;/h2&gt;

&lt;p&gt;Bamboo is a continuous integration and continuous deployment server solution from Atlassian, the company behind Bitbucket (source control management) and Jira (issue tracking). If you are already using Bitbucket and Jira, then the integration with Bamboo would be very convenient. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5jiv9jv8c9fxdl41455.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5jiv9jv8c9fxdl41455.png" width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Prices start from $10 per month for a basic plan, but the cheapest professional-grade plan starts at $1,100 per month and scales upwards depending on how many build agents you require.&lt;/p&gt;

&lt;h2&gt;
  
  
  TeamCity (Jetbrains)
&lt;/h2&gt;

&lt;p&gt;TeamCity is the continuous integration and continuous deployment solution offered by JetBrains, the popular IDE developer. It has a slick and easy-to-use interface and is packed full of useful features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqxckgi7phj8yvl5cyom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbqxckgi7phj8yvl5cyom.png" width="600" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A free “Professional Server Licence” including 3 build agents might be sufficient for small teams. After that, enterprise licences start from $1,999. &lt;/p&gt;

&lt;h2&gt;
  
  
  Octopus Deploy
&lt;/h2&gt;

&lt;p&gt;Unlike most of the competitors in this space, Octopus Deploy focuses on just the continuous deployment side of things, leaving the continuous integration aspect to other software solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxckv3ws0ysh74rtezaah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxckv3ws0ysh74rtezaah.png" width="418" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Traditionally Octopus Deploy supported just the .NET world, but it has since expanded and now supports a fair number of other languages. The cheapest cloud plan starts from $45.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab
&lt;/h2&gt;

&lt;p&gt;GitLab offers a complete solution for continuous integration, delivery and deployment, all inside a single interface that integrates with their Git source control system. It’s an impressive array of tools and, similar to Atlassian’s Bamboo, if you are already using GitLab then the all-in-one integration would be a strong deciding factor here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F010thgymw3u6a1gahs2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F010thgymw3u6a1gahs2s.png" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The price point for GitLab is also very reasonable compared to the competition. You can choose to self-host or use GitLab’s SaaS, with prices ranging from free up to just $99.&lt;/p&gt;

&lt;h2&gt;
  
  
  DeployBot
&lt;/h2&gt;

&lt;p&gt;DeployBot feels like a much simpler and easier solution than many of the other options we’ve mentioned here. All you need to do is sign up for an account, hook up your source code repository and hosting solution, and then you are ready to go with manual or automatic deploys.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4brpdzbsgmnb27rj12v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4brpdzbsgmnb27rj12v2.png" width="574" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And whilst we are talking about simplicity, the pricing is simple too! There is a Free, Basic ($15), Plus ($25) and Premium ($50) plan to cover a wide range of needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As you can see, there are a lot of options available for continuous deployment in 2019. Furthermore, these options often overlap with continuous integration options, making for a very confusing landscape. What works best for you will depend on a number of factors, such as cost, features, and how it fits in with the other software in your stack. We’ve summarized the key points for you in the table below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn762qcy4isrh0e8epcu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn762qcy4isrh0e8epcu1.png" width="700" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cd</category>
      <category>ci</category>
      <category>continuousdeployment</category>
    </item>
    <item>
      <title>What’s new in Rails 6?</title>
      <dc:creator>Matthew Chigira</dc:creator>
      <pubDate>Wed, 17 Jul 2019 02:38:44 +0000</pubDate>
      <link>https://dev.to/scoutapm/what-s-new-in-rails-6-5b5f</link>
      <guid>https://dev.to/scoutapm/what-s-new-in-rails-6-5b5f</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/whats-new-in-rails-6" rel="noopener noreferrer"&gt;Scout blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the official release of Rails 6 just around the corner, we round up all the major new features coming your way. It is an exciting release due to some big features coming upstream from the Basecamp and GitHub projects. Amongst the many minor updates, useful tweaks and bug fixes, Rails 6 will ship with two completely new frameworks: &lt;strong&gt;ActionText&lt;/strong&gt; and &lt;strong&gt;ActionMailbox&lt;/strong&gt;, and two big scalable-by-default features: &lt;strong&gt;parallel testing&lt;/strong&gt; and &lt;strong&gt;multiple database support&lt;/strong&gt;. So set your Gemfile to get Rails 6.0.0.rc1 and let’s get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiple Database Support
&lt;/h2&gt;

&lt;p&gt;Rails now &lt;strong&gt;supports switching between multiple databases&lt;/strong&gt;. This is a very small change to the codebase, but it’s a feature that many developers are going to find really useful. The contribution came upstream from the GitHub developers, and provides an API for easily switching between multiple databases.&lt;/p&gt;

&lt;p&gt;Possible use cases for this new feature include having a read-only version of your database to use in areas known for having slow queries, or writing to different databases depending on which controller is executing.&lt;/p&gt;

&lt;p&gt;First of all, you need to add the extra databases to your &lt;strong&gt;database.yml&lt;/strong&gt; config file like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="ss"&gt;development:
  main:
    &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;defaults&lt;/span&gt;
    &lt;span class="ss"&gt;database: &lt;/span&gt;&lt;span class="n"&gt;main_db&lt;/span&gt;
  &lt;span class="ss"&gt;slow_queries:
    &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;defaults&lt;/span&gt;
    &lt;span class="ss"&gt;database: &lt;/span&gt;&lt;span class="n"&gt;readonly_main_db&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then specify at the model-level which database(s) you want to use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ApplicationRecord&lt;/span&gt;
  &lt;span class="n"&gt;connects_to&lt;/span&gt; &lt;span class="ss"&gt;database: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="ss"&gt;writing: &lt;/span&gt;&lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;reading: &lt;/span&gt;&lt;span class="n"&gt;slow_queries&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then it’s just one line of code to temporarily switch between databases inside a block!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connected_to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;role: :reading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="c1"&gt;# Do something that you know will take a long time&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ActionText
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ActionText&lt;/strong&gt; is one of two brand-new frameworks extracted out of Basecamp and coming to Rails 6 (the other being ActionMailbox). It brings &lt;strong&gt;rich-text support&lt;/strong&gt; to your Rails applications, all made possible with the very stylish &lt;strong&gt;Trix&lt;/strong&gt; editor. All you have to do is add a line of code to your model (&lt;code&gt;has_rich_text :column_name&lt;/code&gt;) and then you can use the &lt;code&gt;rich_text_area&lt;/code&gt; field in your view.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27vtw3wri1eji9ei8e5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27vtw3wri1eji9ei8e5z.png" width="800" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Trix editor will capture rich-text information (such as bold text, headings, images etc.) and save this data into your desired storage solution, along with saving associated metadata into some new tables (note that this means that ActionText requires you to be using ActiveStorage).&lt;/p&gt;

&lt;h2&gt;
  
  
  ActionMailbox
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ActionMailbox&lt;/strong&gt; is an exciting new framework that allows you to route &lt;strong&gt;incoming mail&lt;/strong&gt; into &lt;strong&gt;controller-like mailboxes&lt;/strong&gt; for processing. This is another feature that comes extracted out of Basecamp, ready to use in Rails 6. Incoming mail is stored in a database table called InboundEmail and ActiveJob is utilized to tie the automation together.&lt;/p&gt;

&lt;p&gt;There are lots of exciting possible use cases for this, and I’m sure many ideas will pop into your head when you start to think about it. For example, suppose your application sends an automated email notifying a user about a comment. The user could reply to that email, and your application could process that reply and turn it into a reply comment in your application automatically. As you can see, this is a very powerful and flexible feature that we can’t wait to get our hands on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parallel Testing Support
&lt;/h2&gt;

&lt;p&gt;Another &lt;strong&gt;scalable-by-default&lt;/strong&gt; feature coming to Rails 6 is &lt;strong&gt;support for parallel testing&lt;/strong&gt;. By running the test suite across multiple workers (each with its own copy of the test database), this new feature will enable &lt;strong&gt;faster test suite run times&lt;/strong&gt;. Maybe not the most exciting feature, but certainly one which large projects will be very thankful for.&lt;/p&gt;

&lt;p&gt;You can specify the number of workers to use in an environment variable. This is useful for working with your CI system, which would typically have a different set-up from your local machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="no"&gt;PARALLEL_WORKERS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt; &lt;span class="n"&gt;rails&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, to make universal changes, you can add this line of code to the parent test class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ActiveSupport::TestCase&lt;/span&gt;
  &lt;span class="n"&gt;parallelize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;workers: &lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Webpack
&lt;/h2&gt;

&lt;p&gt;It’s nice to see Rails move with the times and never be afraid to abandon ideas that have grown irrelevant. These days, many projects are using front-end JavaScript frameworks instead of the default Rails view layer. As a result, the existing Rails Asset Pipeline is no longer a good match for many people.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss671urgefgfoqjbmwhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fss671urgefgfoqjbmwhx.png" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Webpack, on the other hand, has become the industry standard in the front-end community in recent years, so it makes sense to move towards that architecture going forward. The Webpacker gem has been around for some time, but in Rails 6 it will &lt;strong&gt;become the default solution&lt;/strong&gt; for JavaScript asset bundling in Rails.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zeitwerk
&lt;/h2&gt;

&lt;p&gt;Last but surely not least, here’s an interesting one to keep an eye on: Zeitwerk, a new and improved, &lt;strong&gt;thread-safe code loader&lt;/strong&gt; for Rails. It replaces the existing code loader, which has been around since 2004 and has done a noble job, but has some major limitations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;As you can see, there is lots to be excited about in the release of Rails 6. And besides these major new features, there are many more minor features and fixes too, so be sure to read through the documentation before you upgrade. Why not try out the new Rails 6.0.0.rc1 gem on a test project today?&lt;/p&gt;

</description>
      <category>rails</category>
      <category>ruby</category>
    </item>
    <item>
      <title>Is your Django app slow? Think like a data scientist, not an engineer</title>
      <dc:creator>Derek Haynes</dc:creator>
      <pubDate>Mon, 22 Apr 2019 15:56:57 +0000</pubDate>
      <link>https://dev.to/scoutapm/is-your-django-app-slow-think-like-a-data-scientist-not-an-engineer-5bnb</link>
      <guid>https://dev.to/scoutapm/is-your-django-app-slow-think-like-a-data-scientist-not-an-engineer-5bnb</guid>
      <description>&lt;p&gt;&lt;em&gt;This post originally appeared on the &lt;a href="https://scoutapm.com/blog/is-your-django-app-slow-ask-a-data-scientist-not-an-engineer"&gt;Scout Blog&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I'm an engineer by trade. &lt;strong&gt;I rely on intuition&lt;/strong&gt; when investigating a slow Django app. I've solved a lot of performance issues over the years, and the shortcuts my brain takes often work. However, &lt;strong&gt;intuition can fail&lt;/strong&gt;. It can fail hard in complex Django apps with many layers (ex: an SQL database, a NoSQL database, ElasticSearch, etc.) and many views. There's too much noise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instead of relying on an engineer's intuition, what if we approached a performance investigation like a data scientist?&lt;/strong&gt; In this post, I'll walk through a performance issue as a data scientist, not as an engineer. I'll share time series data from a real performance issue and my Google Colab notebook I used to solve the issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem part I: too much noise
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MjAb0kFZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/Q1i3XCC5SACcHvdYXVtM" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MjAb0kFZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/Q1i3XCC5SACcHvdYXVtM" alt="undefined"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo Credit: pyc &lt;a href="https://www.flickr.com/photos/pyc/4963466757/in/photolist-8yB4K4-pBRCcq-nZXkWu-bWsES4-29rmwqJ-boe2Vk-7vsTeb-9N2guL-ng4zzC-8hE5Cq-zouzT-areoZ2-nLQoy7-8hpHiQ-a5r9xQ-8hE5Ku-ofvatX-n3mHF-fHSwDM-inbd3a-7oT1q-4UsU2J-2526RGe-8LaPMa-32AUXJ-bka1JE-bka1Kf-9aBMJf-dqr3qB-dqrdUr-5sx8D-o4dwhQ-hR6gY-prMTu-4Tg8va-29MP5Hj-7F2BDK-7F6uoN-7EAp5f-7EAnS1-7E4qdX-JoapDG-7F6v6y-zosvP-7DrJvR-7Ewy2F-7F6v9N-6g9j2o-7Ewvci-7DrFoZ"&gt;Carnoy&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many performance issues are caused by one of the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;A slow layer&lt;/strong&gt; - just one of many layers (the database, app server, etc) is slow and impacting many views in a Django app.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;A slow view&lt;/strong&gt; - one view is generating slow requests. This has a big impact on the performance profile of the app as a whole.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;A high throughput view&lt;/strong&gt; - a rarely used view suddenly sees an influx of traffic and triggers a spike in the overall response time of the app.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;When investigating a performance issue, I start by looking for correlations.&lt;/strong&gt; Are any metrics getting worse at the same time? This can be hard: &lt;strong&gt;a modest Django app with 10 layers and 150 views has 3,000 unique combinations of time series data sets to compare!&lt;/strong&gt; If my intuition can't quickly isolate the issue, it's close to impossible to isolate things on my own. &lt;/p&gt;

&lt;h2&gt;
  
  
  The problem part II: phantom correlations
&lt;/h2&gt;

&lt;p&gt;Determining if two time series data sets are correlated is &lt;a href="https://www.kdnuggets.com/2015/02/avoiding-common-mistake-time-series.html"&gt;notoriously fickle&lt;/a&gt;. For example, doesn't it look like the number of films Nicolas Cage appears in each year and swimming pool drownings are correlated?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XoyXvg8o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/MGY8SSIhS2eyXEaAhuMI" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XoyXvg8o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/MGY8SSIhS2eyXEaAhuMI" alt="chart.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.indiebound.org/book/9780316339438"&gt;Spurious Correlations&lt;/a&gt; is an entire book related to these seemingly clear but unconnected correlations! So, &lt;strong&gt;why do trends appear to trigger correlations when looking at a&lt;/strong&gt; timeseries &lt;strong&gt;chart?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's an example: five years ago, my area of Colorado &lt;a href="https://999thepoint.com/photos-from-estes-park-colorado-flood-2013/"&gt;experienced a historic flood&lt;/a&gt;. It shut off one of the two major routes into Estes Park, the gateway to Rocky Mountain National Park. If you looked at sales receipts across many different types of businesses in Estes Park, you'd see a sharp decline in revenue while the road was closed and an increase in revenue when the road reopened. This doesn't mean that revenue amongst different stores was correlated. The stores were just impacted by a mutual dependency: a closed road!&lt;/p&gt;

&lt;p&gt;One of the easiest ways to remove a trend from a time series is to calculate the &lt;em&gt;&lt;a href="https://people.duke.edu/~rnau/411diff.htm"&gt;first difference&lt;/a&gt;&lt;/em&gt;. To calculate the first difference, you subtract from each point the point that came before it:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;y'(t) = y(t) - y(t-1)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;That's great, but my visual brain can't re-imagine a time series into its first difference when staring at a chart.&lt;/p&gt;
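&lt;p&gt;As a toy illustration (synthetic data, not metrics from the app above), here are two pandas Series that share nothing but a common upward trend: their raw correlation looks dramatic, but it collapses once we compare first differences via &lt;code&gt;diff()&lt;/code&gt;:&lt;/p&gt;

```python
import pandas as pd

# Two hypothetical series that share only a common upward trend.
trend = pd.Series(range(10), dtype=float)
a = trend + pd.Series([0, 1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
b = trend + pd.Series([0, 0, 1, 1, 0, 0, 1, 1, 0, 0], dtype=float)

raw_corr = a.corr(b)                 # inflated by the shared trend
diff_corr = a.diff().corr(b.diff())  # first differences remove the trend

print(raw_corr)   # high, trend-driven
print(diff_corr)  # near zero
```

&lt;p&gt;The raw correlation is close to +1 even though the underlying fluctuations are unrelated; after differencing, the correlation all but disappears.&lt;/p&gt;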

&lt;h2&gt;
  
  
  Enter Data Science
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;We have a data science problem, not a performance problem!&lt;/strong&gt; We want to identify any highly correlated time series metrics. We want to see past misleading trends. To solve this issue, we'll use the following tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;a href="https://colab.research.google.com"&gt;Google Colab&lt;/a&gt;, a shared notebook environment&lt;/li&gt;
&lt;li&gt;  Common Python data science libraries like &lt;a href="https://pandas.pydata.org/"&gt;Pandas&lt;/a&gt; and &lt;a href="https://www.scipy.org/"&gt;SciPy&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  Performance data collected from &lt;a href="https://scoutapp.com"&gt;Scout&lt;/a&gt;, an Application Performance Monitoring (APM) product. &lt;a href="https://apm.scoutapp.com/users/sign_up"&gt;Signup&lt;/a&gt; for a free trial if you don't have an account yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'll walk through a &lt;a href="https://colab.research.google.com/drive/1VhCwtGLc-tWhB_gbGuBo_cfs7Q5J4dCI"&gt;shared notebook on Google Colab&lt;/a&gt;. You can easily save a copy of this notebook, enter your metrics from Scout, and identify the most significant correlations in your Django app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: view the app in Scout
&lt;/h2&gt;

&lt;p&gt;I login to Scout and see the following overview chart:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NDedqRIz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/OxGdJmY0RM2KypKJFcCm" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NDedqRIz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/OxGdJmY0RM2KypKJFcCm" alt="undefined"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time spent in SQL queries jumped significantly from 7pm - 9:20pm. Why? This is scary as almost every view touches the database!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: load layers time series data into Pandas
&lt;/h2&gt;

&lt;p&gt;To start, I want to look for correlations between the layers (ex: SQL, MongoDB, View) and the average response time of the Django app. There are fewer layers (10) than views (150+) so it's a simpler place to start. I'll grab this time series data from Scout and initialize a Pandas Dataframe. I'll leave this data wrangling &lt;a href="https://colab.research.google.com/drive/1VhCwtGLc-tWhB_gbGuBo_cfs7Q5J4dCI#scrollTo=8KKzj89nfZ-9"&gt;to the notebook&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After loading the data into a Pandas Dataframe we can &lt;a href="https://colab.research.google.com/drive/1VhCwtGLc-tWhB_gbGuBo_cfs7Q5J4dCI#scrollTo=j4PKi7wIfZ_L"&gt;plot these layers&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qP1laKOW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/eZVXiFIcSiGAdVBrhrvb" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qP1laKOW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/eZVXiFIcSiGAdVBrhrvb" alt="plot.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: layer correlations
&lt;/h3&gt;

&lt;p&gt;Now, let's see if any layers are correlated to the Django app's overall average response time. Before comparing each layer time series to the response time, we want to calculate the first difference of each time series. With Pandas, we can do this very easily via the &lt;code&gt;diff()&lt;/code&gt; function:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df.diff()
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;After calculating the first difference, we can then look for &lt;a href="https://en.wikipedia.org/wiki/Correlation_coefficient"&gt;correlations&lt;/a&gt; between each time series via the &lt;code&gt;corr()&lt;/code&gt; function. The correlation value ranges from −1 to +1: +1 indicates a perfect positive correlation, −1 a perfect negative correlation, and 0 no linear correlation at all.&lt;/p&gt;
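&lt;p&gt;To make the workflow concrete, here is a small self-contained sketch with synthetic data (the layer names &lt;code&gt;sql&lt;/code&gt;, &lt;code&gt;view&lt;/code&gt; and &lt;code&gt;cache&lt;/code&gt; are made up, not Scout's actual metric names): we build a DataFrame of per-layer timings, take first differences, and rank each layer by its correlation with the total response time:&lt;/p&gt;

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 500

# Synthetic per-minute layer timings; 'sql' is deliberately the noisiest
# layer, so it should dominate swings in the total response time.
layers = pd.DataFrame({
    "sql":   50 + rng.normal(0, 10, n),
    "view":  20 + rng.normal(0, 2, n),
    "cache":  5 + rng.normal(0, 1, n),
})
layers["total"] = layers.sum(axis=1) + rng.normal(0, 1, n)

# First-difference every series, then correlate each layer with the total.
corr_to_total = (
    layers.diff()
          .corr()["total"]
          .drop("total")
          .sort_values(ascending=False)
)
print(corr_to_total)  # 'sql' should rank first
```

&lt;p&gt;Ranking the column of the correlation matrix like this is what lets you scan all layers (or all 150+ views) at once instead of eyeballing charts.&lt;/p&gt;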

&lt;p&gt;My notebook generates the &lt;a href="https://colab.research.google.com/drive/1VhCwtGLc-tWhB_gbGuBo_cfs7Q5J4dCI#scrollTo=hwm0KcDzfZ_Z"&gt;following result&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X9w8AjeQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/PKRnV8tXSPqWP4X1Wxwg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X9w8AjeQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/PKRnV8tXSPqWP4X1Wxwg" alt="corr_layer.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SQL&lt;/code&gt; appears to be correlated to the overall response time of the Django app. To be sure, let's look at the &lt;a href="http://www.eecs.qmul.ac.uk/~norman/blog_articles/p_values.pdf"&gt;p-value&lt;/a&gt; of the Pearson correlation coefficient. A low value (&amp;lt; 0.05) means a correlation this strong would be very unlikely to appear by chance if the overall response time and the SQL layer were actually unrelated:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df_diff = df.diff().dropna()
p_value = scipy.stats.pearsonr(df_diff.total.values, df_diff[top_layer_correl].values)[1]
print("first order series p-value:", p_value)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The p-value is just &lt;code&gt;1.1e-54&lt;/code&gt;. I'm very confident that slow SQL queries are related to an overall slow Django app.&lt;/strong&gt; It's always the database, right?&lt;/p&gt;

&lt;p&gt;Layers are just one dimension we should evaluate. Another is the response time of the Django views.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Rinse+repeat for Django view response times
&lt;/h2&gt;

&lt;p&gt;The overall app response time could increase if a view starts responding slowly. We can see if this is happening by looking for correlations in our view response times versus the overall app response time. We're using the exact same process as we used for layers, just swapping out the layers for time series data from each of our views in the Django app:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QVB93BVq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/VNR7BsT1SSKwqwCG3jjG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QVB93BVq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/VNR7BsT1SSKwqwCG3jjG" alt="undefined"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After calculating the first difference of each time series, &lt;code&gt;apps/data&lt;/code&gt; does appear to be correlated to the overall app response time. &lt;strong&gt;With a p-value of just &lt;code&gt;1.64e-46&lt;/code&gt;, &lt;code&gt;apps/data&lt;/code&gt; is very likely to be correlated to the overall app response time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We're almost done extracting the signal from the noise. We should check to see if traffic to any views triggers slow response times.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Rinse+repeat for Django view throughputs
&lt;/h2&gt;

&lt;p&gt;A little-used, expensive view could hurt the overall response time of the app if throughput to that view suddenly increases. For example, this could happen if a user writes a script that quickly reloads an expensive view. To determine correlations we'll use the exact same process as before, just swapping in the throughput time series data for each Django view:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nPORDF0Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/LmHOK63ATuiFISdpwUg1" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nPORDF0Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.buttercms.com/LmHOK63ATuiFISdpwUg1" alt="undefined"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;endpoints/sparkline&lt;/code&gt; appears to have a small correlation. The p-value is &lt;code&gt;0.004&lt;/code&gt;, which means that if traffic to &lt;code&gt;endpoints/sparkline&lt;/code&gt; and the overall app response time were actually unrelated, a correlation this strong would show up only about 4 times in 1,000. So, it does appear that traffic to the &lt;code&gt;endpoints/sparkline&lt;/code&gt; view triggers slower overall app response times, but it is less certain than our other two tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using data science, we've been able to sort through far more time series metrics than we ever could with intuition. We've also been able to make our calculations without misleading trends muddying the waters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We know that our Django app response times are:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  strongly correlated to the performance of our SQL database.&lt;/li&gt;
&lt;li&gt;  strongly correlated to the response time of our &lt;code&gt;apps/data&lt;/code&gt; view.&lt;/li&gt;
&lt;li&gt;  correlated to &lt;code&gt;endpoints/sparkline&lt;/code&gt; traffic. While we're confident in this correlation given the low p-value, it isn't as strong as the previous two correlations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Now it’s time for the engineer!&lt;/strong&gt; With these insights in hand, I’d:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  investigate if the database server is being impacted by something outside of the application. For example, if we have just one database server, a backup process could slow down all queries.&lt;/li&gt;
&lt;li&gt;  investigate if the composition of the requests to the &lt;code&gt;apps/data&lt;/code&gt; view has changed. For example, has a customer with lots of data started hitting this view more? Scout's &lt;a href="http://help.apm.scoutapp.com/#trace-explorer"&gt;Trace Explorer&lt;/a&gt; can help investigate this high-dimensional data.&lt;/li&gt;
&lt;li&gt;  hold off investigating the performance of &lt;code&gt;endpoints/sparkline&lt;/code&gt; as its correlation to the overall app response time wasn't as strong.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;It's important to realize when all of that hard-earned experience doesn't work. My brain simply can't analyze thousands of time series data sets the way our data science tools can.&lt;/strong&gt; It's OK to reach for another tool.&lt;/p&gt;

&lt;p&gt;If you'd like to work through this problem on your own, check out &lt;a href="https://colab.research.google.com/drive/1VhCwtGLc-tWhB_gbGuBo_cfs7Q5J4dCI"&gt;my shared Google Colab notebook&lt;/a&gt; I used when investigating this issue. &lt;strong&gt;Just import your own data from &lt;a href="https://scoutapp.com"&gt;Scout&lt;/a&gt; next time you have a performance issue and let the notebook do the work for you!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>python</category>
      <category>django</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
