<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andreas Wittig</title>
    <description>The latest articles on DEV Community by Andreas Wittig (@andreaswittig).</description>
    <link>https://dev.to/andreaswittig</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F167827%2F05226742-c2ec-4d6a-b4ca-0ba18be37bf0.jpg</url>
      <title>DEV Community: Andreas Wittig</title>
      <link>https://dev.to/andreaswittig</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/andreaswittig"/>
    <language>en</language>
    <item>
      <title>Antivirus for File Uploads: Add Virus and Malware Scan to Any App</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Thu, 26 Feb 2026 19:13:02 +0000</pubDate>
      <link>https://dev.to/andreaswittig/antivirus-for-file-uploads-add-virus-and-malware-scan-to-any-app-5cnm</link>
      <guid>https://dev.to/andreaswittig/antivirus-for-file-uploads-add-virus-and-malware-scan-to-any-app-5cnm</guid>
      <description>&lt;p&gt;As a developer, you are aware of the fact that all user input needs to be validated carefully. But, how do you ensure users and 3rd parties are not uploading files infected by viruses, trojans, ransomware, or other kinds of malware? You don't? Let me show you how to add virus and malware scanning to any app with ease.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Distribution Risk:&lt;/strong&gt; Your app shouldn't be a "Patient Zero" for spreading malware to others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lateral Movement:&lt;/strong&gt; Once a malicious file is on your server, it can be used to attack your infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Compliance "Must-Have":&lt;/strong&gt; If you’re dealing with PCI DSS, ISO 27001, HIPAA, or SOC 2, malware scanning isn't just a good idea—it’s often a requirement.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Virus and Malware Scan API
&lt;/h2&gt;

&lt;p&gt;Modern virus and malware scanning is just an API call away.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User or 3rd party sends file to your app.&lt;/li&gt;
&lt;li&gt;App calls the Virus and Malware Scan API.&lt;/li&gt;
&lt;li&gt;Virus and Malware Scan API scans the file and returns the scan result.&lt;/li&gt;
&lt;li&gt;Depending on the scan result, app proceeds with quarantining/deleting or processing the file.&lt;/li&gt;
&lt;/ol&gt;
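&lt;p&gt;The four steps above can be sketched as a small upload handler. This is a minimal illustration with hypothetical names; the injected &lt;code&gt;scan_file&lt;/code&gt; callback stands in for whichever scan API client you use.&lt;/p&gt;

```python
# Sketch of the upload flow: receive the file, scan it, then quarantine
# or process depending on the verdict. All names here are hypothetical.
def handle_upload(file_bytes, scan_file):
    """Scan an uploaded file and decide how to proceed.

    scan_file is injected so any scanner backend can be plugged in;
    it is expected to return a dict like {"status": "clean"}.
    """
    result = scan_file(file_bytes)
    if result["status"] == "clean":
        return "process"      # hand over to business logic
    if result["status"] == "infected":
        return "quarantine"   # isolate or delete, never process
    return "reject"           # unknown verdict: fail closed

# Example with a stubbed scanner:
verdict = handle_upload(b"%PDF-1.7 ...", lambda _: {"status": "clean"})
print(verdict)
```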

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48uwdpcti13ib6ysoaw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48uwdpcti13ib6ysoaw2.png" alt="User uploads file, app submits a scan job, Virus and Malware Scan API scans the file and reports back the results" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the following, I will demonstrate how to use the &lt;a href="https://attachmentav.com/solution/virus-malware-scan-api/" rel="noopener noreferrer"&gt;Virus and Malware Scan API by attachmentAV&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An API key and subscription is required to access the Virus and Malware Scan API by attachmentAV. &lt;a href="https://attachmentav.com/help/virus-malware-scan-api/setup-guide/" rel="noopener noreferrer"&gt;Learn more.&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Scan a file with Virus and Malware Scan API
&lt;/h2&gt;

&lt;p&gt;The following snippet shows how to scan a file by calling the API with &lt;code&gt;curl&lt;/code&gt;. Replace &lt;code&gt;&amp;lt;API_KEY_PLACEHOLDER&amp;gt;&lt;/code&gt; with your API key and &lt;code&gt;@path/to/file&lt;/code&gt; with the path to the file that you want to scan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'x-api-key: &amp;lt;API_KEY_PLACEHOLDER&amp;gt;'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/octet-stream'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'@path/to/file'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  https://eu.developer.attachmentav.com/v1/scan/sync/binary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API responds with a result like the following.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"status":"clean","size":73928372,"realfiletype":"Adobe Portable Document Format (PDF)"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
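&lt;p&gt;Since the response is plain JSON, evaluating it takes only a few lines in any language. Here is a Python sketch using the example payload above; treat anything other than &lt;code&gt;clean&lt;/code&gt; as unsafe.&lt;/p&gt;

```python
import json

# The example response from above; field names match the sample payload.
raw = '{"status":"clean","size":73928372,"realfiletype":"Adobe Portable Document Format (PDF)"}'
result = json.loads(raw)

# Fail closed: only an explicit "clean" verdict allows further processing.
is_safe = result["status"] == "clean"
print(result["realfiletype"], result["size"], is_safe)
```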



&lt;h2&gt;
  
  
  Implementing virus scanning with Java
&lt;/h2&gt;

&lt;p&gt;Use the Java SDK &lt;code&gt;virus-scan-sdk&lt;/code&gt; to integrate attachmentAV with your Java application.&lt;/p&gt;

&lt;p&gt;The SDK is available in the Maven Central repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;com.attachmentav&lt;span class="nt"&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;virus-scan-sdk&lt;span class="nt"&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;version&amp;gt;&lt;/span&gt;0.6.0&lt;span class="nt"&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following snippet illustrates how to send a file to the Virus and Malware Scan API. Don't forget to replace &lt;code&gt;&amp;lt;API_KEY_PLACEHOLDER&amp;gt;&lt;/code&gt; with the API key belonging to your subscription. Also replace &lt;code&gt;/path/to/file&lt;/code&gt; with the path to the file you want to scan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.attachmentav.api.AttachmentAvApi&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.attachmentav.client.ApiClient&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.attachmentav.client.ApiException&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.attachmentav.client.Configuration&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;com.attachmentav.model.ScanResult&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;java.io.File&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// ...&lt;/span&gt;

&lt;span class="nc"&gt;ApiClient&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Configuration&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getDefaultApiClient&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setApiKey&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"&amp;lt;API_KEY_PLACEHOLDER&amp;gt;"&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="nc"&gt;AttachmentAvApi&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AttachmentAvApi&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
&lt;span class="nc"&gt;ScanResult&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;api&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;scanSyncBinaryPost&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;File&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/path/to/file"&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;
&lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;out&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;println&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Scan Result: "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getStatus&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Adding malware protection to a JavaScript or TypeScript app
&lt;/h2&gt;

&lt;p&gt;Java isn't for you because you are all-in on TypeScript or JavaScript? Here you go: there's an SDK for TS/JS as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i @attachmentav/virus-scan-sdk-ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following example shows how to send a file to the Virus and Malware Scan API. Don't forget to replace &lt;code&gt;&amp;lt;API_KEY_PLACEHOLDER&amp;gt;&lt;/code&gt; with the API key belonging to your subscription. Also replace &lt;code&gt;/path/to/file&lt;/code&gt; with the path to the file you want to scan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AttachmentAVApi&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Configuration&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@attachmentav/virus-scan-sdk-ts&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;readFileSync&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node:fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Blob&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;node:buffer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Configuration&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;API_KEY_PLACEHOLDER&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;api&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AttachmentAVApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;scanResult&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scanSyncBinaryPost&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Blob&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nf"&gt;readFileSync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/path/to/file&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Sync binary scan result:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;scanResult&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Check files for viruses and malware with Python
&lt;/h2&gt;

&lt;p&gt;Neither Java nor JS/TS is for you? Here is my last example, for all Python developers out there.&lt;/p&gt;

&lt;p&gt;The following listing shows how to scan a file for viruses and malware by using the attachmentAV SDK.&lt;/p&gt;

&lt;p&gt;First, install the package.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;attachmentav-virus-malware-scan-sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, add the following lines to your Python code. Don't forget to replace &lt;code&gt;&amp;lt;API_KEY_PLACEHOLDER&amp;gt;&lt;/code&gt; with the API key belonging to your subscription. Also replace &lt;code&gt;/path/to/file&lt;/code&gt; with the path to the file you want to scan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;attachmentav&lt;/span&gt;

&lt;span class="n"&gt;configuration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;attachmentav&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Configuration&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;apiKeyAuth&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;API_KEY_PLACEHOLDER&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;attachmentav&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ApiClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;api_client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
  &lt;span class="n"&gt;api_instance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;attachmentav&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AttachmentAVApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_client&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/path/to/file&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;file_content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;scan_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;api_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scan_sync_binary_post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scan_result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrapping Up: Better Safe Than Sorry
&lt;/h2&gt;

&lt;p&gt;Securing file uploads should neither be a secondary thought nor a massive infrastructure headache. Whether you are building a small MVP or scaling an enterprise platform, the Virus and Malware Scan API by attachmentAV bridges the gap between "hoping for the best" and actually being protected.&lt;/p&gt;

&lt;p&gt;Add virus and malware protection to your app today! &lt;a href="https://attachmentav.com/help/virus-malware-scan-api/setup-guide/" rel="noopener noreferrer"&gt;Get started with attachmentAV's Virus and Malware Scan API.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>java</category>
    </item>
    <item>
      <title>Connect to your EC2 instance using SSH the modern way</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Wed, 02 Feb 2022 11:26:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/connect-to-your-ec2-instance-using-ssh-the-modern-way-5fen</link>
      <guid>https://dev.to/aws-builders/connect-to-your-ec2-instance-using-ssh-the-modern-way-5fen</guid>
      <description>&lt;p&gt;Did you know that establishing an SSH connection with an EC2 instance is possible without configuring a key pair and allowing inbound traffic on port 22? How is that possible? The secret is a combination of EC2 Instance Connect and Systems Manager (SSM). When following this article, connecting with an EC2 instance is as simple as typing &lt;code&gt;ssh i-059499e6abc8fbe6b&lt;/code&gt; into your terminal.&lt;/p&gt;

&lt;p&gt;First of all, the following video demonstrates how to establish an SSH connection with an EC2 instance by using EC2 Instance Connect and Systems Manager.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/w-yVPzSbb0c"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The following diagram explains the approach in more detail.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user sends her public key to EC2 Instance Connect using the AWS CLI.&lt;/li&gt;
&lt;li&gt;EC2 Instance Connect pushes the key to the EC2 instance, where it remains valid for 60 seconds.&lt;/li&gt;
&lt;li&gt;An SSM agent running on the EC2 instance establishes a bidirectional channel with the SSM backend.&lt;/li&gt;
&lt;li&gt;The user establishes an SSH connection through a WebSocket between the terminal and SSM.&lt;/li&gt;
&lt;li&gt;Authentication and authorization for the user and the SSM agent is IAM's job.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xn991kc2ktc7aviru2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xn991kc2ktc7aviru2o.png" alt="Establishing an SSH connection with an EC2 instance by using EC2 Instance Connect and Systems Manager"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;How do you make this approach happen on your local machine and EC2 instances?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure the SSM agent is running on your EC2 instances - which is already the case for Amazon Linux and Amazon Linux 2.&lt;/li&gt;
&lt;li&gt;Double check that the SSM agent running on your machines can connect with the SSM API through an internet gateway, NAT gateway, or VPC endpoint.&lt;/li&gt;
&lt;li&gt;Attach an IAM role - granting the SSM agent access to the SSM API - to each EC2 instance.&lt;/li&gt;
&lt;li&gt;Install the AWS CLI and the session manager plugin on your local machine.&lt;/li&gt;
&lt;li&gt;Configure your SSH client to use EC2 Instance Connect and Systems Manager to establish a tunnel for SSH connections.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Are you confused about the different options to connect by using EC2 Instance Connect and Systems Manager? Check out Petri's comparison: &lt;a href="https://carriagereturn.nl/aws/ec2/ssh/connect/ssm/2019/07/26/connect.html" rel="noopener noreferrer"&gt;EC2 Instance Connect vs. SSM Session Manager&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  IAM role required by SSM agent
&lt;/h2&gt;

&lt;p&gt;As mentioned before, the SSM agent is running on most EC2 instances already. That's because the agent is bundled into Amazon Linux, Amazon Linux 2, SUSE Linux Enterprise Server (12 and 15), and Ubuntu Server (16.04, 18.04, and 20.04) by default. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learn how to &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html" rel="noopener noreferrer"&gt;Manually install SSM Agent on EC2 instances for Linux&lt;/a&gt; from the AWS documentation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, the SSM agent requires an IAM role attached to the EC2 instance granting access to the SSM API. That's not a default, so you probably need to create a new IAM role or add a policy to existing roles.&lt;/p&gt;

&lt;p&gt;You can either use the predefined policy &lt;code&gt;AmazonSSMManagedInstanceCore&lt;/code&gt; (&lt;code&gt;arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore&lt;/code&gt;) managed by AWS, or create your own managed or inline policy with the following contents.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 "Version": "2012-10-17",
 "Statement": [
 {
 "Effect": "Allow",
 "Action": [
 "ssmmessages:*",
 "ec2messages:*",
 "ssm:UpdateInstanceInformation"
 ],
 "Resource": "*"
 }
 ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
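&lt;p&gt;To double-check a policy like the one above, you can verify programmatically that the actions the SSM agent needs are covered. This is a quick sanity check, not an official AWS tool; it only understands the simple &lt;code&gt;service:*&lt;/code&gt; wildcard.&lt;/p&gt;

```python
import json

# The policy from above, parsed for inspection.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:*",
        "ec2messages:*",
        "ssm:UpdateInstanceInformation"
      ],
      "Resource": "*"
    }
  ]
}
""")

def allows(policy, action):
    """True if any Allow statement grants the action (supports service:*)."""
    service = action.split(":")[0]
    for stmt in policy["Statement"]:
        if stmt["Effect"] == "Allow":
            for granted in stmt["Action"]:
                if granted == action or granted == service + ":*":
                    return True
    return False

for needed in ("ssmmessages:CreateControlChannel", "ssm:UpdateInstanceInformation"):
    print(needed, allows(policy, needed))
```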



&lt;h2&gt;
  
  
  Install the AWS CLI and session manager plugin
&lt;/h2&gt;

&lt;p&gt;You will use the AWS Command Line Interface (CLI) to push your public key via EC2 Instance Connect and establish a tunnel for your SSH connection with the EC2 instance.&lt;/p&gt;

&lt;p&gt;Therefore, make sure to install the AWS CLI on your local machine. Besides that, you need to install the session manager plugin.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;Installing or updating the latest version of the AWS CLI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html" rel="noopener noreferrer"&gt;Install the Session Manager plugin for the AWS CLI&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I prefer &lt;code&gt;brew&lt;/code&gt; to install the AWS CLI and the session manager plugin on macOS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install awscli session-manager-plugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Configuring your SSH client
&lt;/h2&gt;

&lt;p&gt;Next, you need to configure your SSH client. To do so, edit the &lt;code&gt;~/.ssh/config&lt;/code&gt; file on your local machine. Add the following snippet, which configures connections for all hosts whose names start with &lt;code&gt;i-&lt;/code&gt;, at the end of the file. Do not forget to replace &lt;code&gt;$PRIVATE_KEY&lt;/code&gt; and &lt;code&gt;$PUBLIC_KEY&lt;/code&gt; with the paths to your private and public key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# SSH over Session Manager
host i-*
 IdentityFile $PRIVATE_KEY
 User ec2-user
 ProxyCommand sh -c "aws ec2-instance-connect send-ssh-public-key --instance-id %h --instance-os-user %r --ssh-public-key 'file://$PUBLIC_KEY' --availability-zone '$(aws ec2 describe-instances --instance-ids %h --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' --output text)' &amp;amp;&amp;amp; aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For me, the snippet looks as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# SSH over Session Manager
host i-*
 IdentityFile ~/.ssh/id_ed25519
 User ec2-user
 ProxyCommand sh -c "aws ec2-instance-connect send-ssh-public-key --instance-id %h --instance-os-user %r --ssh-public-key 'file://~/.ssh/id_ed25519.pub' --availability-zone '$(aws ec2 describe-instances --instance-ids %h --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' --output text)' &amp;amp;&amp;amp; aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;ProxyCommand&lt;/code&gt; contains a one-liner that I'd like to explain in more detail.&lt;/p&gt;

&lt;p&gt;First, we need to find out the availability zone the instance is running in. The command &lt;code&gt;aws ec2 describe-instances&lt;/code&gt; returns the necessary information. Please note that &lt;code&gt;%h&lt;/code&gt; will be replaced with the host, for example, &lt;code&gt;i-059499e6abc8fbe6b&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-instances --instance-ids %h --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
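&lt;p&gt;The &lt;code&gt;--query&lt;/code&gt; expression is just a path into the JSON response. Applied in Python to a trimmed-down, hypothetical &lt;code&gt;describe-instances&lt;/code&gt; response (real responses carry many more fields), it looks like this:&lt;/p&gt;

```python
# Hypothetical, heavily trimmed describe-instances response.
response = {
    "Reservations": [
        {
            "Instances": [
                {
                    "InstanceId": "i-059499e6abc8fbe6b",
                    "Placement": {"AvailabilityZone": "eu-west-1a"},
                }
            ]
        }
    ]
}

# Equivalent of --query 'Reservations[0].Instances[0].Placement.AvailabilityZone'
az = response["Reservations"][0]["Instances"][0]["Placement"]["AvailabilityZone"]
print(az)
```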



&lt;p&gt;Next, you need to push your public key to the EC2 instance. That's the job of the &lt;code&gt;aws ec2-instance-connect send-ssh-public-key&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2-instance-connect send-ssh-public-key --instance-id %h --instance-os-user %r --ssh-public-key 'file://~/.ssh/id_ed25519.pub' --availability-zone '$(...)'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Last but not least, you need to establish the SSH session through a WebSocket connection. That's the job of the &lt;code&gt;aws ssm start-session&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connect to your EC2 instance using SSH
&lt;/h2&gt;

&lt;p&gt;You are ready to connect with your EC2 instance using SSH. All you need to do is type &lt;code&gt;ssh&lt;/code&gt; followed by an EC2 instance ID into your terminal.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh i-059499e6abc8fbe6b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By the way, you can even copy files with &lt;code&gt;scp&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh example.txt i-059499e6abc8fbe6b:/tmp/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. I hope you enjoy this approach to connecting to your EC2 instances using SSH as much as I do.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Did you run into any issues while following my instructions? Do you love the approach to connect with your EC2 instances? Please let me know!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>AWS Cost Optimization 101</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Mon, 13 Jan 2020 15:01:24 +0000</pubDate>
      <link>https://dev.to/andreaswittig/aws-cost-optimization-101-3ji8</link>
      <guid>https://dev.to/andreaswittig/aws-cost-optimization-101-3ji8</guid>
      <description>&lt;p&gt;The beginning of the year is the perfect time to clean up and optimize. This also applies to your AWS bill. I've composed practical tips on how to cut costs with small effort.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IWECIT6P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudonaut.io/images/2020/01/cut-costs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IWECIT6P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudonaut.io/images/2020/01/cut-costs.jpg" alt="AWS Cost Optimization 101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The good thing about AWS: you typically pay per usage. The bad thing about AWS: understanding the pricing models of all the AWS services is hard. Little self-promotion: our consulting firm &lt;a href="https://widdix.net/"&gt;widdix&lt;/a&gt; offers analyzing and optimizing your AWS bill.&lt;/p&gt;

&lt;p&gt;The following mind map provides guidance for reducing costs based on my experience from analyzing and reducing AWS bills for various clients. &lt;a href="https://cloudonaut.io/images/2020/01/aws-cost-optimization.pdf"&gt;Download the mind map as a PDF file&lt;/a&gt; for better readability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HX50dXjT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudonaut.io/images/2020/01/aws-cost-optimization.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HX50dXjT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cloudonaut.io/images/2020/01/aws-cost-optimization.png" alt="Mind Map: AWS Cost Optimization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me start with the process of analyzing your AWS bill.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the Cost Explorer to aggregate costs by service.

&lt;ol&gt;
&lt;li&gt;Which services cause the highest costs?&lt;/li&gt;
&lt;li&gt;Do the costs per service match with your assumptions? For example, it is quite unlikely that you want to spend 2x more on CloudWatch than on EC2.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Visualize your spending among the last 12 months.

&lt;ol&gt;
&lt;li&gt;Are costs increasing by a similar amount each month for a specific service? If so, you might be piling up unused resources (e.g., EBS snapshots).&lt;/li&gt;
&lt;li&gt;Are there any significant cost increases not caused by changes to your cloud infrastructure?&lt;/li&gt;
&lt;li&gt;Does the cost increase per month match with your revenue numbers?&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Open your AWS bill for the last three months and drill down into the details.

&lt;ol&gt;
&lt;li&gt;Which resources of a service cause high costs? Justify the costs.&lt;/li&gt;
&lt;li&gt;Do the costs per service match with your estimations?&lt;/li&gt;
&lt;li&gt;Are there any hints for expenses caused by unused resources?&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;
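&lt;p&gt;If you prefer the command line, ranking services by cost is a one-liner once you export the Cost Explorer results as CSV. A minimal sketch follows; the service names and amounts are made-up example data:&lt;/p&gt;

```shell
# Rank services by monthly cost, highest first
# (example data; in practice, export the CSV from Cost Explorer)
printf 'AmazonEC2,1200.50\nAmazonCloudWatch,310.20\nAmazonS3,95.10\nAmazonRDS,640.00\n' \
  | sort -t, -k2 -rn \
  | head -3
```

&lt;p&gt;The top entries are your candidates for a detailed review.&lt;/p&gt;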

&lt;p&gt;Primarily, you should watch out for the following aspects.&lt;/p&gt;

&lt;h2&gt;
  
  
  EC2
&lt;/h2&gt;

&lt;p&gt;Purchase Savings Plans for baseline capacity. The deal is simple: you commit to a monthly amount of compute usage, and AWS grants a discount on the on-demand price. Read &lt;a href="https://cloudonaut.io/reduce-your-aws-bill-with-savings-plans/"&gt;Reduce your AWS bill with Savings Plans&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;Identify and terminate unused instances. Boring, but very effective.&lt;/p&gt;

&lt;p&gt;Verify that the instance type still reflects the current workload. Check the CloudWatch metrics for CPU, storage I/O, and networking to come up with a first guess. After that, experiment to test your assumption.&lt;/p&gt;

&lt;p&gt;Verify that the maximum I/O performance of the instance matches the performance of your EBS volumes. Remember that there is a network between your EC2 instance and your EBS volume. The instance type limits the maximum throughput to all attached EBS volumes. Make sure that limit matches the configuration of your EBS volumes, where the volume type and provisioned IOPS define the maximum throughput. See &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html#ebs-volume-characteristics"&gt;EBS Volume Types&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-optimized.html"&gt;EBS-Optimized Instances&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Use Spot Instances for stateless and non-production workloads. Keep in mind that AWS might terminate your Spot Instances at any time, so think twice before using them in production. Read &lt;a href="https://cloudonaut.io/3-simple-ways-of-saving-up-to-90-of-ec2-costs/"&gt;3 simple ways of saving up to 90% of EC2 costs&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;Switching to the latest instance generation often cuts costs. For example, migrating from &lt;code&gt;m4.large&lt;/code&gt; to &lt;code&gt;m5.large&lt;/code&gt; reduces the costs by 4%. On top of that, you get a small performance improvement as well.&lt;/p&gt;
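&lt;p&gt;You can verify such savings with a quick calculation. The following sketch assumes the us-east-1 Linux on-demand prices at the time of writing ($0.10 per hour for &lt;code&gt;m4.large&lt;/code&gt;, $0.096 per hour for &lt;code&gt;m5.large&lt;/code&gt;); prices vary by region:&lt;/p&gt;

```shell
# Savings in percent when migrating from m4.large to m5.large
awk 'BEGIN{printf "%.1f%%\n", (0.10-0.096)/0.10*100}'  # prints 4.0%
```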

&lt;p&gt;Using AMD- or ARM-based instance types instead of Intel-based instance types is also worth a look.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Savings potential for AMD-based instance types (e.g., &lt;code&gt;t3a&lt;/code&gt;, &lt;code&gt;m5a&lt;/code&gt;, and &lt;code&gt;r5a&lt;/code&gt;):  10%&lt;/li&gt;
&lt;li&gt;Savings potential for ARM-based instance types: 40%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the one hand, it is a little more work to migrate to an Open Source operating system. On the other hand, the cost savings are enormous. Our operating system of choice on AWS is Amazon Linux, a free-of-charge Linux image maintained by Amazon.&lt;/p&gt;

&lt;h2&gt;
  
  
  EBS
&lt;/h2&gt;

&lt;p&gt;Commonly, EBS snapshots are piling up. Therefore, delete snapshots backing up data that is no longer needed. Also, check whether your backup solution deletes old snapshots. Have you written a backup solution with Lambda? Consider replacing it with AWS Backup. Read &lt;a href="https://cloudonaut.io/review-aws-backup/"&gt;Review: AWS Backup - A centralized place for managing backups?&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;Delete snapshots belonging to unused AMIs. Orphaned snapshots are typical when your deployment pipeline builds an AMI for every commit.&lt;/p&gt;

&lt;p&gt;Search for unused volumes and delete them. Check whether someone (script, Kubernetes, ...) creates volumes automatically and does not clean them up.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3
&lt;/h2&gt;

&lt;p&gt;It's obvious but still valid: delete unnecessary objects and buckets.&lt;/p&gt;

&lt;p&gt;Consider using S3 Intelligent Tiering. Or, if you need to archive data, check out Glacier Deep Archive. Read &lt;a href="https://cloudonaut.io/6-new-ways-to-reduce-your-AWS-bill-with-little-effort/"&gt;6 new ways to reduce your AWS bill with little effort&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;Configure lifecycle policies to define a retention period for objects. Read &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html"&gt;Object Lifecycle Management&lt;/a&gt; to learn more.&lt;/p&gt;
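&lt;p&gt;For reference, a lifecycle configuration that expires objects after 30 days might look like the following sketch; the &lt;code&gt;logs/&lt;/code&gt; prefix and the retention period are assumptions for illustration:&lt;/p&gt;

```json
{
  "Rules": [
    {
      "ID": "expire-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
```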

&lt;h2&gt;
  
  
  VPC
&lt;/h2&gt;

&lt;p&gt;Check costs for NAT gateways. I've seen scenarios where placing EC2 instances into a public subnet was the only option to avoid horrendous traffic costs.&lt;/p&gt;

&lt;p&gt;Also, create VPC endpoints for S3 and DynamoDB. Doing so reduces the traffic processed by your NAT gateways.&lt;/p&gt;

&lt;p&gt;Traffic within the VPC is free? No, it is not. Traffic between Availability Zones is charged at $0.02/GB. Check the traffic costs and think about making changes to your architecture when necessary.&lt;/p&gt;
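&lt;p&gt;To put the $0.02/GB into perspective, here is a back-of-the-envelope calculation; the 500 GB per day is a made-up example volume:&lt;/p&gt;

```shell
# Monthly cost for 500 GB/day crossing Availability Zones at $0.02/GB
awk 'BEGIN{printf "USD %.0f per month\n", 500*30*0.02}'  # prints USD 300 per month
```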

&lt;p&gt;An interface VPC endpoint costs $7.20 per AZ and month in US East (N. Virginia). Adding VPC endpoints for ten services in 3 AZs costs $216 per month. And the processed data is not even included. In general, avoid interface VPC endpoints when possible; the gateway endpoints for S3 and DynamoDB are free of charge. Read &lt;a href="https://cloudonaut.io/6-new-ways-to-reduce-your-AWS-bill-with-little-effort/"&gt;6 new ways to reduce your AWS bill with little effort&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWatch
&lt;/h2&gt;

&lt;p&gt;Configure a retention period for all log groups. For example, delete log messages after 30 days.&lt;/p&gt;

&lt;p&gt;Check costs for metrics API calls caused by 3rd party tools (e.g., Prometheus, Datadog, ...). Make sure you are only polling metrics that you need and set the polling interval to 5 minutes. &lt;/p&gt;

&lt;p&gt;Each CloudWatch alarm costs $0.10 per month; each CloudWatch dashboard costs $3.00 per month. Therefore, delete needless alarms and dashboards.&lt;/p&gt;

&lt;p&gt;Identify unnecessary custom metrics. For example, configure the CloudWatch Agent only to send metrics that you need for monitoring.&lt;/p&gt;

&lt;p&gt;Check costs for log ingestion. It might be necessary to reduce or filter the log events that you send to CloudWatch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless
&lt;/h2&gt;

&lt;p&gt;Optimize memory configuration for Lambda functions. Check out &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning"&gt;AWS Lambda Power Tuning&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Use Provisioned Concurrency to reduce costs for high-traffic Lambda functions, and evaluate HTTP APIs as a lower-cost alternative to API Gateway REST APIs. Read &lt;a href="https://cloudonaut.io/reinvent-recap-2019-aws-announcements/"&gt;All you need to know about AWS re:Invent in 2019&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  ECS
&lt;/h2&gt;

&lt;p&gt;Using Fargate allows you to get rid of an overprovisioned fleet of EC2 instances. If using Fargate is not an option, check out the ECS Capacity Provider to scale the fleet of EC2 instances easily. Read &lt;a href="https://cloudonaut.io/ecs-vs-fargate-whats-the-difference/"&gt;ECS vs. Fargate: What's the difference?&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;Purchase Savings Plans for Fargate. Read &lt;a href="https://cloudonaut.io/reduce-your-aws-bill-with-savings-plans/"&gt;Reduce your AWS bill with Savings Plans&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;Use Fargate Spot for non-production workloads. Read &lt;a href="https://aws.amazon.com/blogs/aws/aws-fargate-spot-now-generally-available/"&gt;AWS Fargate Spot Now Generally Available&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  RDS
&lt;/h2&gt;

&lt;p&gt;Enable RDS Storage Auto Scaling instead of over-provisioning storage capacity.&lt;/p&gt;

&lt;p&gt;Consider switching to Aurora Serverless for unsteady workloads. Check out &lt;a href="https://cloudonaut.io/review-amazon-aurora-serverless/"&gt;our review&lt;/a&gt; to learn about the pros and cons.&lt;/p&gt;

&lt;p&gt;And don't forget to verify that the instance type of your database still reflects the current workload. Also, check that the maximum I/O performance of the compute layer matches the storage layer.&lt;/p&gt;

&lt;p&gt;License costs for traditional database systems are tremendous. Migrating to an Open Source database should be on your short- or long-term TODO list.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB
&lt;/h2&gt;

&lt;p&gt;Switch to On-demand capacity mode for unsteady workloads. Read &lt;a href="https://cloudonaut.io/cost-savings-with-dynamodb-on-demand-essons-learned/"&gt;Cost savings with DynamoDB On-Demand: Lessons learned&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;If on-demand capacity is not for you, use auto-scaling to adjust the provisioned capacity to the workload.&lt;/p&gt;

&lt;h2&gt;
  
  
  Elasticsearch
&lt;/h2&gt;

&lt;p&gt;Make use of Reserved Instances where planning one year ahead is feasible.&lt;/p&gt;

&lt;p&gt;Evaluate the UltraWarm tier (in preview) to retain large amounts of data at lower costs. Read &lt;a href="https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/ultrawarm.html"&gt;UltraWarm for Amazon Elasticsearch Service&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Route 53
&lt;/h2&gt;

&lt;p&gt;Increase TTL for records to reduce queries.&lt;/p&gt;

&lt;p&gt;Are you using Route 53 resolver endpoints? You typically pay $270 per month for endpoints in 3 AZs. Therefore, you might want to consolidate your resolver endpoints from multiple AWS accounts.&lt;/p&gt;
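&lt;p&gt;The $270 follows from the per-interface pricing: a resolver endpoint places one elastic network interface per AZ, billed at $0.125 per hour at the time of writing:&lt;/p&gt;

```shell
# 3 elastic network interfaces (one per AZ), $0.125/hour, ~720 hours/month
awk 'BEGIN{printf "USD %.0f per month\n", 0.125*720*3}'  # prints USD 270 per month
```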

&lt;h2&gt;
  
  
  CloudFront
&lt;/h2&gt;

&lt;p&gt;Check the hit/miss ratio of the cache and adjust your configuration and TTL accordingly.&lt;/p&gt;

&lt;p&gt;Bypassing the CloudFront cache and loading assets directly from S3 is more expensive. Therefore, restrict access to S3 by using an Origin Access Identity.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudTrail
&lt;/h2&gt;

&lt;p&gt;Simple: delete unnecessary trails. Keep in mind that configuring more than one trail results in additional costs.&lt;/p&gt;

&lt;p&gt;Check costs for data events (S3 and Lambda). Read &lt;a href="https://cloudonaut.io/aws-cloudtrail-your-audit-log-is-incomplete/"&gt;AWS CloudTrail: your audit log is incomplete&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;Now it is up to you. Go and reduce your AWS bill!&lt;/p&gt;

&lt;p&gt;One more thing: make sure you have created a budget alarm to get notified about unexpected costs in advance. Consult the &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-managing-costs.html"&gt;AWS documentation&lt;/a&gt; or learn how to &lt;a href="https://marbot.io/help/setup-integration-aws-budget-notification.html"&gt;receive AWS budget alarms via Slack&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Dockerizing Ruby on Rails</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Thu, 19 Dec 2019 15:59:45 +0000</pubDate>
      <link>https://dev.to/andreaswittig/dockerizing-ruby-on-rails-8f0</link>
      <guid>https://dev.to/andreaswittig/dockerizing-ruby-on-rails-8f0</guid>
      <description>&lt;p&gt;Did you dockerize your Ruby on Rails application already? You definitely should! Read on to learn why and how.&lt;/p&gt;

&lt;p&gt;Shipping software is a challenge. Endless installation instructions explain in detail how to install and configure an application as well as all its dependencies. But in practice, following installation instructions ends with frustration: the required version of Ruby is not available to install from the repository, the configuration file is located in another directory, the installation instructions do not cover the operating system you need or want to use, etc.&lt;/p&gt;

&lt;p&gt;And it gets worse: to be able to scale on-demand and recover from failure, we need to automate the installation and configuration of our application and its runtime environment. Implementing the required automation with the wrong tools is very time-consuming and error-prone.&lt;/p&gt;

&lt;p&gt;But what if you could bundle your application with all its dependencies and run it on any machine: your MacBook, your Windows PC, your test server, your on-premises server, and your cloud infrastructure? That's what Docker is all about.&lt;/p&gt;

&lt;p&gt;In short: Docker is a toolkit to deliver software.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This blog post is an excerpt from our book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/" rel="noopener noreferrer"&gt;Rapid Docker on AWS&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The most important part of the Docker toolkit is the container. A container is an isolated runtime environment preventing an application from accessing resources from other applications running on the same operating system. The concept of a jail - later called a container - had been around on UNIX systems for years. Docker uses the same ideas but makes them a lot easier to use. &lt;/p&gt;

&lt;p&gt;With Docker containers, the differences between different platforms like your developer machine, your test system, and your production system are hidden under an abstraction layer. But how do you distribute your application with all its dependencies to multiple platforms? By creating a Docker image. A Docker image is similar to a virtual machine image, such as an Amazon Machine Image (AMI) that is used to launch an EC2 instance. The Docker image contains an operating system, the runtime environment, 3rd party libraries, and your application. The following figure illustrates how you can fetch an image and start a container on any platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fej66lg756qu7n28idp35.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fej66lg756qu7n28idp35.jpg" alt="Distribute your application among multiple machines with a Docker image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But how do you create a Docker image for your web application? By creating a script that builds the image step by step: a so-called &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, you will learn how to dockerize a typical Ruby on Rails application.&lt;/p&gt;

&lt;p&gt;The project structure of a typical Ruby on Rails project looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Gemfile
├── README.md
├── Rakefile
├── app
├── babel.config.js
├── bin
├── config
├── config.ru
├── db
├── lib
├── log
├── package.json
├── postcss.config.js
├── public
├── storage
├── test
├── tmp
├── vendor
└── yarn.lock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How to bundle an application with the described project structure? Let's have a look at the &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Based on Amazon Linux 2.&lt;/li&gt;
&lt;li&gt;Installs &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;, required by &lt;a href="https://yarnpkg.com/" rel="noopener noreferrer"&gt;yarn&lt;/a&gt;, required by Ruby on Rails.&lt;/li&gt;
&lt;li&gt;Installs &lt;a href="https://www.ruby-lang.org/" rel="noopener noreferrer"&gt;Ruby&lt;/a&gt; and &lt;a href="https://rubyonrails.org/" rel="noopener noreferrer"&gt;Ruby on Rails&lt;/a&gt; with all needed dependencies.&lt;/li&gt;
&lt;li&gt;Installs &lt;a href="https://github.com/vishnubob/wait-for-it" rel="noopener noreferrer"&gt;wait-for-it&lt;/a&gt; - a helper to make sure that the MySQL database is up and running before the Ruby container is started with &lt;code&gt;docker-compose&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Copies all files except the ignores defined in the file &lt;code&gt;.dockerignore&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Installs all Ruby dependencies of the application and generates the static assets.&lt;/li&gt;
&lt;li&gt;Configures a custom entry point that runs the database migrations each time the container starts, before the application is launched.&lt;/li&gt;
&lt;li&gt;Exposes port 3000 and defines the default command to run the application.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; amazonlinux:2.0.20190508&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /usr/src/app
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;

&lt;span class="c"&gt;# Install Node.js (needed for yarn)&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://rpm.nodesource.com/setup_10.x | bash -
&lt;span class="k"&gt;RUN &lt;/span&gt;yum &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;nodejs gcc-c++ make

&lt;span class="c"&gt;# Install Ruby &amp;amp; Rails&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;curl &lt;span class="nt"&gt;-sL&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/yum.repos.d/yarn.repo https://dl.yarnpkg.com/rpm/yarn.repo
&lt;span class="k"&gt;RUN &lt;/span&gt;amazon-linux-extras &lt;span class="nb"&gt;enable &lt;/span&gt;ruby2.6 &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; yum &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;git &lt;span class="nb"&gt;tar gzip &lt;/span&gt;yarn zlib-devel sqlite-devel mariadb-devel ruby-devel rubygems-devel rubygem-bundler rubygem-io-console rubygem-irb rubygem-json rubygem-minitest rubygem-net-http-persistent rubygem-net-telnet rubygem-power_assert rubygem-rake rubygem-test-unit rubygem-thor rubygem-xmlrpc &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; gem &lt;span class="nb"&gt;install &lt;/span&gt;rails

&lt;span class="c"&gt;# Install wait-for-it&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; docker/wait-for-it.sh /usr/local/bin/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;u+x /usr/local/bin/wait-for-it.sh

&lt;span class="c"&gt;# Copy Ruby files (see .dockerignore)&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="c"&gt;# Install Ruby dependencies&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; RAILS_ENV production&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; RAILS_LOG_TO_STDOUT enabled&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; RAILS_SERVE_STATIC_FILES enabled&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;bin/bundle &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--deployment&lt;/span&gt; &lt;span class="nt"&gt;--without&lt;/span&gt; development &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;span class="c"&gt;# see https://github.com/rails/rails/issues/32947 for SECRET_KEY_BASE workaround&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;SECRET_KEY_BASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dummy bin/rails assets:precompile

&lt;span class="c"&gt;# Configure custom entrypoint to run migrations&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; docker/custom-entrypoint /usr/local/bin/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;u+x /usr/local/bin/custom-entrypoint
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["custom-entrypoint"]&lt;/span&gt;

&lt;span class="c"&gt;# Expose port 3000 and start Rails server&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 3000&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["bin/rails", "server", "--binding=0.0.0.0"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To limit the amount of data that needs to be sent to Docker, the &lt;code&gt;.dockerignore&lt;/code&gt; file defines an exclude list for files and directories that you typically do not need to include in the Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws/*
log/*
storage/*
tmp/*
public/assets/*
public/packs/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It is a best practice when dockerizing applications to use environment variables instead of configuration files. Luckily, Ruby on Rails comes with its &lt;a href="https://guides.rubyonrails.org/configuring.html" rel="noopener noreferrer"&gt;own configuration mechanism&lt;/a&gt; that supports environment variables out of the box. Check out the file &lt;code&gt;config/database.yml&lt;/code&gt; to see how environment variables are used to configure the database connection.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ...&lt;/span&gt;
&lt;span class="ss"&gt;production:
  &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;default&lt;/span&gt;
  &lt;span class="ss"&gt;host: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sx"&gt;%= ENV['DATABASE_HOST'] %&amp;gt;
  database: &amp;lt;%=&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'DATABASE_NAME'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;%&amp;gt;&lt;/span&gt;
  &lt;span class="ss"&gt;username: &lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sx"&gt;%= ENV['DATABASE_USER'] %&amp;gt;
  password: &amp;lt;%=&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'DATABASE_PASSWORD'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;%&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One more thing: it is necessary to run the database migration each time you roll out a new version of your application. The easiest way to do so is to execute &lt;code&gt;db:migrate&lt;/code&gt; each time you start the Docker container. To do so, the &lt;code&gt;Dockerfile&lt;/code&gt; adds a so-called &lt;code&gt;ENTRYPOINT&lt;/code&gt; which references the shell script &lt;code&gt;custom-entrypoint&lt;/code&gt;. Each time you start the Docker container, the entry point script gets executed as well.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;custom-entrypoint&lt;/code&gt; script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Waits until it is possible to establish a database connection.&lt;/li&gt;
&lt;li&gt;Executes the database migration by calling &lt;code&gt;db:migrate&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Starts the &lt;code&gt;puma&lt;/code&gt; web server.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;WAIT_FOR_IT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;wait-for-it.sh mysql:3306
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"running migrations"&lt;/span&gt;
bin/rails db:migrate

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"starting &lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! You are ready to build a Docker image bundling your Ruby on Rails application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; myapp:latest &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start a container based on the image ...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-p&lt;/span&gt; 3000:3000 myapp:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;... and open &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You have successfully dockerized your Ruby on Rails application. What is next?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Push the Docker image into a private registry (e.g., Amazon ECR).&lt;/li&gt;
&lt;li&gt;Deploy your application in the cloud (e.g., with ECS and Fargate).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Want to learn more about how to deploy your application on AWS? Check out our new book and online seminar &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/" rel="noopener noreferrer"&gt;Rapid Docker on AWS&lt;/a&gt;. Production-ready infrastructure templates and deployment pipeline included.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>rails</category>
    </item>
    <item>
      <title>How to dockerize your Node.js Express application for AWS Fargate?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Mon, 09 Dec 2019 16:51:42 +0000</pubDate>
      <link>https://dev.to/andreaswittig/how-to-dockerize-your-node-js-express-application-for-aws-fargate-1268</link>
      <guid>https://dev.to/andreaswittig/how-to-dockerize-your-node-js-express-application-for-aws-fargate-1268</guid>
      <description>&lt;p&gt;My first project with &lt;a href="https://nodejs.org"&gt;Node.js&lt;/a&gt; - an asynchronous event-driven JavaScript runtime, designed to build scalable network applications - was building an online trading platform in 2013. Since then, Node.js is one of my favorite technologies. I will show you how to dockerize your Node.js application based on &lt;a href="https://expressjs.com/"&gt;Express&lt;/a&gt; - a fast, unopinionated, minimalist web framework - and run it on AWS Fargate in this blog bost. I like AWS Fargate because running containers in the cloud were never easier.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This blog post is an excerpt from our book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/"&gt;Rapid Docker on AWS&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Read on to learn how to build a Docker image for a Node.js application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the Docker image
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;Dockerfile&lt;/code&gt; is based on the &lt;a href="https://hub.docker.com/_/node/"&gt;official Node.js Docker Image&lt;/a&gt;: &lt;code&gt;node:10.16.2-stretch&lt;/code&gt;. Express serves the static files (folders &lt;code&gt;img&lt;/code&gt; and &lt;code&gt;css&lt;/code&gt;) as well as the dynamic parts. The following details are required to understand the &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;envsubst&lt;/code&gt; is used to generate the config file from environment variables&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm ci --only=production&lt;/code&gt; installs the dependencies declared in &lt;code&gt;package.json&lt;/code&gt; (&lt;code&gt;package-lock.json&lt;/code&gt;, to be more precise)&lt;/li&gt;
&lt;li&gt;The Express application listens on port 8080&lt;/li&gt;
&lt;li&gt;The Express application's entry point is &lt;code&gt;server.js&lt;/code&gt; and can be started with &lt;code&gt;node server.js&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A simple &lt;code&gt;server.js&lt;/code&gt; file follows. Yours likely is more complicated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/css&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;static&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;css&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/img&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;static&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;img&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/health-check&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sendStatus&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;0.0.0.0&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; Most likely, your folder structure is different. Therefore, adapt the &lt;em&gt;Copy config files&lt;/em&gt; and &lt;em&gt;Copy Node.js files&lt;/em&gt; section in the following Dockerfile to your needs.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:10.16.2-stretch&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV production&lt;/span&gt;

&lt;span class="c"&gt;# Install envsubst&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; gettext
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; docker/custom-entrypoint /usr/local/bin/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;chmod &lt;/span&gt;u+x /usr/local/bin/custom-entrypoint
&lt;span class="k"&gt;ENTRYPOINT&lt;/span&gt;&lt;span class="s"&gt; ["custom-entrypoint"]&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; /usr/src/app/config/

&lt;span class="c"&gt;# Copy config files&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; config/*.tmp /tmp/config/&lt;/span&gt;

&lt;span class="c"&gt;# Install Node.js dependencies&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json /usr/src/app/&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci &lt;span class="nt"&gt;--only&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production

&lt;span class="c"&gt;# Copy Node.js files&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; css /usr/src/app/css&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; img /usr/src/app/img&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; views /usr/src/app/views&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; server.js /usr/src/app/&lt;/span&gt;

&lt;span class="c"&gt;# Expose port 8080 and start Node.js server&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "server.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The custom entrypoint is used to generate the configuration files from environment variables with &lt;code&gt;envsubst&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"generating configuration files"&lt;/span&gt;
&lt;span class="nv"&gt;FILES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/tmp/config/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;f &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$FILES&lt;/span&gt;
&lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nv"&gt;c&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="nv"&gt;$f&lt;/span&gt; .tmp&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"... &lt;/span&gt;&lt;span class="nv"&gt;$c&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  envsubst &amp;lt; &lt;span class="nv"&gt;$f&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /usr/src/app/config/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;c&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;done

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"starting &lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
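&lt;p&gt;To see what the entrypoint does, here is a minimal Python sketch of the same substitution. &lt;code&gt;string.Template&lt;/code&gt; happens to use the same &lt;code&gt;${VAR}&lt;/code&gt; placeholder syntax as &lt;code&gt;envsubst&lt;/code&gt;; the template content below is a made-up example, not taken from the article's config files.&lt;/p&gt;

```python
from string import Template

# A made-up config template, as it might live in /tmp/config/app.json.tmp
template = Template('{"database": {"host": "${DATABASE_HOST}", "name": "${DATABASE_NAME}"}}')

# In the container, these values come from the process environment;
# a plain dict stands in for os.environ here
env = {'DATABASE_HOST': 'mysql', 'DATABASE_NAME': 'app'}

# envsubst replaces each ${VAR} placeholder with the variable's value
config = template.substitute(env)
print(config)
```

&lt;p&gt;Running this prints the rendered config with &lt;code&gt;mysql&lt;/code&gt; and &lt;code&gt;app&lt;/code&gt; filled in, which is exactly what the entrypoint writes to &lt;code&gt;/usr/src/app/config/&lt;/code&gt; for each template.&lt;/p&gt;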



&lt;p&gt;Next, you will learn how to test your containers and application locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing locally
&lt;/h3&gt;

&lt;p&gt;Use &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; to run your application locally. The following &lt;code&gt;docker-compose.yml&lt;/code&gt; file configures Docker Compose and starts two containers: Node.js and a MySQL database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nodejs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;..'&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;docker/Dockerfile'&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8080:8080'&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt;
  &lt;span class="na"&gt;mysql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mysql:5.6'&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--default-authentication-plugin=mysql_native_password'&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3306:3306'&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The following command starts the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker/docker-compose.yml up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Docker Compose will spin up both containers: Node.js and MySQL. Point your browser to &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt; to check that your web application is up and running. The log files of both containers will show up in your terminal, which simplifies debugging a lot.&lt;/p&gt;

&lt;p&gt;After you have verified that your application is working correctly, cancel the running &lt;code&gt;docker-compose&lt;/code&gt; process by pressing &lt;code&gt;CTRL + C&lt;/code&gt;, and tear down the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker/docker-compose.yml down
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying on AWS
&lt;/h2&gt;

&lt;p&gt;You are now ready to deploy your application on AWS.&lt;/p&gt;

&lt;p&gt;(1) Build Docker image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; nodejs-express:latest &lt;span class="nt"&gt;-f&lt;/span&gt; docker/Dockerfile &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(2) Create ECR repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecr create-repository &lt;span class="nt"&gt;--repository-name&lt;/span&gt; nodejs-express &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'repository.repositoryUri'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(3) Login to Docker registry (ECR):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ecr get-login &lt;span class="nt"&gt;--no-include-email&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(4) Tag Docker image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker tag nodejs-express:latest &lt;span class="se"&gt;\&lt;/span&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/&lt;span class="se"&gt;\&lt;/span&gt;
nodejs-express:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(5) Push Docker image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push &lt;span class="se"&gt;\&lt;/span&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/&lt;span class="se"&gt;\&lt;/span&gt;
nodejs-express:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There is only one step missing: you need to spin up the cloud infrastructure.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use our &lt;a href="https://templates.cloudonaut.io/en/stable/fargate/#using-the-clusters-load-balancer-and-path-andor-host-based-routing"&gt;Free Templates for AWS CloudFormation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Use our &lt;a href="https://github.com/cfn-modules/docs/tree/master/examples/fargate-alb-proxy-pattern"&gt;cfn-modules&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Use the blueprint from our book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/"&gt;Rapid Docker on AWS&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>docker</category>
    </item>
    <item>
      <title>Checklist: Is your application ready for a container cluster?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Thu, 28 Nov 2019 21:16:41 +0000</pubDate>
      <link>https://dev.to/andreaswittig/checklist-is-your-application-ready-for-a-container-cluster-2a34</link>
      <guid>https://dev.to/andreaswittig/checklist-is-your-application-ready-for-a-container-cluster-2a34</guid>
      <description>&lt;p&gt;Is your application ready to run on a container cluster? Use this checklist to find out whether you are good to deploy your application on Amazon Elastic Container Service (ECS) and AWS Fargate or any other container cluster solution.&lt;/p&gt;

&lt;p&gt;Does your application fulfill the following six requirements?&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Stateless: avoid persisting data
&lt;/h2&gt;

&lt;p&gt;Your application (call it a microservice if you want to) is stateless. Answering a request or processing a job does not rely on reading data stored by previous requests or jobs. This applies to data in memory as well as on local disk.&lt;/p&gt;

&lt;p&gt;Instead, your application stores data in a SQL/NoSQL database (e.g., RDS or DynamoDB), an in-memory database (e.g., ElastiCache), or any other fully managed storage service (e.g., S3).&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Logging: write to stdout and stderr
&lt;/h2&gt;

&lt;p&gt;Your application writes log messages to standard output (&lt;code&gt;stdout&lt;/code&gt;) and standard error (&lt;code&gt;stderr&lt;/code&gt;). Do not write log messages to files (see &lt;em&gt;Stateless&lt;/em&gt;). Docker has built-in support to ship log messages from &lt;code&gt;stdout&lt;/code&gt; and &lt;code&gt;stderr&lt;/code&gt; to various centralized logging solutions (e.g., CloudWatch Logs). Check out &lt;a href="https://cloudonaut.io/a-simple-way-to-manage-log-messages-from-containers-cloudwatch-logs/"&gt;A simple way to manage log messages from containers: CloudWatch Logs&lt;/a&gt; to learn more.&lt;/p&gt;
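&lt;p&gt;For a Python application, writing logs to &lt;code&gt;stdout&lt;/code&gt; is a one-handler setup. This is a minimal sketch using the standard &lt;code&gt;logging&lt;/code&gt; module; the logger name &lt;code&gt;app&lt;/code&gt; is arbitrary:&lt;/p&gt;

```python
import logging
import sys

# Send log messages to stdout instead of a file; Docker captures the
# stream and forwards it to the configured logging driver
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

logger = logging.getLogger('app')
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info('answering request')  # appears on stdout, ready for CloudWatch Logs
```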

&lt;h2&gt;
  
  
  ✅ Configuration: use environment variables
&lt;/h2&gt;

&lt;p&gt;Your application reads configuration parameters from environment variables (e.g., the database endpoint or any other service endpoint). Do not use files to store the configuration for your application. &lt;/p&gt;

&lt;p&gt;Use a templating engine for configuration files if you are containerizing a legacy application. I prefer &lt;code&gt;envsubst&lt;/code&gt; to do so. Alternatively, you could have a look at &lt;a href="https://cloudonaut.io/dockerizing-legacy-applications-with-confd/"&gt;Dockerizing legacy applications with confd&lt;/a&gt;.&lt;/p&gt;
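&lt;p&gt;Reading configuration from environment variables needs no library at all. The following Python sketch mirrors the variable names used in the Docker Compose examples in this post; the &lt;code&gt;load_config&lt;/code&gt; helper is hypothetical:&lt;/p&gt;

```python
import os

# Read configuration from environment variables instead of files;
# the variable names match the docker-compose examples in this post
def load_config(environ=os.environ):
    return {
        'host': environ.get('DATABASE_HOST', 'localhost'),
        'name': environ.get('DATABASE_NAME', 'app'),
        'user': environ.get('DATABASE_USER', 'app'),
        'password': environ['DATABASE_PASSWORD'],  # no default: fail fast if missing
    }

# In production, the container orchestrator injects the variables,
# e.g., via the ECS task definition; a dict stands in here
config = load_config({'DATABASE_HOST': 'mysql', 'DATABASE_PASSWORD': 'secret'})
```

&lt;p&gt;Leaving the password without a default makes a misconfigured container fail at startup instead of connecting with wrong credentials later.&lt;/p&gt;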

&lt;h2&gt;
  
  
  ✅ Process: restrict to one process
&lt;/h2&gt;

&lt;p&gt;Your container starts exactly one main process. If your application consists of more than one process, split them up into multiple containers. For example, if you run NGINX and PHP-FPM, create two containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Remote access: disable SSH
&lt;/h2&gt;

&lt;p&gt;Your container does not start an SSH daemon. Do not install or enable SSH within a container (see &lt;em&gt;Process&lt;/em&gt;). Use &lt;code&gt;docker exec&lt;/code&gt; to open a shell in a running container if needed for debugging. On top of that, optimize your logging.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Shutdown: avoid canceling requests and jobs
&lt;/h2&gt;

&lt;p&gt;Your application handles &lt;code&gt;TERM&lt;/code&gt; signals and shuts down gracefully. Test whether the &lt;code&gt;TERM&lt;/code&gt; signal sent by &lt;code&gt;docker stop&lt;/code&gt; causes your application to stop accepting new requests and jobs and to terminate once the last request or job has completed. (A &lt;code&gt;KILL&lt;/code&gt; signal cannot be caught, so there is nothing to handle there.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does your Dockerfile contain &lt;code&gt;ENTRYPOINT&lt;/code&gt; or &lt;code&gt;CMD&lt;/code&gt; in shell form? Then your main process runs as a child of &lt;code&gt;/bin/sh&lt;/code&gt; and will not receive any &lt;code&gt;TERM&lt;/code&gt; signals.&lt;/li&gt;
&lt;li&gt;Are you starting your main process from a shell script? Make sure you are using &lt;code&gt;exec&lt;/code&gt; to do so.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When using Fargate, your application must be able to shut down gracefully within two minutes.&lt;/p&gt;
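&lt;p&gt;Graceful shutdown boils down to catching &lt;code&gt;SIGTERM&lt;/code&gt;, refusing new work, and letting in-flight work finish. Here is a minimal Python sketch; names like &lt;code&gt;accept_new_work&lt;/code&gt; are illustrative, not from a real framework:&lt;/p&gt;

```python
import os
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Flip a flag instead of exiting immediately, so in-flight
    # requests or jobs get a chance to finish first
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

def accept_new_work():
    return not shutting_down

# docker stop (and an ECS/Fargate task shutdown) sends SIGTERM; simulate it
os.kill(os.getpid(), signal.SIGTERM)
```

&lt;p&gt;After the signal arrives, the flag is set and the application refuses new work while existing work drains.&lt;/p&gt;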

&lt;h2&gt;
  
  
  🎉 Summary
&lt;/h2&gt;

&lt;p&gt;Checked all six requirements from the checklist? Congratulations! Your application is ready for ECS and Fargate or any other container cluster solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  📚 eBook and Online Seminar
&lt;/h2&gt;

&lt;p&gt;Do you want to learn more about how to ship your application with Docker? Our ebook and online seminar &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/"&gt;Rapid Docker on AWS&lt;/a&gt; teaches you how to dockerize PHP, Ruby on Rails, Python Django, Java Spring Boot, and Node.js Express applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
    </item>
    <item>
      <title>How to dockerize your Python Django application for AWS Fargate?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Tue, 19 Nov 2019 16:06:34 +0000</pubDate>
      <link>https://dev.to/andreaswittig/how-to-dockerize-your-python-django-application-for-aws-fargate-57da</link>
      <guid>https://dev.to/andreaswittig/how-to-dockerize-your-python-django-application-for-aws-fargate-57da</guid>
      <description>&lt;p&gt;The biggest game-changer for Docker on AWS was the announcement of AWS Fargate. Operating Docker containers could not be easier. With AWS Fargate, you launch Docker containers in the cloud without any need to manage virtual machines. &lt;a href="https://www.djangoproject.com/"&gt;Django&lt;/a&gt; is a popular &lt;a href="https://www.python.org/"&gt;Python&lt;/a&gt; web framework that encourages rapid development and clean, pragmatic design.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This blog post is an excerpt from our book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/"&gt;Rapid Docker on AWS&lt;/a&gt; and was first published on &lt;a href="https://cloudonaut.io/how-to-dockerize-your-python-django-application-for-aws-fargate/"&gt;cloudonaut.io&lt;/a&gt;. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The following post describes how you can dockerize your Python Django application and run it on AWS Fargate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building the Docker images
&lt;/h3&gt;

&lt;p&gt;Two &lt;a href="https://docs.docker.com/v17.09/engine/userguide/storagedriver/imagesandcontainers/"&gt;Docker images&lt;/a&gt; are needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NGINX to serve static files and proxy to Django&lt;/li&gt;
&lt;li&gt;Python Django application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, you will learn how to build the NGINX image. The &lt;code&gt;Dockerfile&lt;/code&gt; makes use of &lt;a href="https://docs.docker.com/develop/develop-images/multistage-build/"&gt;multi-stage builds&lt;/a&gt;. You can use more than one &lt;code&gt;FROM&lt;/code&gt; statement in your Dockerfile, as shown in the following example.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Static assets are generated in a Python stage

&lt;ol&gt;
&lt;li&gt;Based on the official &lt;a href="https://hub.docker.com/_/python"&gt;python&lt;/a&gt; image&lt;/li&gt;
&lt;li&gt;Using &lt;code&gt;pip3&lt;/code&gt; to install the Python dependencies&lt;/li&gt;
&lt;li&gt;Copying the app&lt;/li&gt;
&lt;li&gt;Generating the static assets with &lt;code&gt;python3 manage.py collectstatic&lt;/code&gt; (output goes to the &lt;code&gt;build&lt;/code&gt; folder)&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;The static assets are copied (using &lt;code&gt;COPY --from=build&lt;/code&gt;) into the NGINX stage which produces the final Docker image

&lt;ol&gt;
&lt;li&gt;Based on the official &lt;a href="https://hub.docker.com/_/nginx"&gt;nginx&lt;/a&gt; image&lt;/li&gt;
&lt;li&gt;Copying the static assets from the previous stage into &lt;code&gt;/var/www/html/static&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; Most likely, your folder structure is different. Therefore, adapt the &lt;em&gt;Copy Python files&lt;/em&gt; section in the following Dockerfile to your needs.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Static Assets
FROM python:3.7.4 AS build

WORKDIR /usr/src/app

# Install Python dependencies
COPY requirements.txt /usr/src/app/
RUN pip3 install -r requirements.txt

# Copy Python files
COPY example /usr/src/app/example # MODIFY THIS LINE: YOUR FOLDERS ARE DIFFERENT
COPY rapid /usr/src/app/rapid # MODIFY OR REMOVE THIS LINE: YOUR FOLDERS ARE DIFFERENT
COPY manage.py /usr/src/app/

# Build static assets
RUN SECRET_KEY=secret python3 manage.py collectstatic


# NGINX
FROM nginx:1.14

# Configure NGINX
COPY docker/nginx/default.conf /etc/nginx/conf.d/default.conf

# Copy static files
COPY --from=build /usr/src/app/build/ /var/www/html/static
RUN chown -R nginx:nginx /var/www/html
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The NGINX configuration file forwards requests to the Python container if the path does not start with &lt;code&gt;/static/&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen       80;
    server_name  localhost;
    root         /var/www/html;

    location ~ ^/static/ {
      # serve from NGINX
    }

    location / {
      # pass to Python gunicorn based on
      # http://docs.gunicorn.org/en/stable/deploy.html
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_set_header Host $http_host;
      # we don't want nginx trying to do something clever with
      # redirects, we set the Host: header above already.
      proxy_redirect off;
      proxy_pass http://127.0.0.1:8000;
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, you will learn how to create the Django image. The &lt;code&gt;Dockerfile&lt;/code&gt; is based on the official &lt;a href="https://hub.docker.com/_/python"&gt;python&lt;/a&gt; image with the following additions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/vishnubob/wait-for-it"&gt;wait-for-it&lt;/a&gt; is installed to wait for the MySQL database container if you test locally.&lt;/li&gt;
&lt;li&gt;Python dependencies are installed with &lt;code&gt;pip3 install&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime"&gt;custom entrypoint&lt;/a&gt; is defined to run commands before the Django app starts (read on to learn more).&lt;/li&gt;
&lt;li&gt; &lt;a href="https://gunicorn.org/"&gt;gunicorn&lt;/a&gt; runs the app.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; Most likely, your folder structure is different. Therefore, adapt the &lt;em&gt;Copy Python files&lt;/em&gt; section in the following Dockerfile to your needs.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.7.4

WORKDIR /usr/src/app

# Install wait-for-it
COPY docker/wait-for-it.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/wait-for-it.sh

# Install Python dependencies
COPY requirements.txt /usr/src/app/
RUN pip3 install -r requirements.txt

# Copy Python files
COPY example /usr/src/app/example # MODIFY THIS LINE: YOUR FOLDERS ARE DIFFERENT
COPY rapid /usr/src/app/rapid # MODIFY THIS LINE: YOUR FOLDERS ARE DIFFERENT
COPY manage.py /usr/src/app/

# Configure custom entrypoint to run migrations
COPY docker/python/custom-entrypoint /usr/local/bin/
RUN chmod u+x /usr/local/bin/custom-entrypoint
ENTRYPOINT ["custom-entrypoint"]

# Expose port 8000 and start Python server
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0", "-w", "2", "rapid.wsgi"]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; The &lt;code&gt;-w&lt;/code&gt; parameter of gunicorn defines the number of &lt;a href="http://docs.gunicorn.org/en/stable/settings.html#workers"&gt;workers&lt;/a&gt; and should be in the range of &lt;code&gt;2-4 x $(NUM_CORES)&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
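&lt;p&gt;As a quick sanity check, the &lt;code&gt;(2 x cores) + 1&lt;/code&gt; starting point suggested in gunicorn's documentation falls inside that range. A small Python sketch (illustrative, not from the book):&lt;/p&gt;

```python
import multiprocessing

# gunicorn's own docs suggest (2 x cores) + 1 workers as a starting
# point, which sits inside the 2-4 x cores range recommended above
cores = multiprocessing.cpu_count()
workers = 2 * cores + 1
print(f'-w {workers}')
```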

&lt;p&gt;The custom entrypoint is used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait for the MySQL container if the &lt;code&gt;WAIT_FOR_IT&lt;/code&gt; environment variable is set (used for testing locally only).&lt;/li&gt;
&lt;li&gt;Run the database migrations before the Django app is started.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -e

if [ -n "${WAIT_FOR_IT}" ]; then
  wait-for-it.sh mysql:3306
fi

echo "running migrations"
python3 manage.py migrate

echo "starting $@"
exec "$@"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;That's it. You have everything you need to build both the NGINX and the Django image. Next, you will learn how to test your containers and application locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing locally
&lt;/h3&gt;

&lt;p&gt;Use &lt;a href="https://docs.docker.com/compose/"&gt;Docker Compose&lt;/a&gt; to run your application locally. The following &lt;code&gt;docker-compose.yml&lt;/code&gt; file configures Docker Compose and starts three containers: NGINX, Django, and a MySQL database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3'
services:
  nginx:
    build:
      context: '..'
      dockerfile: 'docker/nginx/Dockerfile'
    depends_on:
    - python
    network_mode: 'service:python' # use network interface of python container to simulate awsvpc network mode
  python:
    build:
      context: '..'
      dockerfile: 'docker/python/Dockerfile'
    ports:
    - '8080:80' # forwards port of nginx container
    depends_on:
    - mysql
    environment:
      DATABASE_HOST: mysql
      DATABASE_NAME: app
      DATABASE_USER: app
      DATABASE_PASSWORD: secret
      SECRET_KEY: secret
      WAIT_FOR_IT: 'true'
  mysql:
    image: 'mysql:5.6'
    command: '--default-authentication-plugin=mysql_native_password'
    ports:
    - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: app
      MYSQL_USER: app
      MYSQL_PASSWORD: secret
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The following command starts the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker/docker-compose.yml up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Magically, Docker Compose will spin up three containers: NGINX, Django, and MySQL. Point your browser to &lt;a href="http://localhost:8080"&gt;http://localhost:8080&lt;/a&gt; to check that your web application is up and running. The log files of all containers will show up in your terminal, which simplifies debugging a lot.&lt;/p&gt;

&lt;p&gt;After you have verified that your application is working correctly, cancel the running &lt;code&gt;docker-compose&lt;/code&gt; process by pressing &lt;code&gt;CTRL + C&lt;/code&gt;, and tear down the containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker/docker-compose.yml down
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploying on AWS
&lt;/h3&gt;

&lt;p&gt;You are now ready to deploy your application on AWS.&lt;/p&gt;

&lt;p&gt;(1) Build Docker images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; python-django-nginx:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt; docker/nginx/Dockerfile &lt;span class="nb"&gt;.&lt;/span&gt;
docker build &lt;span class="nt"&gt;-t&lt;/span&gt; python-django-python:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt; docker/python/Dockerfile &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(2) Create ECR repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ecr create-repository &lt;span class="nt"&gt;--repository-name&lt;/span&gt; python-django-nginx &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'repository.repositoryUri'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text
aws ecr create-repository &lt;span class="nt"&gt;--repository-name&lt;/span&gt; python-django-python &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'repository.repositoryUri'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(3) Login to Docker registry (ECR):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="si"&gt;$(&lt;/span&gt;aws ecr get-login &lt;span class="nt"&gt;--no-include-email&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(4) Tag Docker images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker tag python-django-nginx:latest &lt;span class="se"&gt;\&lt;/span&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/&lt;span class="se"&gt;\&lt;/span&gt;
python-django-nginx:latest
docker tag python-django-python:latest &lt;span class="se"&gt;\&lt;/span&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/&lt;span class="se"&gt;\&lt;/span&gt;
python-django-python:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;(5) Push Docker images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push &lt;span class="se"&gt;\&lt;/span&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/&lt;span class="se"&gt;\&lt;/span&gt;
python-django-nginx:latest
docker push &lt;span class="se"&gt;\&lt;/span&gt;
111111111111.dkr.ecr.eu-west-1.amazonaws.com/&lt;span class="se"&gt;\&lt;/span&gt;
python-django-python:latest
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There is only one step missing: you need to spin up the cloud infrastructure.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use our &lt;a href="https://templates.cloudonaut.io/en/stable/fargate/#using-the-clusters-load-balancer-and-path-andor-host-based-routing"&gt;Free Templates for AWS CloudFormation&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Use our &lt;a href="https://github.com/cfn-modules/docs/tree/master/examples/fargate-alb-proxy-pattern"&gt;cfn-modules&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Use the blueprint from our book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/"&gt;Rapid Docker on AWS&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>python</category>
      <category>docker</category>
      <category>aws</category>
    </item>
    <item>
      <title>Rapid Docker on AWS: How to monitor the application?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Fri, 08 Nov 2019 19:33:51 +0000</pubDate>
      <link>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-monitor-the-application-4d7c</link>
      <guid>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-monitor-the-application-4d7c</guid>
      <description>&lt;p&gt;Gaining insights into your infrastructure and application is crucial for debugging issues and avoiding failures. The following figure shows that all parts of the infrastructure are sending metrics to a service called Amazon CloudWatch. The metrics include CPU utilization of your containers, the number of database connections, the number of HTTP 5XX errors caused by the load balancer, and many more. Alarms monitor relevant metrics and send notifications to SNS (Amazon Simple Notification Service). SNS delivers the alarm notifications to you via email or other destinations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--omthI93y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ial1dhx0ki47rcz8kn9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--omthI93y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ial1dhx0ki47rcz8kn9v.png" alt="Monitoring with CloudWatch: metrics and alarms"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following alarms make sure you are the first to know when your web application is not working as expected:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTPCodeELB5XXTooHighAlarm&lt;/strong&gt;: An HTTP request was answered with a 5XX status code (server-side error) by the load balancer, most likely because there was no healthy container running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TargetConnectionErrorCountTooHighAlarm&lt;/strong&gt;: The load balancer was not able to establish a connection to one of the containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HTTPCodeTarget5XXTooHighAlarm&lt;/strong&gt;: Your application responded with a 5XX status code (server-side error).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RejectedConnectionCountTooHighAlarm&lt;/strong&gt;: A client connection to the load balancer was rejected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CPUUtilizationTooHighAlarm&lt;/strong&gt;: The CPU utilization of your containers is too high.&lt;/p&gt;
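
&lt;p&gt;The templates used in this series create these alarms for you. As a rough sketch of what one of them looks like in plain CloudFormation (the resource names referenced in &lt;code&gt;Dimensions&lt;/code&gt; and &lt;code&gt;AlarmActions&lt;/code&gt; are hypothetical placeholders, not the book's actual resources):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CPUUtilizationTooHighAlarm:
  Type: 'AWS::CloudWatch::Alarm'
  Properties:
    AlarmDescription: 'Average CPU utilization over the last 10 minutes higher than 80%'
    Namespace: 'AWS/ECS'
    MetricName: CPUUtilization
    Statistic: Average
    Period: 600
    EvaluationPeriods: 1
    ComparisonOperator: GreaterThanThreshold
    Threshold: 80
    Dimensions:
    - Name: ClusterName
      Value: !Ref Cluster # hypothetical cluster resource
    - Name: ServiceName
      Value: !GetAtt 'Service.Name' # hypothetical service resource
    AlarmActions:
    - !Ref AlertingTopic # hypothetical SNS topic for notifications
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;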

&lt;p&gt;Our goal is to send alarm notifications only when the user experience is affected. However, you might want to get an overview of the state and utilization of your infrastructure from time to time. To do so, use the predefined CloudWatch dashboard illustrated in the screenshot below. The dashboard includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ALB Errors&lt;/strong&gt;: the 4XX (client-side) and 5XX (server-side) error rate of your web application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ALB Requests + Latency&lt;/strong&gt;: the number of requests, as well as the latency (95th and 99th percentiles).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ECS Utilization&lt;/strong&gt;: the CPU and memory utilization of your containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS IOPS + Capacity&lt;/strong&gt;: the capacity of your database, as well as the I/O throughput (IOPS).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS Latency&lt;/strong&gt;: the latency of &lt;code&gt;SELECT&lt;/code&gt;, &lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt;, and &lt;code&gt;DELETE&lt;/code&gt; statements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RDS Queries&lt;/strong&gt;: the number of &lt;code&gt;SELECT&lt;/code&gt;, &lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt;, and &lt;code&gt;DELETE&lt;/code&gt; statements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--To9jxHA6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/iwpkuon49ltv7lb1f12x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--To9jxHA6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/iwpkuon49ltv7lb1f12x.png" alt="Screenshot of the CloudWatch dashboard monitoring the whole infrastructure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Execute the following command from your working environment to fetch the dashboard's URL.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudformation describe-stacks &lt;span class="nt"&gt;--stack-name&lt;/span&gt; rapid-docker-on-aws &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Stacks[0].Outputs[?OutputKey==`DashboardUrl`].OutputValue'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In summary, metrics and alarms allow you to monitor your cloud infrastructure and web application. One important piece for debugging is missing: log messages. As shown in the following figure, whenever your application writes a message to standard output (stdout) and standard error (stderr), these messages are automatically pushed to a log group in CloudWatch Logs. The log group collects all the log messages and stores them for 14 days. On top of that, you can search and analyze the log messages for debugging purposes whenever needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wP7J34YK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/msfeafojeaq7doqa8rpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wP7J34YK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/msfeafojeaq7doqa8rpq.png" alt="Centralized log management with CloudWach Logs"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Execute the following command from your working environment to fetch the URL pointing to CloudWatch Logs Insights.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudformation describe-stacks &lt;span class="nt"&gt;--stack-name&lt;/span&gt; rapid-docker-on-aws &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Stacks[0].Outputs[?OutputKey==`AppLogsUrl`].OutputValue'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The following screenshot shows CloudWatch Logs Insights in action:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mCXsL6QH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/65qyhv8uf3u0oevyi4hy.pngg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mCXsL6QH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/65qyhv8uf3u0oevyi4hy.pngg" alt="Screenshot CloudWatch Logs in Action"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To search through your logs, you need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Modify the query.&lt;/li&gt;
&lt;li&gt;Select a time span. &lt;/li&gt;
&lt;li&gt;Hit the &lt;strong&gt;Run query&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt; Choose a time span as short as possible to avoid unnecessary costs when searching the log messages.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let's start with a simple query filtering all log messages from the proxy container (NGINX):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fields @timestamp, @message
| sort @timestamp desc
| filter @logStream like 'proxy/'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And a similar query to filter all log messages from the app container (e.g., PHP-FPM):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fields @timestamp, @message
| sort @timestamp desc
| filter @logStream like 'app/'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This more advanced query filters all log messages from the proxy container (e.g., NGINX) containing the search term “404” in the log message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fields @timestamp, @message
| sort @timestamp desc
| filter @logStream like 'proxy/' AND @message like '404'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Another example: this query filters all log messages from the app container (e.g., PHP-FPM) containing the search term “error” in the log message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fields @timestamp, @message
| sort @timestamp desc
| filter @logStream like 'app/' AND @message like 'error'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Want to learn more about the query syntax? Check out the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html"&gt;CloudWatch Logs Insights Query Syntax&lt;/a&gt;.&lt;/p&gt;
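
&lt;p&gt;For example, Logs Insights can also aggregate log messages. The following query (a sketch based on the documented syntax) counts the messages containing the term “error” per five-minute interval:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fields @timestamp, @message
| filter @message like 'error'
| stats count(*) as errorCount by bin(5m)
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;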

&lt;p&gt;In summary, with CloudWatch metrics, alarms, and logs, monitoring and debugging your web application is simple.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Do you have any questions? Please leave them in the comments. This is the last post of the series. Follow me so you don't miss future posts.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2dQQw8Qh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jtqtjyonsdo06chhq4cg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2dQQw8Qh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jtqtjyonsdo06chhq4cg.jpg" alt="Rapid Docker on AWS"&gt;&lt;/a&gt;&lt;br&gt;
This post is an excerpt from our new book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/?utm_source=devto&amp;amp;utm_medium=post&amp;amp;utm_campaign=rapid-docker-on-aws-series"&gt;Rapid Docker on AWS&lt;/a&gt;. The book includes code samples for PHP, Ruby (Rails), Python (Django), Java (Spring Boot), and Node.js (Express).&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>Rapid Docker on AWS: How to set up the AWS infrastructure?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Thu, 07 Nov 2019 19:17:51 +0000</pubDate>
      <link>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-set-up-the-aws-infrastructure-211f</link>
      <guid>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-set-up-the-aws-infrastructure-211f</guid>
      <description>&lt;p&gt;The secret sauce allowing you to set up your cloud infrastructure within minutes, which consists of around 100 resources, is Infrastructure as Code. It removes the need for clicking through the AWS Management Console manually.&lt;/p&gt;

&lt;p&gt;In our opinion, Infrastructure as Code works best when following the declarative approach. Define the target state in source code and use a tool which calculates and executes the needed steps to transform the current state into the target state. Our tool of choice is AWS CloudFormation.&lt;/p&gt;

&lt;p&gt;The figure below explains the key concepts of CloudFormation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You create a JSON or YAML file describing the target state of your infrastructure, which CloudFormation calls a &lt;strong&gt;template&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;You upload your template and ask CloudFormation to create, update, or delete your infrastructure. The state of your infrastructure is stored within a so-called &lt;strong&gt;stack&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;CloudFormation transforms the current state of your stack into the target state defined in your template. To do so, CloudFormation creates, updates, or deletes &lt;strong&gt;resources&lt;/strong&gt; as needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p-yKufrg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dtd91syrpw8floqxfw6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p-yKufrg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/dtd91syrpw8floqxfw6i.png" alt="Infrastructure as Code with CloudFormation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, install the CloudFormation modules. To do so, create a &lt;code&gt;package.json&lt;/code&gt; file with the following content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "rapid-docker-on-aws-rapid-docker-on-aws",
  "version": "1.0.0",
  "description": "Rapid Docker on AWS: Demo",
  "author": "Michael Wittig &amp;lt;michael@widdix.de&amp;gt;",
  "license": "Apache-2.0",
  "private": true,
  "dependencies": {
    "@cfn-modules/alb": "1.0.4",
    "@cfn-modules/alb-listener": "1.0.0",
    "@cfn-modules/alerting": "1.2.0",
    "@cfn-modules/client-sg": "1.0.0",
    "@cfn-modules/cloudwatch-dashboard": "1.2.0",
    "@cfn-modules/ecs-cluster": "1.1.0",
    "@cfn-modules/ecs-alb-target": "1.2.0",
    "@cfn-modules/fargate-service": "2.5.0",
    "@cfn-modules/kms-key": "1.2.0",
    "@cfn-modules/rds-aurora-serverless": "1.5.0",
    "@cfn-modules/secret": "1.3.0",
    "@cfn-modules/vpc": "1.1.1"
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, create the CloudFormation template. To do so, create a &lt;code&gt;template.yml&lt;/code&gt; file with the following content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Rapid Docker on AWS: Demo application'
Parameters:
  AppImage:
    Description: 'The Docker image to use for the app container.'
    Type: String
    Default: 'cloudonaut/docker-on-aws-rapid-docker-on-aws-php-fpm:latest'
  ProxyImage:
    Description: 'Docker image to use for the proxy container.'
    Type: String
    Default: 'cloudonaut/docker-on-aws-rapid-docker-on-aws-nginx:latest'
  AdminEmail:
    Description: 'Optional email address of the administrator.'
    Type: String
    Default: ''
Conditions:
  HasAdminEmail: !Not [!Equals ['', !Ref AdminEmail]]
Resources:
  Alerting:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        Email: !If [HasAdminEmail, !Ref AdminEmail, !Ref 'AWS::NoValue']
      TemplateURL: './node_modules/@cfn-modules/alerting/module.yml'
  Dashboard:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        DashboardName: !Ref 'AWS::StackName'
        AlbModule: !GetAtt 'Alb.Outputs.StackName'
        FargateServiceModule: !GetAtt 'AppService.Outputs.StackName'
        RdsAuroraServerlessModule: !GetAtt 'AuroraServerlessCluster.Outputs.StackName'
      TemplateURL: './node_modules/@cfn-modules/cloudwatch-dashboard/module.yml'
  Key:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        AlertingModule: !GetAtt 'Alerting.Outputs.StackName'
      TemplateURL: './node_modules/@cfn-modules/kms-key/module.yml'
  DatabaseSecret:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        KmsKeyModule: !GetAtt 'Key.Outputs.StackName'
        Description: !Sub '${AWS::StackName}: database password'
      TemplateURL: './node_modules/@cfn-modules/secret/module.yml'
  Vpc:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        AlertingModule: !GetAtt 'Alerting.Outputs.StackName'
        NatGateways: 'false' # reduce costs
      TemplateURL: './node_modules/@cfn-modules/vpc/module.yml'
  AuroraServerlessClientSg:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        VpcModule: !GetAtt 'Vpc.Outputs.StackName'
      TemplateURL: './node_modules/@cfn-modules/client-sg/module.yml'
  AuroraServerlessCluster:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        VpcModule: !GetAtt 'Vpc.Outputs.StackName'
        ClientSgModule: !GetAtt 'AuroraServerlessClientSg.Outputs.StackName'
        KmsKeyModule: !GetAtt 'Key.Outputs.StackName'
        AlertingModule: !GetAtt 'Alerting.Outputs.StackName'
        SecretModule: !GetAtt 'DatabaseSecret.Outputs.StackName'
        DBName: test
        DBMasterUsername: master
        SecondsUntilAutoPause: '900'
        MinCapacity: '1'
        MaxCapacity: '2'
        EngineVersion: '5.6.10a'
      TemplateURL: './node_modules/@cfn-modules/rds-aurora-serverless/module.yml'
  Alb:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        VpcModule: !GetAtt 'Vpc.Outputs.StackName'
        AlertingModule: !GetAtt 'Alerting.Outputs.StackName'
      TemplateURL: './node_modules/@cfn-modules/alb/module.yml'
  AlbListenerHttp:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        AlbModule: !GetAtt 'Alb.Outputs.StackName'
        Port: '80'
      TemplateURL: './node_modules/@cfn-modules/alb-listener/module.yml'
  AppTarget:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        AlbModule: !GetAtt 'Alb.Outputs.StackName'
        AlbListenerModule: !GetAtt 'AlbListenerHttp.Outputs.StackName'
        VpcModule: !GetAtt 'Vpc.Outputs.StackName'
        AlertingModule: !GetAtt 'Alerting.Outputs.StackName'
        Priority: '2'
        HealthCheckPath: '/health-check.php'
      TemplateURL: './node_modules/@cfn-modules/ecs-alb-target/module.yml'
  Cluster:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      TemplateURL: './node_modules/@cfn-modules/ecs-cluster/module.yml'
  AppService:
    Type: 'AWS::CloudFormation::Stack'
    Properties:
      Parameters:
        VpcModule: !GetAtt 'Vpc.Outputs.StackName'
        ClusterModule: !GetAtt 'Cluster.Outputs.StackName'
        TargetModule: !GetAtt 'AppTarget.Outputs.StackName'
        AlertingModule: !GetAtt 'Alerting.Outputs.StackName'
        ClientSgModule1: !GetAtt 'AuroraServerlessClientSg.Outputs.StackName'
        ProxyImage: !Ref ProxyImage
        ProxyPort: '80'
        AppImage: !Ref AppImage
        AppPort: '9000'
        AppEnvironment1Key: 'DATABASE_PASSWORD'
        AppEnvironment1SecretModule: !GetAtt 'DatabaseSecret.Outputs.StackName'
        AppEnvironment2Key: 'DATABASE_HOST'
        AppEnvironment2Value: !GetAtt 'AuroraServerlessCluster.Outputs.DnsName'
        AppEnvironment3Key: 'DATABASE_NAME'
        AppEnvironment3Value: 'test'
        AppEnvironment4Key: 'DATABASE_USER'
        AppEnvironment4Value: 'master'
        Cpu: '0.25'
        Memory: '0.5'
        DesiredCount: '2'
        MaxCapacity: '4'
        MinCapacity: '2'
        LogsRetentionInDays: '14'
      TemplateURL: './node_modules/@cfn-modules/fargate-service/module.yml'
Outputs:
  Url:
    Value: !Sub 'http://${Alb.Outputs.DnsName}/'
  AlbDnsName:
    Value: !GetAtt 'Alb.Outputs.DnsName'
  DashboardUrl:
    Value: !Sub 'https://${AWS::Region}.console.aws.amazon.com/cloudwatch/home?region=${AWS::Region}#dashboards:name=${AWS::StackName}'
  AppLogsUrl:
    Value: !Sub "https://${AWS::Region}.console.aws.amazon.com/cloudwatch/home?region=${AWS::Region}#logs-insights:queryDetail=~(source~(~'${AppService.Outputs.LogGroupName}))"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Finally, you will deploy the demo web application into your AWS account.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Warning&lt;/strong&gt; Costs arise when you launch the demo infrastructure and application. You can expect costs of around $4.50 per day.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In your temporary working environment (replace &lt;code&gt;$NICKNAME&lt;/code&gt;), execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i
aws s3 mb s3://rapid-docker-on-aws-&lt;span class="nv"&gt;$NICKNAME&lt;/span&gt;
aws cloudformation package &lt;span class="nt"&gt;--template-file&lt;/span&gt; template.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--s3-bucket&lt;/span&gt; rapid-docker-on-aws-&lt;span class="nv"&gt;$NICKNAME&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output-template-file&lt;/span&gt; .template.yml
aws cloudformation deploy &lt;span class="nt"&gt;--template-file&lt;/span&gt; .template.yml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--stack-name&lt;/span&gt; rapid-docker-on-aws &lt;span class="nt"&gt;--capabilities&lt;/span&gt; CAPABILITY_IAM
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The last command takes around 20 minutes to complete with an output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - rapid-docker-on-aws
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The AWS resources are created. The following command fetches the URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudformation describe-stacks &lt;span class="nt"&gt;--stack-name&lt;/span&gt; rapid-docker-on-aws &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'Stacks[0].Outputs[?OutputKey==`Url`].OutputValue'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--output&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The output looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://demo-LoadB-[...].elb.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Open the URL in your web browser. Remember all the benefits of the Rapid Docker on AWS architecture. Isn't it amazing how easily you can launch this architecture?&lt;/p&gt;

&lt;p&gt;It's time to delete the running demo web application to avoid future costs. Execute the following commands (replace &lt;code&gt;$NICKNAME&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudformation delete-stack &lt;span class="nt"&gt;--stack-name&lt;/span&gt; rapid-docker-on-aws
aws s3 rb &lt;span class="nt"&gt;--force&lt;/span&gt; s3://rapid-docker-on-aws-&lt;span class="nv"&gt;$NICKNAME&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Do you have any questions? Please leave them in the comments. This is the 4th post of a series. Follow me to make sure you are not missing the following posts.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EBtadm23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/edqm6fcngiasifd8tmj7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EBtadm23--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/edqm6fcngiasifd8tmj7.jpg" alt="Rapid Docker on AWS"&gt;&lt;/a&gt;&lt;br&gt;
This post is an excerpt from our new book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/?utm_source=devto&amp;amp;utm_medium=post&amp;amp;utm_campaign=rapid-docker-on-aws-series"&gt;Rapid Docker on AWS&lt;/a&gt;. The book includes code samples for PHP, Ruby (Rails), Python (Django), Java (Spring Boot), and Node.js (Express).&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Rapid Docker on AWS: How to test locally?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Wed, 06 Nov 2019 20:03:15 +0000</pubDate>
      <link>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-test-locally-4c76</link>
      <guid>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-test-locally-4c76</guid>
      <description>&lt;p&gt;As promised one of the benefits of Docker is that you can test your application locally. To do so, you need to spin up three containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NGINX&lt;/li&gt;
&lt;li&gt;PHP-FPM&lt;/li&gt;
&lt;li&gt;MySQL (which will be replaced by Amazon Aurora when deploying on AWS)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Theoretically, you could create the needed containers manually with &lt;code&gt;docker run&lt;/code&gt;, but doing so is cumbersome. Instead, you will use Docker Compose, a tool for running multi-container applications.&lt;/p&gt;
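
&lt;p&gt;For comparison, doing the same by hand would look roughly like this (the image and network names are illustrative, not the ones used in this series):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# create a shared network so the containers can reach each other
docker network create app-net
# start the database, the app, and the proxy one by one
docker run -d --network app-net --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
docker run -d --network app-net --name php my-php-fpm-image
docker run -d --network app-net --name nginx -p 8080:80 my-nginx-image
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;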

&lt;p&gt;All you need to do is to create a Docker Compose file &lt;code&gt;docker-compose.yml&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; You probably need to add your own environment variables for the PHP container (see inline comments).&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3'&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;nginx&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;..'&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;docker/nginx/Dockerfile'&lt;/span&gt; &lt;span class="c1"&gt;# build your NGINX image&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;php&lt;/span&gt;
    &lt;span class="na"&gt;network_mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;service:php'&lt;/span&gt; &lt;span class="c1"&gt;# use network interface of php container to simulate awsvpc network mode&lt;/span&gt;
  &lt;span class="na"&gt;php&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;..'&lt;/span&gt;
      &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;docker/php-fpm/Dockerfile'&lt;/span&gt; &lt;span class="c1"&gt;# build PHP image&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;8080:80'&lt;/span&gt; &lt;span class="c1"&gt;# forwards port of nginx container&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# add your own variables used by envsubst here&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_NAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
      &lt;span class="na"&gt;DATABASE_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt;
  &lt;span class="na"&gt;mysql&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;mysql:5.6'&lt;/span&gt; &lt;span class="c1"&gt;# matches the Amazon Aurora MySQL version&lt;/span&gt;
    &lt;span class="s"&gt;command:'--default-authentication-plugin=mysql_native_password'&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3306:3306'&lt;/span&gt; &lt;span class="c1"&gt;# forwards port 3306 to 3306 on your machine&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_ROOT_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt; &lt;span class="c1"&gt;# password for root user&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt; &lt;span class="c1"&gt;# create database with name app&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt; &lt;span class="c1"&gt;# user app is granted full access to db app&lt;/span&gt;
      &lt;span class="na"&gt;MYSQL_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;secret&lt;/span&gt; &lt;span class="c1"&gt;# the password for user app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;From within your temporary working environment execute the following command to spin up the containers on your machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker/docker-compose.yml up
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Magically, Docker Compose will spin up three containers. Point your browser to &lt;a href="http://localhost:8080/index.php"&gt;http://localhost:8080/index.php&lt;/a&gt; to check that your web application is up and running. The log files of all containers will show up in your terminal which simplifies debugging a lot.&lt;/p&gt;
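
&lt;p&gt;If the combined output gets too noisy, you can follow the logs of a single service only, for example the PHP container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose -f docker/docker-compose.yml logs -f php
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;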

&lt;p&gt;If you need to make a change to your setup, cancel the running &lt;code&gt;docker-compose&lt;/code&gt; process by pressing &lt;code&gt;CTRL + C&lt;/code&gt;, and restart it with the following command to make sure the images get rebuilt.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker/docker-compose.yml up &lt;span class="nt"&gt;--build&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Use your favorite MySQL client and connect to &lt;code&gt;localhost:3306&lt;/code&gt; with username &lt;code&gt;root&lt;/code&gt; and password &lt;code&gt;secret&lt;/code&gt; if you need to create a schema or restore a database dump.&lt;/p&gt;

&lt;p&gt;After you have verified that your application is working correctly, cancel the running &lt;code&gt;docker-compose&lt;/code&gt; process by pressing &lt;code&gt;CTRL + C&lt;/code&gt;, and tear down the containers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose &lt;span class="nt"&gt;-f&lt;/span&gt; docker/docker-compose.yml down
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It's time to deploy your web application on AWS. You will learn how to do so in the next part of this series.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Do you have any questions? Please leave them in the comments. This is the 3rd post of a series. Follow me to make sure you are not missing the following posts.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gWppb8zh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/i7qhk56yovop6xw5rht4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gWppb8zh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/i7qhk56yovop6xw5rht4.jpg" alt="Rapid Docker on AWS"&gt;&lt;/a&gt;&lt;br&gt;
This post is an excerpt from our new book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/?utm_source=devto&amp;amp;utm_medium=post&amp;amp;utm_campaign=rapid-docker-on-aws-series"&gt;Rapid Docker on AWS&lt;/a&gt;. The book includes code samples for PHP, Ruby (Rails), Python (Django), Java (Spring Boot), and Node.js (Express).&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Rapid Docker on AWS: How to build a Docker image?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Tue, 05 Nov 2019 19:18:08 +0000</pubDate>
      <link>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-build-a-docker-image-8ha</link>
      <guid>https://dev.to/andreaswittig/rapid-docker-on-aws-how-to-build-a-docker-image-8ha</guid>
      <description>&lt;p&gt;Shipping software is a challenge. Endless installation instructions explain in detail how to install and configure an application as well as all its dependencies. But in practice, following installation instructions ends with frustration: the required version of PHP is not available to install from the repository, the configuration file is located in another directory, the installation instructions do not cover the operating system you need or want to use, etc.&lt;/p&gt;

&lt;p&gt;And it gets worse: to be able to scale on demand and recover from failures on AWS, we need to automate the installation and configuration of our application and its runtime environment. Implementing the required automation with the wrong tools is very time-consuming and error-prone.&lt;/p&gt;

&lt;p&gt;But what if you could bundle your application with all its dependencies and run it on any machine: your MacBook, your Windows PC, your test server, your on-premises server, and your cloud infrastructure? That's what Docker is all about.&lt;/p&gt;

&lt;p&gt;In short: Docker is a toolkit to deliver software.&lt;/p&gt;

&lt;p&gt;Or as Jeff Nickoloff explains in Docker in Action (Manning): "Docker is a command-line program, a background daemon, and a set of remote services that take a logistical approach to solving common software problems and simplifying your experience installing, running, publishing, and removing software. It accomplishes this using a UNIX technology called containers."&lt;/p&gt;

&lt;p&gt;You will learn how to use Docker to ship your web application to AWS in this part of the series. Most importantly, we will show you how to build Docker images to bundle your web application.&lt;/p&gt;

&lt;p&gt;Before we proceed, I want to highlight a few essential best practices when working with Docker containers.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use one process per container.&lt;/strong&gt; If your application consists of more than one process, split them up into multiple containers. For example, if you run NGINX and PHP-FPM, create two containers. The PHP example in the following section shows how to do that.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do not use SSH.&lt;/strong&gt; Do not install or enable SSH within a container. Use &lt;code&gt;docker exec&lt;/code&gt; to open a shell in a running container if needed for debugging. Or even better, optimize logging for debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use environment variables instead of configuration files.&lt;/strong&gt; Do not use files to store the configuration for your application. Use environment variables instead. We will cover how to do so in the following section.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use standard output (stdout) and standard error (stderr) for logging.&lt;/strong&gt; Do not write log files. Docker has built-in support to ship log messages from STDOUT and STDERR to various centralized logging solutions.&lt;/li&gt;
&lt;/ol&gt;
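&lt;p&gt;As a minimal sketch of best practices 3 and 4, the following shell snippet reads its configuration from environment variables (falling back to defaults for local development) and logs to stdout. The variable names are hypothetical examples, not part of the sample application.&lt;/p&gt;

```shell
# Read configuration from environment variables (hypothetical names),
# falling back to defaults for local development.
DATABASE_HOST="${DATABASE_HOST:-localhost}"
DATABASE_PORT="${DATABASE_PORT:-3306}"

# Log to stdout instead of writing a log file, so Docker can ship the
# messages to a centralized logging solution.
echo "connecting to ${DATABASE_HOST}:${DATABASE_PORT}"
```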

&lt;p&gt;With Docker containers, the differences between platforms like your developer machine, your test system, and your production system are hidden under an abstraction layer. But how do you distribute your application with all its dependencies to multiple platforms? By creating a Docker image. A Docker image is similar to a virtual machine image, such as an Amazon Machine Image (AMI) that is used to launch an EC2 instance. The Docker image contains an operating system, the runtime environment, 3rd party libraries, and your application. The following figure illustrates how you can fetch an image and start a container on any platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ExWwtH7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/63tnr101ke80f6rd4889.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ExWwtH7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/63tnr101ke80f6rd4889.jpg" alt="Distribute your application among multiple machines with an Docker image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But how do you create a Docker image for your web application? By creating a script that builds the image step by step: a so-called &lt;code&gt;Dockerfile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the following example, you will learn how to dockerize your web application. The example uses a simple web application written in PHP without using any frameworks.&lt;/p&gt;

&lt;p&gt;The sample application uses the following project structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;conf&lt;/code&gt; the configuration directory (contains .ini files)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;css&lt;/code&gt; the stylesheet directory (contains .css files)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;img&lt;/code&gt; the images directory (contains .gif files)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lib&lt;/code&gt; the libraries directory (contains .php files)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;index.php&lt;/code&gt; the main file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A typical setup to serve a PHP application consists of:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A web server (for example NGINX)&lt;/li&gt;
&lt;li&gt;A PHP process (for example PHP-FPM)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Therefore, we need to run two processes: NGINX and PHP-FPM. However, a container should run exactly one process at a time, which means we need to build two images. The following figure shows the two containers: the NGINX container receives the request from the client and forwards PHP requests to the PHP-FPM container. Both containers run on the same host to avoid additional network latency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3xKV6-4c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/20srfk3h9ku7udg7mnoo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3xKV6-4c--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/20srfk3h9ku7udg7mnoo.jpg" alt="Proxy pattern: NGINX and PHP-FPM containers running on the same machine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start with creating a Docker image for NGINX. NGINX serves the static files. In our example application, the static files are stored in the &lt;code&gt;css&lt;/code&gt; and &lt;code&gt;img&lt;/code&gt; directories. On top of that, NGINX forwards PHP requests to PHP-FPM.&lt;/p&gt;

&lt;p&gt;The following snippet shows the configuration file &lt;code&gt;docker/nginx/default.conf&lt;/code&gt; which tells NGINX to serve static files from &lt;code&gt;/var/www/html&lt;/code&gt; and forward PHP requests to PHP-FPM. You do not need to make any changes to the NGINX configuration when dockerizing your web application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen       80;
    server_name  localhost;
    root         /var/www/html;
    # pass the PHP scripts to FastCGI server
    # listening on 127.0.0.1:9000
    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_param  SCRIPT_NAME      $fastcgi_script_name;
        include        fastcgi_params;
    }
    # redirect / to index.php
    location ~ ^\/$ {
        return 301 $scheme://$http_host/index.php$is_args$args;
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, you need a &lt;code&gt;Dockerfile&lt;/code&gt; for building your own NGINX image. The following snippet shows the file &lt;code&gt;docker/nginx/Dockerfile&lt;/code&gt; that we created for our sample application.&lt;/p&gt;

&lt;p&gt;The first instruction defines the base image. When creating an image, we don't have to start from scratch. We can use a pre-built base image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx:1.14
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The next instruction copies the NGINX configuration file from your disk to the Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY docker/nginx/default.conf /etc/nginx/conf.d/default.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The next two instructions copy the &lt;code&gt;css&lt;/code&gt; and &lt;code&gt;img&lt;/code&gt; directories from your local disk to the NGINX root directory &lt;code&gt;/var/www/html/&lt;/code&gt; in the Docker image.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; Depending on where you are storing the static files of your web application, you'll need to modify these instructions accordingly. Make sure you are copying all static files to &lt;code&gt;/var/www/html/&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY css /var/www/html/css
COPY img /var/www/html/img
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The next instruction runs the &lt;code&gt;chown&lt;/code&gt; command to transfer ownership of all static files to the &lt;code&gt;nginx&lt;/code&gt; user. The &lt;code&gt;nginx&lt;/code&gt; user is part of the base image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN chown -R nginx:nginx /var/www/html
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Dockerfile&lt;/code&gt; is ready. It's time to build your first image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; php-basic-nginx:latest &lt;span class="nt"&gt;-f&lt;/span&gt; docker/nginx/Dockerfile &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The following table explains the parameters of the &lt;code&gt;docker build&lt;/code&gt; command to build a new image:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Explanation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;-t php-basic-nginx:latest&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Add a tag (name) to the new image.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;-f docker/nginx/Dockerfile&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Location of the Dockerfile.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;.&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Use the current directory as the build context (all paths, e.g. in COPY, are relative to the build context).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The next step is building the PHP-FPM image. The following snippets show the file &lt;code&gt;docker/php-fpm/Dockerfile&lt;/code&gt; used by our sample application.&lt;/p&gt;

&lt;p&gt;The first instruction defines the base image. We are using a base image with PHP 7.3 pre-installed for our sample application.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; The following versions are supported as well: 7.2 and 7.1.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM php:7.3-fpm-stretch
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, enable the PHP &lt;a href="https://hub.docker.com/_/php#configuration"&gt;configuration optimized for production workloads&lt;/a&gt; and install the PHP extensions &lt;code&gt;pdo&lt;/code&gt; and &lt;code&gt;pdo_mysql&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; Feel free to install &lt;a href="https://hub.docker.com/_/php#how-to-install-more-php-extensions"&gt;additional extensions&lt;/a&gt; if needed.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install -j$(nproc) pdo pdo_mysql
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The next commands install &lt;code&gt;envsubst&lt;/code&gt; (part of the &lt;code&gt;gettext&lt;/code&gt; package) and register a custom entrypoint script. You will learn more about how to create configuration files with &lt;code&gt;envsubst&lt;/code&gt; in a second. No need to change anything here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN apt-get update &amp;amp;&amp;amp; apt-get install -y gettext
COPY docker/php-fpm/custom-entrypoint /usr/local/bin/
RUN chmod u+x /usr/local/bin/custom-entrypoint
ENTRYPOINT ["custom-entrypoint"]
RUN mkdir /var/www/html/conf/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Afterwards, the &lt;code&gt;Dockerfile&lt;/code&gt; copies all configuration templates (files ending in &lt;code&gt;.tmp&lt;/code&gt;) from &lt;code&gt;conf/&lt;/code&gt; into the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY conf/*.tmp /tmp/conf/
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The following instructions copy the PHP files from your disk to the root directory of PHP-FPM.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; When dockerizing your own application you will most likely need to modify these COPY instructions to make sure all PHP files are copied to the image. Also, as you add new PHP files to your application, make sure to add them to this list as well.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY *.php /var/www/html/
COPY lib /var/www/html/lib
RUN chown -R www-data:www-data /var/www/html
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The last instruction tells Docker to start the PHP-FPM process by default. You do not need to change anything here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CMD ["php-fpm"]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Dockerfile&lt;/code&gt; is ready. But we have skipped one challenge: the configuration files. We assume you are storing the configuration for your web application within files. When using Docker, and especially Fargate, managing configuration files is cumbersome. Instead, you should use environment variables to configure your application.&lt;/p&gt;

&lt;p&gt;You have two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modify your application to read all configuration from environment variables.&lt;/li&gt;
&lt;li&gt;Do not modify your application, but use environment variables to create configuration files with &lt;code&gt;envsubst&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the following steps you will learn how to write configuration files on container startup based on environment variables with &lt;code&gt;envsubst&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Our sample application uses a configuration file to configure the database connection: &lt;code&gt;conf/app.ini&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[database]
host=mysql
name=test
user=app
password=secret
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Our goal is to use environment variables for each property. To do so with &lt;code&gt;envsubst&lt;/code&gt;, we need to create a configuration template. The following snippet shows the template &lt;code&gt;conf/app.ini.tmp&lt;/code&gt; for our &lt;code&gt;conf/app.ini&lt;/code&gt; configuration file.&lt;/p&gt;

&lt;p&gt;In the template, each value has been replaced with a placeholder. For example, &lt;code&gt;${DATABASE_HOST}&lt;/code&gt; references the environment variable &lt;code&gt;DATABASE_HOST&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[database]
host="${DATABASE_HOST}"
name="${DATABASE_NAME}"
user="${DATABASE_USER}"
password="${DATABASE_PASSWORD}"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;By default, the &lt;code&gt;Dockerfile&lt;/code&gt; adds all configuration template files (&lt;code&gt;.tmp&lt;/code&gt;) stored in &lt;code&gt;conf&lt;/code&gt; to the image. Each time a container starts, it executes the script located in &lt;code&gt;docker/php-fpm/custom-entrypoint&lt;/code&gt;. The following snippet shows how the &lt;code&gt;custom-entrypoint&lt;/code&gt; script creates the configuration files based on all your configuration templates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "generating configuration files"
FILES=/tmp/conf/*
for f in $FILES
do
  c=$(basename $f .tmp)
  echo "... $c"
  envsubst &amp;lt; $f &amp;gt; /var/www/html/conf/${c}
done
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Customization&lt;/strong&gt; To add your own configuration files, create a configuration template by replacing all dynamic values with placeholders (e.g. &lt;code&gt;$ENV_NAME&lt;/code&gt;). Store the configuration template in the &lt;code&gt;conf&lt;/code&gt; folder. That's it. During runtime the container will create a configuration file based on the template.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, use the &lt;code&gt;docker build&lt;/code&gt; command to create your PHP-FPM image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; php-basic-php-fpm:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt; docker/php-fpm/Dockerfile &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;You have successfully built two Docker images. You will learn how to test your Docker images locally in the 3rd part of the series.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Do you have any questions? Please leave them in the comments. This is the 2nd post of a series. Follow me to make sure you are not missing the following posts.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6iVdE_gZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3yizldjifssugkp45joj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6iVdE_gZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/3yizldjifssugkp45joj.jpg" alt="Rapid Docker on AWS"&gt;&lt;/a&gt; This post is an excerpt from our new book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/?utm_source=devto&amp;amp;utm_medium=post&amp;amp;utm_campaign=rapid-docker-on-aws-series"&gt;Rapid Docker on AWS&lt;/a&gt;. The book includes code samples for PHP, Ruby (Rails), Python (Django), Java (Spring Boot), and Node.js (Express).&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Rapid Docker on AWS: Which option to choose?</title>
      <dc:creator>Andreas Wittig</dc:creator>
      <pubDate>Mon, 04 Nov 2019 20:12:09 +0000</pubDate>
      <link>https://dev.to/andreaswittig/rapid-docker-on-aws-which-option-to-choose-90j</link>
      <guid>https://dev.to/andreaswittig/rapid-docker-on-aws-which-option-to-choose-90j</guid>
      <description>&lt;p&gt;AWS offers more than 100 services. So much choice is both a curse and a blessing. For example, there are at least four options to run containers on AWS: the Elastic Container Service (ECS), the Elastic Kubernetes Services (EKS), operate your own cluster on top of EC2, and Elastic Beanstalk. Futhermore, you can choose between a lot of database services: RDS, Aurora, DynamoDB, DocumentDB, Elasticsearch, Neptune, Redshift, ElastiCache, to name a few.&lt;/p&gt;

&lt;p&gt;But before you make your choice, you should think about the goals for your cloud-native architecture. &lt;/p&gt;

&lt;p&gt;When designing the Rapid Docker on AWS architecture, we had one goal in mind: a production-ready infrastructure for everyone. We went through a long, iterative design process. We were inspired by numerous customer projects where customers asked us to minimize operational effort or shorten time to market. For years, we have been waiting for new AWS features that would help us achieve our ultimate goal. Now, we are happy to share our solution, which has the following benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Low operational effort
&lt;/h2&gt;

&lt;p&gt;All AWS building blocks used in the architecture described in this book are fully managed services that cause minimal operational effort. You don't have to issue TLS certificates or install a database engine. You only have to care about your Docker image with your configuration, source code, and dependencies. AWS even takes care of your data by performing daily backups. AWS also covers security aspects, such as data encryption in transit (using HTTPS/TLS) and at rest. In a nutshell, you don't need highly specialized people on your team to operate the infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ready for the future
&lt;/h2&gt;

&lt;p&gt;Architectures are not static anymore. An &lt;a href="https://cloudonaut.io/evolutionary-serverless-architecture/"&gt;evolutionary architecture&lt;/a&gt; evolves with new requirements. When making use of infrastructure as code (IaC), you can deploy changes to your architecture with confidence. IaC enables you to test changes to your architecture before you apply them to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost-effective
&lt;/h2&gt;

&lt;p&gt;In an ideal world, you only pay when your system is processing a user request. You never want to pay for idle resources. The architecture described in this book minimizes your costs for idle resources: the database can stop when it is not in use, and you run only as many containers as needed to provide the required performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Highly available
&lt;/h2&gt;

&lt;p&gt;Sooner or later, a single point of failure causes an outage. This Rapid Docker on AWS architecture has no single point of failure and uses redundant components everywhere, from the load balancer to the multiple containers running on different hosts to the redundant database cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scalable
&lt;/h2&gt;

&lt;p&gt;Your infrastructure is only as powerful as its weakest component. It is not sufficient to scale your web servers to handle a growing number of requests. You have to scale your database as well. That's why the architecture described in this book scales the whole stack: the load balancer, the Docker containers running web servers, and the database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview of the architecture
&lt;/h2&gt;

&lt;p&gt;The main building blocks of the Rapid Docker on AWS architecture are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A load balancer: &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html"&gt;Application Load Balancer&lt;/a&gt; (ALB)&lt;/li&gt;
&lt;li&gt;Docker containers: &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html"&gt;Amazon ECS&lt;/a&gt; &amp;amp; &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/userguide/Welcome.html"&gt;AWS Fargate&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A database: &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html"&gt;Amazon Aurora Serverless&lt;/a&gt; (MySQL-compatible edition)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following figure shows the high-level architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0hI41spO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bsaycbghpvtsz3p3cqa3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0hI41spO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/bsaycbghpvtsz3p3cqa3.png" alt="High-level architecture of the web application"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you will learn more about the main building blocks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Load Balancer
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html"&gt;Application Load Balancer&lt;/a&gt; (ALB) distributes HTTP and HTTPS requests among a group of hosts. In our case, the hosts are Docker containers running a web server. An ALB is highly available and scalable by default, and it serves as the entry point into your infrastructure. The traffic flows from the client to your containers as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The client makes an HTTP(S) request to the load balancer.&lt;/li&gt;
&lt;li&gt;The load balancer receives the request and distributes it to one of the containers running a web server.&lt;/li&gt;
&lt;li&gt;The containerized web server receives the request, does some processing, and sends a response.&lt;/li&gt;
&lt;li&gt;The load balancer receives the response and forwards it to the client.&lt;/li&gt;
&lt;li&gt;The client receives the response.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/elasticloadbalancing/pricing/"&gt;ALB pricing&lt;/a&gt; is based on two dimensions. First, you pay $0.0225 per hour (or partial hour) no matter how much traffic it serves. Second, you pay for the higher number of new connections, active connections, or traffic.&lt;/p&gt;

&lt;p&gt;Continue on to learn about the Docker aspect of the architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon ECS &amp;amp; AWS Fargate
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html"&gt;Amazon ECS&lt;/a&gt; is a fault-tolerant and scalable Docker container management service that is responsible for managing the &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_life_cycle.html"&gt;lifecycle&lt;/a&gt; of a container. It supports two different modes or "launch types," one of which is &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/userguide/Welcome.html"&gt;AWS Fargate&lt;/a&gt;, which eliminates the need for managing a cluster of EC2 instances yourself. Instead, the cluster is managed by AWS, allowing you to focus on the containers rather than on the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;ECS is free of charge. &lt;a href="https://aws.amazon.com/fargate/pricing/"&gt;Fargate pricing&lt;/a&gt; is based on the allocated vCPU and memory over time. A vCPU costs $0.04048 per hour and a GB of memory costs $0.004445 per hour. If you provision 0.25 vCPU and 0.5 GB memory for your container, it costs you $8.89 per month (30 days). To avoid a single point of failure, you should always run two containers at a time, totaling $17.78 per month. Pricing is per second with a minimum of 1 minute.&lt;/p&gt;
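&lt;p&gt;As a sanity check, you can reproduce the estimate above with a quick calculation (30 days = 720 hours):&lt;/p&gt;

```shell
# Fargate cost for one container with 0.25 vCPU and 0.5 GB of memory
# over 30 days (720 hours), using the prices quoted above.
total=$(awk 'BEGIN { printf "%.2f", (0.25 * 0.04048 + 0.5 * 0.004445) * 720 }')
echo "one container:  \$$total per month"   # 8.89
two=$(awk -v t="$total" 'BEGIN { printf "%.2f", t * 2 }')
echo "two containers: \$$two per month"     # 17.78
```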

&lt;h2&gt;
  
  
  Amazon Aurora Serverless
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless.html"&gt;Amazon Aurora Serverless&lt;/a&gt; is a highly available, scalable, MySQL 5.6-compatible relational database cluster. Depending on the load (CPU utilization and the number of connections), the cluster grows or shrinks within configurable boundaries. Aurora Serverless can scale down to zero when there are no connections for a configurable timespan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/rds/aurora/pricing/"&gt;Aurora Serverless pricing&lt;/a&gt; is based on three dimensions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compute and memory capacity measured in Aurora Capacity Units (ACUs). One ACU has approximately 2 GB of memory with an undocumented share of CPU and networking. You pay $0.06 per ACU per hour with a minimum of one ACU per cluster. If the cluster is stopped, you consume zero ACUs.&lt;/li&gt;
&lt;li&gt;Storage: $0.10 per GB per month.&lt;/li&gt;
&lt;li&gt;I/O rate: $0.20 per 1 million requests.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If your database is of modest size (e.g., 50 GB), you would pay $5 per month for storage. On average, two ACUs should be sufficient for a modest database, which costs another $86.40 per month if your database runs 24/7. However, many databases are not needed 24/7 and can be stopped when not in use, for example dev systems or internal systems that are only used during working hours.&lt;/p&gt;
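&lt;p&gt;The same back-of-the-envelope math for the Aurora Serverless example above:&lt;/p&gt;

```shell
# 50 GB of storage at $0.10 per GB-month, plus two ACUs at $0.06 per
# ACU-hour running 24/7 for a 30-day month (720 hours).
storage=$(awk 'BEGIN { printf "%.2f", 50 * 0.10 }')
compute=$(awk 'BEGIN { printf "%.2f", 2 * 0.06 * 720 }')
echo "storage: \$$storage, compute: \$$compute per month"
```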

&lt;p&gt;You have learned about a simple and powerful architecture for your web application consisting of an Application Load Balancer (ALB), ECS, Fargate, and Aurora Serverless.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Do you have any questions? Please leave them in the comments. This is the 1st post of a series. Follow me to make sure you are not missing the following posts.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--13aPreV9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sldpijrq2g9xli7jtc5v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--13aPreV9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/sldpijrq2g9xli7jtc5v.jpg" alt="Rapid Docker on AWS"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;This post is an excerpt from our new book &lt;a href="https://cloudonaut.io/rapid-docker-on-aws/?utm_source=devto&amp;amp;utm_medium=post&amp;amp;utm_campaign=rapid-docker-on-aws-series"&gt;Rapid Docker on AWS&lt;/a&gt;. The book includes code samples for PHP, Ruby (Rails), Python (Django), Java (Spring Boot), and Node.js (Express).&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>php</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
