<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gregory Ledray</title>
    <description>The latest articles on DEV Community by Gregory Ledray (@gregoryledray).</description>
    <link>https://dev.to/gregoryledray</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F227322%2F00af5a50-da79-49d1-8f32-bb991ca36845.png</url>
      <title>DEV Community: Gregory Ledray</title>
      <link>https://dev.to/gregoryledray</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gregoryledray"/>
    <language>en</language>
    <item>
      <title>KISS with Docker Compose</title>
      <dc:creator>Gregory Ledray</dc:creator>
      <pubDate>Fri, 26 Jul 2024 17:48:51 +0000</pubDate>
      <link>https://dev.to/gregoryledray/kiss-with-docker-compose-b7m</link>
      <guid>https://dev.to/gregoryledray/kiss-with-docker-compose-b7m</guid>
      <description>&lt;p&gt;KISS - Keep It Simple Stupid - is a design mantra we live by in software engineering. But too often we fail to apply it to AWS infrastructure. I struggle with this because I want to do the right thing. The “right thing” is to follow AWS guidance. Guidance which sells you on replacing open source software with AWS native versions. Yet the more AWS services you use, the harder it is to manage all aspects of those services - necessitating yet more services. It becomes necessary to undergo extensive training to properly understand these services and measure risk, and the complexity of important tasks like disaster recovery balloons.&lt;/p&gt;

&lt;p&gt;KISS with Docker Compose takes a different approach. It takes &lt;del&gt;Bikini Bottom and pushes it somewhere else&lt;/del&gt; a Docker Compose app from your development machine and copies it into an EC2 instance with ports 80 and 443 open (this, and everything else, is configurable). It then pulls your Docker images and starts Docker Compose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The result? Stupid simple infrastructure which works the same way in the cloud as it does on your development machine.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ve created an &lt;a href="https://constructs.dev/packages/kiss-docker-compose" rel="noopener noreferrer"&gt;AWS CDK package&lt;/a&gt; to help you try this approach and published it to GitHub: &lt;a href="https://github.com/Gregory-Ledray/kiss-docker-compose-on-aws" rel="noopener noreferrer"&gt;https://github.com/Gregory-Ledray/kiss-docker-compose-on-aws&lt;/a&gt;. A tutorial on how it works is below, under “Try KISS Docker Compose”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Evaluating KISS Docker Compose
&lt;/h2&gt;

&lt;p&gt;Evaluation needs to be split into three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Why Bother?&lt;/li&gt;
&lt;li&gt;Can Docker Compose meet your requirements on an infinitely powerful and reliable machine?&lt;/li&gt;
&lt;li&gt;Is an EC2 Instance sufficient to handle the Docker Compose app’s needs?&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Why Bother?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cheap: All code runs on one small EC2. Compare this to running one small EC2 for your code + a small RDS instance for your database + a NAT gateway. It’s also far less complex and labor-intensive to set up and maintain.&lt;/li&gt;
&lt;li&gt;Simple: It runs the same way on your machine as it runs in the cloud.&lt;/li&gt;
&lt;li&gt;Fast: it works by default, with no extra configuration required.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Can Docker Compose Meet Your Requirements?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/compose/production/#running-compose-on-a-single-server" rel="noopener noreferrer"&gt;Docker supports running Compose on a single server for production&lt;/a&gt;, so &lt;a href="https://docs.docker.com/compose/" rel="noopener noreferrer"&gt;probably yes&lt;/a&gt;. &lt;a href="https://docs.docker.com/compose/production/" rel="noopener noreferrer"&gt;It is production ready&lt;/a&gt; with support for health checks, restarting, secrets, environment configuration, networking, GPUs, etc. If the EC2 instance is infinitely powerful and reliable then Docker Compose is probably enough for you. But therein lies the rub.&lt;/p&gt;
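The production features mentioned above are plain docker-compose.yml configuration. A minimal sketch of such a file (the service names, image names, port, and health endpoint are illustrative assumptions, not from this post):

```yaml
services:
  api:
    image: registry.example.com/api:latest  # illustrative image name
    restart: unless-stopped    # restart on crash and on daemon restart
    ports:
      - "80:8080"
    healthcheck:               # flag the container unhealthy if the app stops answering
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    volumes:
      - db-data:/var/lib/postgresql/data

secrets:
  db_password:
    file: ./db_password.txt    # Compose file-based secret

volumes:
  db-data:
```

The same file runs unchanged with `docker-compose up` on a laptop or on the EC2 instance, which is the whole point.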

&lt;h3&gt;
  
  
  Is an EC2 Instance Sufficient?
&lt;/h3&gt;

&lt;p&gt;Here are the issues, along with some good reasons not to use this approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Is the 99.99% uptime of an EC2 instance sufficient?&lt;/li&gt;
&lt;li&gt;Is your application going to exceed the RAM of the EC2 instance? We are able to provide swap space to help, but that only goes so far.&lt;/li&gt;
&lt;li&gt;Is your application going to exceed the file storage of the EC2 instance? By default, we only launch the EC2 instance with 8GB of storage, although this is fully configurable.&lt;/li&gt;
&lt;li&gt;Is your application going to run out of CPU? This configuration doesn’t allow for auto scaling.&lt;/li&gt;
&lt;li&gt;Disaster recovery requires backups. Doing this safely means stopping the instance, creating a backup of the file storage, and then restarting the instance, which means downtime.&lt;/li&gt;
&lt;li&gt;Deployments are simple but cause downtime. Whenever you restart the instance it re-pulls all the Docker Images used by Docker Compose, so to deploy new images you first push those images to the repository and then restart the VM.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When thinking about drawbacks, though, we need to consider how solvable each issue is and discount any which the alternatives can’t solve either:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;No single instance of ECS or Fargate will have more than 99.99% uptime because they are all built on EC2, so to get more than 99.99% uptime you’ll need to run several instances. If your application containers are stateful, that requires code changes.&lt;/li&gt;
&lt;li&gt;If your app exceeds RAM limits + Swap in ECS you’ll also see a node failure. Fargate doesn’t even have swap space. If this is a concern, then (1) seems relevant again.&lt;/li&gt;
&lt;li&gt;Exceeding storage is a risk with every approach, but in some recommended architectures some or all of the risk is offloaded to another AWS service like RDS. If you expect to see large growth in storage it’s better to run at least your DB on RDS.&lt;/li&gt;
&lt;li&gt;Running out of CPU is a mostly mitigated risk when you use ECS or Fargate.&lt;/li&gt;
&lt;li&gt;Creating backups of databases is relatively easy and safe with RDS.&lt;/li&gt;
&lt;li&gt;Services like ECS and Fargate deploy changes without downtime.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;KISS Docker Compose works for production systems which tolerate 2-3 minutes of downtime during code deployments and system backups. You must also size your EC2 instance so it can handle all traffic spikes.&lt;/p&gt;

&lt;p&gt;If you have a fairly predictable or small workload, have load tested it, and can tolerate downtime, then it’s a great low-cost way to run a full stack application on AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try KISS Docker Compose
&lt;/h2&gt;

&lt;p&gt;First, follow &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html" rel="noopener noreferrer"&gt;“Getting Started with the AWS CDK”&lt;/a&gt; to be able to create CDK applications. Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir kissdc
cd kissdc
cdk init app --language typescript
npm i aws-cdk-lib
npm i kiss-docker-compose
curl -O https://raw.githubusercontent.com/Gregory-Ledray/kiss-docker-compose-on-aws/main/test/docker-compose.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now open lib/kissdc-stack.ts and copy-paste:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as fs from 'fs';
import { KissDockerCompose } from 'kiss-docker-compose';

export class KissdcStack extends cdk.Stack {
    constructor(scope: Construct, id: string, props?: cdk.StackProps) {
        super(scope, id, props);

        const dockerComposeFileAsString = fs.readFileSync('./docker-compose.yml', 'utf8');

        const kissDockerCompose = new KissDockerCompose(this, 'kiss-docker-compose', { dockerComposeFileAsString });

        // Exporting the value so you can find it easily
        new cdk.CfnOutput(this, 'Kiss-Docker-Compose-public-ip', {
            value: kissDockerCompose.ec2Instance?.instancePublicDnsName ?? '',
            exportName: 'Kiss-Docker-Compose-public-ip',
        });
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx cdk deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After deployment finishes, you will see the export Kiss-Docker-Compose-public-ip, as below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ✅  KissdcStack

✨  Deployment time: 268.26s

Outputs:
KissdcStack.KissDockerComposepublicip = ec2-98-80-9-243.compute-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify the deployment worked, curl that URL and verify you get a response from NGINX. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl ec2-98-80-9-243.compute-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, destroy the created infrastructure so you don’t rack up AWS charges:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx cdk destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Evolving AWS Infrastructure
&lt;/h2&gt;

&lt;p&gt;This is Part 1 of a series on Evolving AWS Infrastructure. &lt;a href="https://dev.to/gregoryledray"&gt;Follow me on dev.to&lt;/a&gt; to catch the next posts. This particular post is a follow-up to &lt;a href="https://dev.to/gregoryledray/apply-kiss-to-infrastructure-3j6d"&gt;https://dev.to/gregoryledray/apply-kiss-to-infrastructure-3j6d&lt;/a&gt;, which outlined this approach but didn’t provide the code needed to make it easy to use.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>docker</category>
      <category>awscdk</category>
    </item>
    <item>
      <title>Apply KISS to Infrastructure</title>
      <dc:creator>Gregory Ledray</dc:creator>
      <pubDate>Thu, 14 Oct 2021 22:50:55 +0000</pubDate>
      <link>https://dev.to/gregoryledray/apply-kiss-to-infrastructure-3j6d</link>
      <guid>https://dev.to/gregoryledray/apply-kiss-to-infrastructure-3j6d</guid>
      <description>&lt;p&gt;I took a system which used to have ~120 resources, deleted half of it, and replaced those resources with one VM. The new system isn’t just simpler - &lt;strong&gt;it performs better than the old one.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Oftentimes when I read infrastructure guides for deploying a workload on AWS the guide is walking me through how to use and coordinate a half dozen AWS services to do something simple, like run an API. Don’t do that. Do what I did. Apply KISS (Keep It Simple Stupid) to your infrastructure to meet your infrastructure needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Service: IntelligentRx.com
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://intelligentrx.com"&gt;https://intelligentrx.com&lt;/a&gt; gives people discount coupons for prescriptions. It is written in Vue.js and .NET and runs on AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Goals
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;It must run the code, preferably the same way it works on localhost&lt;/li&gt;
&lt;li&gt;It must have a high uptime&lt;/li&gt;
&lt;li&gt;It must tell me when it is not working&lt;/li&gt;
&lt;li&gt;It should be easy to set up&lt;/li&gt;
&lt;li&gt;It should be easy to maintain&lt;/li&gt;
&lt;li&gt;It should be inexpensive&lt;/li&gt;
&lt;li&gt;It should be secure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When I started, I inherited a complex CloudFormation infrastructure which I expanded horizontally as I added more services. A few weeks ago the site’s CloudFormation infrastructure looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F739jyu02qxsazjbl1ec0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F739jyu02qxsazjbl1ec0.png" alt="Old Infrastructure Diagram" width="616" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Today, it looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjso8i5o82m60hc38uv7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjso8i5o82m60hc38uv7.png" alt="New Infrastructure Diagram" width="724" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It now contains half of the resources it used to contain.&lt;/p&gt;

&lt;p&gt;Let’s compare these two setups based on the original goals.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Runs my code: Both. Runs my code the same way it runs on localhost? Only the new setup.&lt;/li&gt;
&lt;li&gt;High uptime: The new setup may have a higher uptime O.O&lt;/li&gt;
&lt;li&gt;Uptime notifications: Both.&lt;/li&gt;
&lt;li&gt;Setup cost: The original setup was 1,200 lines of infrastructure-as-code yml. The second setup is 680 lines of infrastructure-as-code yml. The original setup contained about 120 Resources managed by CloudFormation. The second setup contains 66 Resources. Not only is the new system easier to set up because it has fewer moving parts, but the parts which are used are less complex.&lt;/li&gt;
&lt;li&gt;Maintenance: Fewer parts =&amp;gt; less maintenance and less knowledge needed for maintenance. The deployment time has gone from ~15 minutes to ~5 minutes for each deployment. Naturally, this shaves 10 minutes off my dev cycle every time I make an infrastructure change.&lt;/li&gt;
&lt;li&gt;Cost: Fewer parts =&amp;gt; cheaper.&lt;/li&gt;
&lt;li&gt;Security: Fewer parts =&amp;gt; fewer opportunities to mess up configuration &amp;amp; less attack surface =&amp;gt; more secure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The second setup is the clear winner.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Changed?
&lt;/h3&gt;

&lt;p&gt;The old setup deployed into ECS (Elastic Container Service [runs container images on EC2 for you]). The new setup builds a new container image, restarts the website which is hosted on a t2.large EC2 VM, and then the VM automatically pulls the new image and runs it with Docker Compose.&lt;/p&gt;

&lt;p&gt;Yep, that’s right: I took a system which used to have ~70 CloudFormation resources between ECS and the networking resources and Redis and put it all on one chonky VM. Right about now some of you are thinking I’m crazy. Let’s look at the most obvious questions / potential problems with this setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  What About Scalability?
&lt;/h3&gt;

&lt;p&gt;My biggest fear with this change was that the website would not be able to handle a sudden influx of requests. ECS is more complex, but it also scales. With a single VM on EC2 “scaling” involves manually changing the VM size. That’s not practical if the site has a sudden burst of traffic.&lt;/p&gt;

&lt;p&gt;To test if the new system can scale, I used &lt;a href="https://loader.io/"&gt;https://loader.io/&lt;/a&gt; to try to figure out how much of a load that one VM can handle. What happens if it receives 100 requests per second? 400 requests per second? As it turns out, this one chonky VM can handle 100 requests per second, every second, for 30 seconds without a sweat:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes5ca1orl5s6ic3n577o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes5ca1orl5s6ic3n577o.png" alt="100 Requests Per Second Graph" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It can handle 400 requests per second too, although the average response time jumps to 1.1 seconds per request:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvnn6skq9mlgeuitechi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvnn6skq9mlgeuitechi.png" alt="400 Requests Per Second Graph" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Regardless, being able to service 400 requests per second is more than enough for this website.&lt;/p&gt;

&lt;h3&gt;
  
  
  But What About Other Stress Tests?
&lt;/h3&gt;

&lt;p&gt;Other stress tests can make the website fail, but the parts which fail are 3rd-party services, not this VM or the AWS infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  So You Traded Away Reliability and Scalability For Simplicity?
&lt;/h3&gt;

&lt;p&gt;Yes. With EC2, the website goes down for 2 - 3 minutes whenever I deploy a change. If the underlying VM fails, the VM is automatically restarted and the website comes up shortly thereafter.&lt;/p&gt;

&lt;h3&gt;
  
  
  [Edit] When you restart the VM, isn't there downtime?
&lt;/h3&gt;

&lt;p&gt;Yes. As commenters pointed out, this does cause prod downtime while the VM restarts, pulls the new image, etc (1 - 3 minutes in my experience). I compensate in my frontend code with (pseudocode):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (prodAPICallFailed) {
  fetchWithRetries('staging.example.com/api/endpoint');
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This only works if your frontend is hosted somewhere else, like in CloudFront. Readers of this post, please recognize that this approach has some asterisks. There are good performance, observability, and reliability reasons for why I still have 600 lines of yml in my CloudFormation template. For example, I use CloudFront to distribute the frontend's code, stored in S3, via a CDN. However, these other features and components have been easy to maintain and deploy quickly, unlike the dozens of components required to set up load balancing to auto-scaling ECS, which this post's approach replaces.&lt;/p&gt;
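A concrete version of that pseudocode might look like the sketch below. This is an illustration, not the site’s real code: the host names, retry count, and delay are assumptions, and the fetch function is injected so the failover logic can be exercised without real network calls.

```typescript
// Sketch of the prod-to-staging failover described above.
// Host names, retry count, and delay are illustrative assumptions.
type Fetcher = (url: string) => Promise<string>;

async function fetchWithFailover(
  path: string,
  fetcher: Fetcher,
  retries = 3,
  delayMs = 1000,
): Promise<string> {
  try {
    // Try the production API first.
    return await fetcher(`https://example.com${path}`);
  } catch {
    // Prod is likely mid-restart (1-3 minutes); fall back to staging.
    for (let attempt = 1; attempt <= retries; attempt++) {
      try {
        return await fetcher(`https://staging.example.com${path}`);
      } catch {
        if (attempt === retries) {
          throw new Error(`all retries failed for ${path}`);
        }
        // Wait before the next staging attempt.
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
    throw new Error('unreachable');
  }
}
```

Injecting the fetcher keeps the failover path unit-testable, which matters since this code only runs during the rare window when prod is down.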

&lt;h3&gt;
  
  
  Does This Really Save Time?
&lt;/h3&gt;

&lt;p&gt;YES! Ways I have saved time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I went from 15 minutes to test infrastructure changes down to 5 minutes because deployments are faster.&lt;/li&gt;
&lt;li&gt;By using &lt;a href="https://docs.docker.com/compose/gettingstarted/"&gt;Docker Compose&lt;/a&gt;, I know that code which works on my machine works the same way on this VM, giving me a lot of peace of mind. I also gained the ability to make and test some configuration changes locally.&lt;/li&gt;
&lt;li&gt;I’m resolving infrastructure bugs faster because the key part of the infrastructure - where I run the code - is now using widely used open source technologies like Docker and Systemd. Googling for answers to my problems has become trivial instead of a headache.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;When I make a code change and push, the CI/CD pipeline kicks off:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The continuous deployment pipeline builds one or more new Docker images and pushes them to ECR.&lt;/li&gt;
&lt;li&gt;CloudFormation is updated (CloudFormation still holds most of the infrastructure)&lt;/li&gt;
&lt;li&gt;The VM which runs the code is restarted.&lt;/li&gt;
&lt;li&gt;When booting up, the VM runs a systemd daemon.&lt;/li&gt;
&lt;li&gt;The daemon’s pre-start script pulls the updated container image into the VM.&lt;/li&gt;
&lt;li&gt;The daemon’s command runs &lt;code&gt;docker-compose up&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;As the application generates logs, they are sent to CloudWatch.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These are the steps you can take to develop such a system yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create or refactor an app to use Docker Compose.&lt;/li&gt;
&lt;li&gt;Test with &lt;code&gt;docker-compose up&lt;/code&gt; on localhost.&lt;/li&gt;
&lt;li&gt;In your Continuous Deployment stage of your CI/CD pipeline, build a Docker image and push it into an ECR repository called “dev” or “prod” or whatever you call your environment.&lt;/li&gt;
&lt;li&gt;In your deployment script, run these commands or something similar AFTER you have pushed the new image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSTANCE_ID=$(aws ec2 describe-instances --filters Name=tag:Name,Values=host-$CURRENT_ENVIRONMENT --region us-east-1 --output text --query 'Reservations[*].Instances[*].InstanceId')

echo $INSTANCE_ID

aws ec2 reboot-instances --instance-ids $INSTANCE_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In AWS, create a VM.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloudaffaire.com/how-to-install-docker-in-aws-ec2-instance/"&gt;Install Docker on the VM&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.docker.com/config/containers/logging/awslogs/"&gt;Set up logging so that your Docker Compose logs flow to CloudWatch&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://stackoverflow.com/questions/43671482/how-to-run-docker-compose-up-d-at-system-start-up"&gt;Create a Systemd file which runs Docker Compose at startup&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Edit that Systemd file to include &lt;code&gt;ExecStartPre=/home/ec2-user/docker-compose-setup.sh&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create /home/ec2-user/docker-compose-setup.sh with these contents so that when your VM restarts &amp;amp; the systemd file is run, the first thing it does is pull an up to date copy of your Docker image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh
# Authenticate the Docker daemon to ECR so it can pull private images
/usr/local/bin/aws ecr get-login-password --region us-east-1 | /usr/bin/docker login --username AWS --password-stdin [your-aws-account-number].dkr.ecr.us-east-1.amazonaws.com
# Note: The production image (AMI) uses the following line:
# /usr/bin/docker pull [your-aws-account-number].dkr.ecr.us-east-1.amazonaws.com/prod:latest
/usr/bin/docker pull [your-aws-account-number].dkr.ecr.us-east-1.amazonaws.com/dev:latest
# Stop any containers left over from before the reboot; systemd's start
# command then brings everything up with the freshly pulled image
/usr/local/bin/docker-compose down
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
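For reference, the systemd unit those bullets describe might look roughly like this. It is a sketch: the unit description, docker-compose binary path, and working directory are assumptions, so adjust them to your VM.

```ini
[Unit]
Description=Docker Compose application
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Directory containing docker-compose.yml
WorkingDirectory=/home/ec2-user
# Pull fresh images and stop stale containers before starting
ExecStartPre=/home/ec2-user/docker-compose-setup.sh
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```

Enable the unit with `systemctl enable` so it runs on every reboot, which is what makes "restart the VM to deploy" work.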



&lt;ul&gt;
&lt;li&gt;&lt;code&gt;chmod +x /home/ec2-user/docker-compose-setup.sh&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Restart the VM.&lt;/li&gt;
&lt;li&gt;Your application should now be running on the VM. You can test whether the website is publicly accessible via the VM’s ephemeral IP address. You can also attach an Elastic IP address and then point your domain at that IP address.&lt;/li&gt;
&lt;li&gt;Create an image of this VM. If you are using CloudFormation or another service, reference this AMI (Amazon Machine Image) and you can integrate this VM into your CloudFormation / other service template.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Recap
&lt;/h2&gt;

&lt;p&gt;I cut deployment times, made debugging easier, and did it all in just a couple billable hours. When I started, I didn’t know anything about systemd or EC2 or Docker Compose, but now that I’ve experienced these tools I’m definitely going to use them again. I’d encourage you to give it a try too! Perhaps you will see the same time savings I am currently enjoying.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Developers Aren't Essential because they Write Code</title>
      <dc:creator>Gregory Ledray</dc:creator>
      <pubDate>Fri, 16 Apr 2021 17:30:26 +0000</pubDate>
      <link>https://dev.to/gregoryledray/developers-aren-t-essential-because-they-write-code-1k2i</link>
      <guid>https://dev.to/gregoryledray/developers-aren-t-essential-because-they-write-code-1k2i</guid>
      <description>&lt;p&gt;They are essential because they know things and think logically. You can build a website without a developer using &lt;a href="https://squarespace.com"&gt;Squarespace&lt;/a&gt; and you can do it for free with &lt;a href="https://wordpress.org"&gt;WordPress&lt;/a&gt;. You can build a form based app with &lt;a href="//servicenow.com"&gt;ServiceNow&lt;/a&gt; and you can do it for free with (shameless plug for my free tool) &lt;a href="https://gitlab.com/polyapp-open-source/polyapp"&gt;Polyapp&lt;/a&gt;. You can build video games in &lt;a href="https://unity.com"&gt;Unity&lt;/a&gt;. There are similar no-code tools which let you build apps for Android and iOS.&lt;/p&gt;

&lt;p&gt;A lot of people will jump at this and say, "but without a developer, you can't calculate X Y Z!" or "those tools aren't flexible enough to replace the app I'm building!" Obviously someone needs to code everything, but in large developer ecosystems someone has probably already coded something just like what you're writing and your job is reduced to configuring that thing. Most of the code I used to write was configuring components someone else wrote on the front end and transforming data from the database into the UI and back again on the back end. Even those things are just configuration. HTML is configuring components defined by the HTML standard. Typing out a data model in C# and setting up Entity Framework is just a way to configure the .NET framework. You could write a UI with all of the different choices for HTML and Entity Framework model choices as a form, and have someone select from the list of choices and all of that work would be poof - gone. Incidentally, this is sort of how &lt;a href="https://gitlab.com/polyapp-open-source/polyapp"&gt;Polyapp&lt;/a&gt; works.&lt;/p&gt;

&lt;p&gt;Most application coding doesn't need to be done by someone with knowledge of how to set up Visual Studio or the intricacies of planning and developing an object oriented data model. Most development work could be done by someone who has a problem sitting down, finding a template for a solution to their problem, and then going through many, many, many options describing the solution and adjusting them to fit their needs. Doing things themselves would save them a ton of time and heartache when the thing they paid for doesn't turn out the way they wanted. But that's not what happens.&lt;/p&gt;

&lt;p&gt;It's not what happens because a small fraction of every application requires a developer, and developers aren't educated to use a hybrid model with some no-code development and some code. They believe the only options are 100% coded by me or 0% coded by me.&lt;/p&gt;

&lt;p&gt;It's not what happens because most people have no clue that low-code or no-code tools exist, and even if they do they don't understand concepts like foreign key columns and one-to-many references.&lt;/p&gt;

&lt;p&gt;It's not what happens because creating apps requires dozens of steps and most people aren't good at logically thinking through each step.&lt;/p&gt;

&lt;p&gt;It's not what happens because developers know things and think logically.&lt;/p&gt;

</description>
      <category>html</category>
      <category>dotnet</category>
      <category>database</category>
    </item>
  </channel>
</rss>
