<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: TinyStacks, Inc.</title>
    <description>The latest articles on DEV Community by TinyStacks, Inc. (@tinystacks).</description>
    <link>https://dev.to/tinystacks</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F4635%2F9b4d18ec-1364-4a29-ae13-03e5c7980942.png</url>
      <title>DEV Community: TinyStacks, Inc.</title>
      <link>https://dev.to/tinystacks</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tinystacks"/>
    <language>en</language>
    <item>
      <title>API Gateway REST vs. HTTP API: What Are The Differences?</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Tue, 01 Feb 2022 19:51:06 +0000</pubDate>
      <link>https://dev.to/tinystacks/api-gateway-rest-vs-http-api-what-are-the-differences-2nj</link>
      <guid>https://dev.to/tinystacks/api-gateway-rest-vs-http-api-what-are-the-differences-2nj</guid>
      <description>&lt;p&gt;****Follow &lt;a href="https://twitter.com/FrancescoCiull4" rel="noopener noreferrer"&gt;Francesco on Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Article by Jay Allen&lt;/p&gt;

&lt;p&gt;AWS API Gateway is a great technology for managing and securing access to your backend REST APIs. However, AWS currently supports two very different versions of the technology. What are the differences? And which one should you use? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-vs-rest.html" rel="noopener noreferrer"&gt;AWS covers the basics of the differences between these two technologies&lt;/a&gt; in its documentation. In this article, I plan to dive a little deeper by discussing some of the ways missing features from one version of API Gateway can be supported in the other. I'll also give some proscriptive recommendations around which version to use and when. &lt;/p&gt;

&lt;h2&gt;
  
  
  V1 vs. V2: Avoiding A Nasty Shock
&lt;/h2&gt;

&lt;p&gt;AWS released the first version of API Gateway in 2015 with support for REST APIs. Over the next several years, &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/history.html" rel="noopener noreferrer"&gt;AWS added numerous features to its REST API support&lt;/a&gt;. These included support for authentication via Cognito user pools, exposing private APIs publicly via VpcLink, and canary deployment support, among many others.&lt;/p&gt;

&lt;p&gt;Then in 2019, AWS announced that, based on customer feedback, &lt;a href="https://aws.amazon.com/blogs/compute/announcing-http-apis-for-amazon-api-gateway/" rel="noopener noreferrer"&gt;it had developed a new version of API Gateway&lt;/a&gt;. This V2 version included support for "HTTP APIs" (effectively REST APIs) as well as &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API" rel="noopener noreferrer"&gt;WebSocket APIs&lt;/a&gt;. A major goal of the change, AWS said, was to simplify the API Gateway model and make it easier to develop and deploy new APIs. &lt;/p&gt;

&lt;p&gt;However, Amazon sowed a lot of confusion with this "new" API Gateway. First off, "HTTP API" is an odd name, given that REST is an architectural style built &lt;em&gt;on top of&lt;/em&gt; the HTTP protocol. I'm at a loss to understand why AWS chose a naming convention that makes the two sound like diametric opposites. &lt;/p&gt;

&lt;p&gt;Second, the name obfuscates that these are two separate versions of the same technology. This only really becomes clear if you look in one of two places: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudFormation syntax, where REST API and HTTP API syntax have separate namespaces (AWS::ApiGateway and AWS::ApiGatewayV2). &lt;/li&gt;
&lt;li&gt;The AWS Console, where creating a REST API versus an HTTP API gives you two completely separate user experiences. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642964591605%2FEygyVS6rM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642964591605%2FEygyVS6rM.png" alt="AWS API Gateway - REST API"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642964702929%2FyWhYBrjEf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642964702929%2FyWhYBrjEf.png" alt="AWS API Gateway - HTTP API"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above are screenshots of the REST API (first image) and HTTP API (second image) user interfaces. As you can see, quite a lot changed between V1 and V2. The changes extend not just to UI organization but to which features are available - and even to the price and performance of each system. &lt;/p&gt;

&lt;p&gt;In other words, before deciding which version of API Gateway to use, you should understand the differences between V1 and V2 in detail. And when you're researching information on the Web, be careful to identify whether the feature you're reading about is supported in REST APIs, HTTP APIs, or both. &lt;/p&gt;

&lt;h2&gt;
  
  
  Performance and Price Differences
&lt;/h2&gt;

&lt;p&gt;The major differences between REST APIs and HTTP APIs are in performance and price. In short, HTTP APIs win on both. &lt;/p&gt;

&lt;p&gt;Both REST APIs and HTTP APIs charge only for the number of requests actually made plus data transferred out of AWS. However, the difference in pricing is steep. REST APIs will run you USD $3.50 per million requests plus charges for data transferred out. By contrast, HTTP APIs cost a mere $1.00 per million requests for the first 300 million requests each month, and $0.90 per million requests after that. That's a whopping 71% price differential.&lt;/p&gt;
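&lt;p&gt;As a sanity check on those numbers, here's a quick Python sketch of the request-charge math. The per-million rates and the 300-million-request tier boundary are AWS's published pricing as of this writing, and data transfer charges are excluded:&lt;/p&gt;

```python
# Rough monthly request-charge comparison for API Gateway (USD).
# Rates are per million requests; data transfer charges are excluded.
REST_RATE = 3.50
HTTP_TIER1_RATE = 1.00   # first 300 million requests each month
HTTP_TIER2_RATE = 0.90   # requests beyond 300 million
TIER1_CAP = 300_000_000

def rest_api_cost(requests):
    return REST_RATE * requests / 1_000_000

def http_api_cost(requests):
    tier1 = min(requests, TIER1_CAP)
    tier2 = requests - tier1
    return (HTTP_TIER1_RATE * tier1 + HTTP_TIER2_RATE * tier2) / 1_000_000

requests = 1_000_000
savings = 1 - http_api_cost(requests) / rest_api_cost(requests)
print(f"REST: ${rest_api_cost(requests):.2f}, HTTP: ${http_api_cost(requests):.2f}, "
      f"savings: {savings:.0%}")  # about a 71% differential
```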

&lt;p&gt;On top of that, AWS says that V2 HTTP APIs contain significant performance improvements over their V1 REST brethren. &lt;a href="https://cloudonaut.io/review-api-gateway-http-apis/" rel="noopener noreferrer"&gt;Andreas Wittig at Cloudonaut ran some numbers&lt;/a&gt; and found a 14 to 16% improvement in latency in HTTP APIs compared to REST APIs. &lt;/p&gt;

&lt;p&gt;As Andreas notes, the latency differential isn't that great. And odds are most of it will be wiped out by dependencies on other components, such as your database. So HTTP APIs are a clear winner in price and a small winner in performance. &lt;/p&gt;

&lt;h2&gt;
  
  
  Features in REST APIs (But Not in HTTP APIs)
&lt;/h2&gt;

&lt;p&gt;So HTTP APIs are a clear winner when it comes to pricing. But, as I've noted before, price isn't everything. You can justify a higher cost if you're getting something in return for it. &lt;/p&gt;

&lt;p&gt;Both REST APIs and HTTP APIs have features the other doesn't. Let's take a look at each, starting with what REST APIs have that HTTP APIs lack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Canary Support
&lt;/h3&gt;

&lt;p&gt;Truth be told, my motivation for writing this article was that I wanted to build &lt;a href="https://blog.tinystacks.com/canary-testing-backend-api-aws" rel="noopener noreferrer"&gt;an API Gateway deployment with canary support&lt;/a&gt;. I'd heard that API Gateway supported canaries. Since I'm a big fan of API Gateway and its capabilities, I rushed into coding. &lt;/p&gt;

&lt;p&gt;Unfortunately, what I started creating was an HTTP API. Imagine my shock and disappointment when I realized that HTTP APIs have no support for canary deployments! This is strictly a feature of REST APIs that V2 lacks. &lt;/p&gt;

&lt;p&gt;One workaround is to incorporate an Application Load Balancer into your architecture. ALBs also support weighted routing, which allows you to implement canary-style deployments. &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/best-practices-api-gateway-private-apis-integration/http-api.html" rel="noopener noreferrer"&gt;Using an ALB with API Gateway is an established pattern that provides extra security&lt;/a&gt;, as you can host your ALB in a private subnet. You can then use private integrations and VPC Link in API Gateway to route API requests to endpoints in this private subnet. &lt;/p&gt;

&lt;p&gt;This pattern carries the added benefit of limiting your Docker container's exposure to the Internet. You can host your container completely in a private subnet and expose only those REST endpoints you want made publicly available. For more information on implementing this pattern, &lt;a href="https://aws.amazon.com/blogs/compute/configuring-private-integrations-with-amazon-api-gateway-http-apis/" rel="noopener noreferrer"&gt;see this article on the AWS Web site&lt;/a&gt;, which comes complete &lt;a href="https://github.com/aws-samples/aws-apigw-http-api-private--integrations/blob/main/templates/APIGW-HTTP-private-integration-ALB-ecs.yml" rel="noopener noreferrer"&gt;with an out of the box CloudFormation template&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Another workaround is to use Route 53's weighted routing feature. (&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/blue-green-deployments/update-dns-routing-with-amazon-route-53.html" rel="noopener noreferrer"&gt;AWS has docs on how to do&lt;/a&gt; this in the context of blue/green deployments.) In this case, you'd create another stage (e.g., &lt;code&gt;canary&lt;/code&gt;) in your API Gateway. You'd then use weighted routing to route a percentage of traffic to the canary stage, gradually shifting over traffic as you verified the new version's performance. This will work but rollout may be slow due to DNS propagation delays.&lt;/p&gt;
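&lt;p&gt;As a rough sketch of that Route 53 approach, the weighted-routing change batch can be built as plain data and handed to boto3's &lt;code&gt;route53.change_resource_record_sets&lt;/code&gt; call. The domain and target names below are placeholders, not real endpoints:&lt;/p&gt;

```python
# Sketch of a Route 53 weighted-routing change batch for a canary rollout.
# The domain and target names are placeholders; in practice you would pass
# this dict to boto3's route53.change_resource_record_sets along with your
# hosted zone ID.
def weighted_canary_batch(domain, prod_target, canary_target, canary_weight):
    """Split traffic between prod and canary stages (weights out of 100)."""
    def record(set_id, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,  # keep low so weight changes propagate quickly
                "ResourceRecords": [{"Value": target}],
            },
        }
    return {"Changes": [
        record("prod", prod_target, 100 - canary_weight),
        record("canary", canary_target, canary_weight),
    ]}

batch = weighted_canary_batch(
    "api.example.com",
    "prod.execute-api.us-east-1.amazonaws.com",
    "canary.execute-api.us-east-1.amazonaws.com",
    canary_weight=10,  # send 10% of traffic to the canary stage
)
```

&lt;p&gt;Re-issuing the call with a higher canary weight shifts traffic gradually - subject, as noted, to DNS propagation and client-side caching.&lt;/p&gt;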

&lt;h3&gt;
  
  
  Web Application Firewall (WAF) Support
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/waf/" rel="noopener noreferrer"&gt;AWS's WAF&lt;/a&gt; provides an additional level of security for Web apps. Using WAF, you can apply both pre-made and custom traffic security rules that filter out bots and known exploit vectors. WAF can both keep your application more secure as well as reduce illegitimate, bandwidth-wasting traffic. &lt;/p&gt;

&lt;p&gt;API Gateway supports WAF. That is, if you use the REST API. HTTP APIs do not currently support WAF and there's no indication when they might. &lt;/p&gt;

&lt;p&gt;If you use the architecture I mention above, you can work around this by turning on WAF on your private Application Load Balancer. ALB supports WAF, which means you can get the benefits of WAF while still enjoying the lower cost and higher performance of HTTP APIs. &lt;/p&gt;

&lt;h3&gt;
  
  
  Support for AWS X-Ray
&lt;/h3&gt;

&lt;p&gt;X-Ray is AWS's service to add tracing and debugging instrumentation to your code. With X-Ray, you can monitor code and service performance, report errors, and troubleshoot the root cause of issues affecting your callers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/xray/" rel="noopener noreferrer"&gt;REST APIs have check-button support for adding X-Ray&lt;/a&gt; to your API Gateway calls. Sadly, as of this writing, this feature doesn't exist in HTTP APIs.&lt;/p&gt;

&lt;p&gt;For teams that use their own home-brewed tracing solution or a commercial one like &lt;a href="https://newrelic.com/" rel="noopener noreferrer"&gt;New Relic&lt;/a&gt;, this won't be a huge deal. And others can still use X-Ray directly from their code through their programming language's AWS SDK or via the AWS CLI. So, while this is a nice-to-have, it's not necessarily a deal breaker. &lt;/p&gt;

&lt;h2&gt;
  
  
  Features in HTTP APIs (But Not in REST APIs)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Better Programmatic Model
&lt;/h3&gt;

&lt;p&gt;The programmatic model is one area where HTTP APIs shine. One of AWS's avowed motivations in creating HTTP APIs was that the REST API model was too complicated. HTTP APIs use a simplified programming model and a new and improved user interface in the console. &lt;/p&gt;

&lt;p&gt;Additionally, HTTP APIs support several ease-of-use dev features that REST APIs don't, including direct support for CORS configuration, automatic deployments, and a default stage and route. However, REST APIs do support a couple of dev features, such as request body transformation, that aren't directly supported in HTTP APIs as of this writing. &lt;/p&gt;

&lt;p&gt;However, REST APIs make development easier in one crucial way: they support importing API definitions in the &lt;a href="https://swagger.io/specification/" rel="noopener noreferrer"&gt;OpenAPI&lt;/a&gt; format used by Swagger and other API definition/documentation tools. HTTP APIs can only export to OpenAPI. &lt;/p&gt;

&lt;h3&gt;
  
  
  Private Integrations
&lt;/h3&gt;

&lt;p&gt;Private integrations allow API Gateway to expose resources hosted in a private VPC. Using a private integration, you can host your API sources (e.g., Docker containers) inside a private subnet while exposing only the endpoints you want to expose publicly through API Gateway. This results in enhanced security.&lt;/p&gt;

&lt;p&gt;HTTP APIs contain full-fledged support for Application Load Balancers, Network Load Balancers, and AWS Cloud Map. Support is enabled via VPC Link, which allows you to create a route from your API Gateway to a private subnet. VPC Link is easy to configure and assign in both the console and in CloudFormation. (&lt;a href="https://github.com/aws-samples/aws-apigw-http-api-private--integrations" rel="noopener noreferrer"&gt;You can find a working CloudFormation example here&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;REST APIs support only Network Load Balancers. Additionally, the configuration isn't nearly as straightforward as it is with VPC Link. &lt;/p&gt;

&lt;p&gt;Private integrations shouldn't be confused with &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html" rel="noopener noreferrer"&gt;private APIs&lt;/a&gt;. With private APIs, you can use API Gateway to define an API that's only available via a VPC. Calls to the API stay within the VPC and never route through the public Internet. Only REST APIs support private APIs. &lt;/p&gt;

&lt;h3&gt;
  
  
  Native OpenID Connect / OAuth 2.0
&lt;/h3&gt;

&lt;p&gt;Finally, HTTP APIs have native support for OpenID Connect and OAuth 2.0. This is the one authentication framework that REST APIs don't natively support. &lt;/p&gt;

&lt;p&gt;It's certainly possible &lt;a href="https://aws.amazon.com/blogs/security/use-aws-lambda-authorizers-with-a-third-party-identity-provider-to-secure-amazon-api-gateway-rest-apis/" rel="noopener noreferrer"&gt;to implement your own OAuth 2.0 support in REST APIs&lt;/a&gt;. But this is a heavier lift than just using HTTP API's native support. &lt;/p&gt;
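&lt;p&gt;To illustrate the shape of that heavier lift, here's a minimal sketch of a REST API Lambda authorizer. The token check is a stub standing in for real OAuth 2.0 validation; a production implementation would verify the provider's JWT (signature, issuer, audience, expiry) with a JOSE library:&lt;/p&gt;

```python
# Minimal shape of an API Gateway Lambda authorizer (REST API) fronting a
# third-party OAuth 2.0 provider. validate_token is a stub; a real one would
# verify the JWT against the provider's published signing keys.
def validate_token(token):
    # Placeholder: real code would check the token's signature, issuer,
    # audience, and expiry rather than comparing against a fixed string.
    return token == "valid-demo-token"

def handler(event, context=None):
    token = event.get("authorizationToken", "")
    effect = "Allow" if validate_token(token) else "Deny"
    # API Gateway expects this IAM policy document shape back.
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }

event = {"authorizationToken": "valid-demo-token",
         "methodArn": "arn:aws:execute-api:us-east-1:123456789012:abcdef/prod/GET/items"}
print(handler(event)["policyDocument"]["Statement"][0]["Effect"])  # Allow
```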

&lt;h2&gt;
  
  
  Which to Use?
&lt;/h2&gt;

&lt;p&gt;There are a few other feature differences I didn't cover here. It's best to familiarize yourself with the documentation and see what you would get - and what you'd miss - by picking one over the other. To make things easier, here's a quick run-down in table format: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1643650359986%2F5OkQgdYoN.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1643650359986%2F5OkQgdYoN.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, HTTP APIs have some catching up to do in terms of feature support. On the other hand, HTTP APIs are drastically more economical, more performant, and easier to use. &lt;/p&gt;

&lt;p&gt;In my view, you may choose to go with REST APIs if they offer critical features that will make administration easier and reduce time to market (e.g., OpenAPI import). However, the overall improvements in HTTP APIs make them the default choice for most projects. You can directly implement most of the features unique to REST APIs without great effort. Plus, you'll get the benefit of future usability, price, and performance improvements as AWS continues investing in this new service. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Header image credit: Unsplash&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Aiven VS AWS</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Mon, 24 Jan 2022 21:02:30 +0000</pubDate>
      <link>https://dev.to/tinystacks/aiven-vs-aws-35ff</link>
      <guid>https://dev.to/tinystacks/aiven-vs-aws-35ff</guid>
      <description>&lt;p&gt;Francesco on &lt;a href="https://twitter.com/FrancescoCiull4" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;Author of the article: Jay Allen.&lt;/p&gt;

&lt;p&gt;Aiven is a new company that aims to simplify data storage and management in the cloud. In this article, I look at the benefits Aiven provides, its pricing model, and how that pricing compares to directly hosting your data services on AWS. I also consider when it makes sense to use Aiven vs. hosting on AWS directly. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem of "Cloud Sprawl"
&lt;/h2&gt;

&lt;p&gt;As we've discussed on this blog before, modern cloud providers have become insanely complex. Infrastructure as a Service (IaaS) companies like AWS continue to add an impressive array of features and services every month. &lt;/p&gt;

&lt;p&gt;However, while that's made cloud services more useful, it's also made them harder to understand. Developers new to the cloud have to understand a huge host of similar-looking services and features before they can even make fundamental architecture decisions. &lt;/p&gt;

&lt;p&gt;At the same time, this sprawl has made cloud dashboards much harder to use. Many AWS users complain about how hard it is to navigate the AWS Console in its current state. &lt;/p&gt;

&lt;p&gt;In response, we've seen the rise of &lt;a href="https://www.bmc.com/blogs/saas-vs-paas-vs-iaas-whats-the-difference-and-how-to-choose/" rel="noopener noreferrer"&gt;Platform as a Service (PaaS)&lt;/a&gt;. PaaS companies like Heroku and services like Google App Engine aim to reduce the complexity of deploying software applications by providing an out-of-the-box application stack consisting of data storage, virtual servers, virtual networking, and other foundational services. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Aiven?
&lt;/h2&gt;

&lt;p&gt;Aiven is a PaaS product focused on data management. With Aiven, users can spin up a vast array of data storage and search services on various IaaS providers, including AWS, Google Cloud, and Microsoft Azure. &lt;/p&gt;

&lt;p&gt;After you create an account on Aiven, you can spin up one or more of a large number of data storage services, including event messaging (Kafka), relational and object-relational data storage (MySQL, PostgreSQL), time series databases (InfluxDB, M3DB), in-memory caching (Redis), and several others. &lt;/p&gt;

&lt;p&gt;Once you select a service, you can choose to host your infrastructure on a number of cloud providers: AWS, Google Cloud, Microsoft Azure, DigitalOcean, and UpCloud. You can also select a hosting plan, which determines how much memory and processor power your service can access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642185774045%2FszDy_-Utu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642185774045%2FszDy_-Utu.png" alt="Aiven - installing PostgreSQL with Startup-4 plan"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once your service is up and running, you can see the details on your Aiven dashboard. From here, you can access connection information and connect to your data host. Aiven also provides easy access to additional information about your service, including logs, connection pools, metrics, and backups. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642186388490%2FWXgTrSaN-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1642186388490%2FWXgTrSaN-.png" alt="Aiven - PostgreSQL dashboard after instance creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Aiven Features
&lt;/h2&gt;

&lt;p&gt;One of Aiven's greatest features is its ease of use. The Aiven UI is far and away easier to navigate than most cloud consoles. Creating a new data service generally takes a few clicks. And Aiven offers easy access to data (metrics, users, etc.) that would require a lot of customized setup if you were creating the service directly on a cloud provider. &lt;/p&gt;

&lt;p&gt;Aiven also offers hosting flexibility. With support for five major IaaS providers, teams that use Aiven can easily locate their data hosting in the same cloud provider and even the same region as their application. &lt;/p&gt;

&lt;p&gt;Since it's a PaaS, Aiven generally offers "black box" hosting. In other words, data services are hosted on cloud service accounts owned and operated by Aiven. However, customers with over $5000/month of spend can contact Aiven to arrange for direct hosting on their own cloud service accounts. &lt;/p&gt;

&lt;p&gt;Finally, Aiven supports a number of advanced features for migration and monitoring. The company &lt;a href="https://developer.aiven.io/docs/products/postgresql/howto/list-replication-migration.html" rel="noopener noreferrer"&gt;supports its own aiven-db-migrate tool&lt;/a&gt; for migrating an existing PostgreSQL database to Aiven. Aiven can also integrate with a number of different alerting and monitoring systems, including AWS CloudWatch Logs and Metrics, Datadog, Prometheus, and Syslog. (You can also set up your own metrics dashboarding easily with an Aiven-hosted Grafana dashboard.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Aiven Pricing vs. AWS Pricing
&lt;/h2&gt;

&lt;p&gt;Beyond features, however, we were also interested in the pricing model. How much does it cost to run a database service in Aiven versus, say, running it directly on AWS? &lt;/p&gt;

&lt;p&gt;You won't be surprised to learn that Aiven costs more than vanilla AWS. That's only natural: it's a business providing a service. In this case, the service includes automation of data storage service creation, a slick management user interface, and the ability to create and manage resources cross-cloud. &lt;/p&gt;

&lt;p&gt;But what's the cost? And is it worth it? The answer, as always, is: it depends on your scenario. &lt;/p&gt;

&lt;p&gt;We ran the numbers on PostgreSQL hosting and compared using Aiven to running an equivalent-sized PostgreSQL instance directly on AWS. For example, Aiven's Startup-4 plan gives you 2 CPUs, 4GB of RAM, and a single database instance. So we correlated this with an RDS PostgreSQL db.t4g.medium instance, which supports the same hardware configuration, hosted in a single Availability Zone. All RDS hardware specification data &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html#Concepts.DBInstanceClass.Summary" rel="noopener noreferrer"&gt;was derived from the AWS Web site&lt;/a&gt; and all AWS prices were calculated using the &lt;a href="https://calculator.aws/" rel="noopener noreferrer"&gt;pricing calculator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Below is a brief summary of the pricing differences for Aiven's startup plans: &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aiven&lt;/th&gt;
&lt;th&gt;Aiven pricing&lt;/th&gt;
&lt;th&gt;AWS alt&lt;/th&gt;
&lt;th&gt;AWS pricing&lt;/th&gt;
&lt;th&gt;Monthly savings&lt;/th&gt;
&lt;th&gt;% raw savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Startup-4&lt;/td&gt;
&lt;td&gt;$99.00&lt;/td&gt;
&lt;td&gt;db.t4g.medium&lt;/td&gt;
&lt;td&gt;$56.65&lt;/td&gt;
&lt;td&gt;$42.35&lt;/td&gt;
&lt;td&gt;42.78%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup-8&lt;/td&gt;
&lt;td&gt;$195.00&lt;/td&gt;
&lt;td&gt;db.t4g.large&lt;/td&gt;
&lt;td&gt;$114.30&lt;/td&gt;
&lt;td&gt;$80.70&lt;/td&gt;
&lt;td&gt;41.38%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup-16&lt;/td&gt;
&lt;td&gt;$310.00&lt;/td&gt;
&lt;td&gt;db.r6g.large&lt;/td&gt;
&lt;td&gt;$204.50&lt;/td&gt;
&lt;td&gt;$105.50&lt;/td&gt;
&lt;td&gt;34.03%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup-32&lt;/td&gt;
&lt;td&gt;$640.00&lt;/td&gt;
&lt;td&gt;db.r6g.xlarge&lt;/td&gt;
&lt;td&gt;$409.00&lt;/td&gt;
&lt;td&gt;$231.00&lt;/td&gt;
&lt;td&gt;36.09%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup-64&lt;/td&gt;
&lt;td&gt;$1,200.00&lt;/td&gt;
&lt;td&gt;db.r6g.2xlarge&lt;/td&gt;
&lt;td&gt;$771.27&lt;/td&gt;
&lt;td&gt;$428.73&lt;/td&gt;
&lt;td&gt;35.73%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup-120&lt;/td&gt;
&lt;td&gt;$2,140.00&lt;/td&gt;
&lt;td&gt;db.r6g.4xlarge&lt;/td&gt;
&lt;td&gt;$1,473.54&lt;/td&gt;
&lt;td&gt;$666.46&lt;/td&gt;
&lt;td&gt;31.14%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup-240&lt;/td&gt;
&lt;td&gt;$4,280.00&lt;/td&gt;
&lt;td&gt;db.r6g.8xlarge&lt;/td&gt;
&lt;td&gt;$2,947.81&lt;/td&gt;
&lt;td&gt;$1,332.19&lt;/td&gt;
&lt;td&gt;31.13%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Startup-360&lt;/td&gt;
&lt;td&gt;$8,700.00&lt;/td&gt;
&lt;td&gt;db.m5.24xlarge&lt;/td&gt;
&lt;td&gt;$6,582.12&lt;/td&gt;
&lt;td&gt;$2,117.88&lt;/td&gt;
&lt;td&gt;24.34%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Setting aside implementation costs for a moment, hosting your database directly on AWS will cost roughly 24-43% less than hosting it on Aiven, or about 35% less on average across these plans. That price difference does include some networking costs that Aiven covers on your behalf. However, many of those charges can be avoided by proper placement of your AWS resources (e.g., running your RDS instance in the same VPC as your application, or using VPC peering to avoid Internet data transfer charges).&lt;/p&gt;
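&lt;p&gt;For the curious, the savings column in the table above is just &lt;code&gt;(aiven - aws) / aiven&lt;/code&gt;. Here's a quick check of the first few rows in Python:&lt;/p&gt;

```python
# Verify the savings percentages in the table above:
# savings % = (aiven - aws) / aiven * 100
plans = {  # plan: (Aiven monthly USD, comparable RDS monthly USD)
    "Startup-4":  (99.00, 56.65),
    "Startup-8":  (195.00, 114.30),
    "Startup-16": (310.00, 204.50),
}

def savings_percent(aiven, aws):
    return (aiven - aws) / aiven * 100

for plan, (aiven, aws) in plans.items():
    print(f"{plan}: {savings_percent(aiven, aws):.2f}%")
# prints 42.78%, 41.38%, and 34.03% - matching the table
```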

&lt;p&gt;Of course, the bill is only half the story. You can't - and shouldn't - ignore how many person-hours or vendor dollars it might take to implement a direct IaaS solution. This will depend on how seasoned your staff is at data storage management and what reusable deployment and configuration solutions you already have available. If your team is starting from scratch with little cloud data management experience, Aiven will likely pay for itself. &lt;/p&gt;

&lt;h2&gt;
  
  
  When Direct Hosting on AWS Makes Sense
&lt;/h2&gt;

&lt;p&gt;Does that mean you shouldn't use Aiven? Far from it. If your team doesn't have a data expert who's skilled in the various technologies that Aiven supports, its ease of use can save you significant time and money. Aiven's direct logging and metrics support may also save you dev dollars. And if you're pursuing a multi-cloud deployment strategy, Aiven's ability to deploy to all major cloud providers is a huge point in its favor.&lt;/p&gt;

&lt;p&gt;However, if you don't have a multi-cloud strategy, the cost of Aiven may be more than it's worth. One way to answer this question for your team is to consider how much data you're storing. &lt;/p&gt;

&lt;p&gt;Aiven charges a flat rate for data storage even if you don't use the entire allocation. By contrast, AWS only charges you for the data storage you actually use. And AWS charges far less than Aiven for the same amount of data.&lt;/p&gt;

&lt;p&gt;On Aiven, you'll pay around $5 for every extra 5GB of storage. Aiven gives small discounts the more storage you buy; e.g., an extra 80GB costs around $42/mo. instead of $50. But this still contrasts sharply with AWS, where an extra 10GB of storage costs only a little over $1 a month. &lt;/p&gt;

&lt;p&gt;For example, under Aiven's Startup-4 plan, you receive up to 80GB of storage. If you use less than this on AWS, you'll save a few extra dollars a month. But you'll also have a lot more room to grow on AWS. In this configuration, you can store up to 450GB on a single-AZ configuration of PostgreSQL before you're paying as much as you pay to use Aiven. &lt;/p&gt;
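&lt;p&gt;That break-even figure is straightforward to reproduce. The sketch below assumes roughly $0.095 per GB-month for single-AZ RDS storage - an illustrative rate for this comparison; actual storage pricing varies by storage type and region:&lt;/p&gt;

```python
# Break-even storage: how many GB of RDS storage can you buy before the
# AWS bill matches Aiven's flat rate? The $0.095/GB-month rate is an
# illustrative assumption, not a quoted AWS price.
AIVEN_STARTUP_4 = 99.00   # USD per month, includes 80 GB of storage
RDS_T4G_MEDIUM = 56.65    # USD per month, single AZ, storage billed separately
STORAGE_RATE = 0.095      # USD per GB-month (assumed)

headroom = AIVEN_STARTUP_4 - RDS_T4G_MEDIUM
break_even_gb = headroom / STORAGE_RATE
print(f"Break-even at about {break_even_gb:.0f} GB")  # roughly 450 GB
```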

&lt;p&gt;In short, if your storage needs will fit within Aiven's default data tiers for your service level, it may well be worth the spend. But that value quickly decreases as your storage needs increase. If you expect large data growth, you may either want to consider hosting on AWS directly from the start, or ensuring you have a plan to migrate from Aiven to direct AWS hosting as your needs change. &lt;/p&gt;

&lt;h3&gt;
  
  
  Backup Storage Costs
&lt;/h3&gt;

&lt;p&gt;One point we didn't address in the above is the cost of backup storage. On Startup plans, Aiven gives you two days of backups (14 days for Business plans). By contrast, you only get one included backup when you host directly on AWS. &lt;/p&gt;

&lt;p&gt;However, AWS backup storage costs are (as of this writing) a scant US $0.095 per GiB-month. So, even in the case of the Startup-4 plan, adding a second 80 GB backup on AWS only costs an additional $7.60 a month. Therefore, backups shouldn't be much of a factor in your cost calculations. &lt;/p&gt;

&lt;h2&gt;
  
  
  TinyStacks and Aiven
&lt;/h2&gt;

&lt;p&gt;Like the folks at Aiven, we here at TinyStacks also think the cloud is too complicated! That's why we've built a service that provides full DevOps deployment pipeline automation. (You can  &lt;a href="https://www.youtube.com/watch?v=22n1ac7T6so" rel="noopener noreferrer"&gt;see it in action on our YouTube channel&lt;/a&gt;!) We also include the ability to create an RDS PostgreSQL database - or use any other existing Amazon RDS instance - as part of each stack. &lt;/p&gt;

&lt;p&gt;If you need to pursue a multi-cloud strategy, or use another data service outside of Amazon RDS, you can use any of your Aiven-hosted services easily from TinyStacks. Just pass the information for your Aiven resource - such as DNS name, port, and credentials - into your TinyStacks-hosted Docker app. Your application can read these secrets and connect to your Aiven data assets as it would any other data storage resource. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Aiven is an advanced and easy to use interface to various cloud data services. Whether it's worth the premium, however, depends on your use case. For multi-cloud deployments and teams without a data expert, Aiven can be a wise investment. However, AWS-only shops with high data storage needs will want to weigh their usage carefully before deciding whether that investment will yield dividends. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Banner photo by &lt;a href="https://unsplash.com/@benjaminlehman?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;benjamin lehman&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/data-storage?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What Does a DevOps Engineer Do?</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Thu, 06 Jan 2022 21:41:43 +0000</pubDate>
      <link>https://dev.to/tinystacks/what-does-a-devops-engineer-do-5795</link>
      <guid>https://dev.to/tinystacks/what-does-a-devops-engineer-do-5795</guid>
      <description>&lt;p&gt;Hiring a DevOps Engineer for the first time? Knowing what to look for in a talented engineer can be a challenge. In this article, I discuss what you can expect from a DevOps Engineer in today's marketplace. I share some of my own experiences hiring DevOps Engineers in today's competitive labor market. Finally, I talk about cheaper alternatives to hiring a full-time DevOps Engineer. &lt;/p&gt;

&lt;h2&gt;
  
  
  When Do You Need a DevOps Engineer?
&lt;/h2&gt;

&lt;p&gt;In my past articles,  &lt;a href="https://blog.tinystacks.com/stacks-stages-environments-definitions"&gt;I've discussed DevOps release pipelines, stacks, and stages in-depth&lt;/a&gt;. A release pipeline is a software-driven process that development teams use to promote application changes from development into production. The pipeline creates multiple stacks - full versions of your application - across multiple stages of deployment. &lt;/p&gt;

&lt;p&gt;A development team usually starts a pipeline automatically via a push to a source code control system, such as Git. The team then pushes the change set gradually through each stage (dev-&amp;gt;test-&amp;gt;staging-&amp;gt;prod), testing and validating their changes along the way. &lt;/p&gt;

&lt;p&gt;What I haven't discussed (directly, at least) is how &lt;em&gt;complicated&lt;/em&gt; this process is. A DevOps release pipeline is itself a piece of software. It requires code to run - and that code needs to be tested, debugged, and maintained. &lt;/p&gt;

&lt;p&gt;Many teams and small development shops get started without a dedicated DevOps engineer. Yours may be one of them! In these situations, a few team members generally own pieces of the pipeline and keep it running. Pipelines at this point are usually a mix of automated promotion and old-school manual deployment. &lt;/p&gt;

&lt;p&gt;However, as your application and requests from your customers grow, you may realize the lack of a dedicated DevOps engineer is slowing your team down. Some of the signs include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your team's velocity slows under the weight of its current (mostly manual) deployment processes. &lt;/li&gt;
&lt;li&gt;You have a somewhat automated deployment process but maintaining it is consuming more and more of the team's time. &lt;/li&gt;
&lt;li&gt;You realize after a high-profile failure that your release procedures need professional help. &lt;/li&gt;
&lt;li&gt;You know you should improve your deployment process but your team is so crushed with feature work that no one has time to spend on it. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're facing down one or more of these issues, it may be time to hire a part-time or full-time DevOps Engineer. &lt;/p&gt;

&lt;h2&gt;
  
  
  Responsibilities of a DevOps Engineer
&lt;/h2&gt;

&lt;p&gt;A DevOps Engineer's role will likely look slightly different at every company. However, the following broad-based responsibilities tend to be common and consistent. &lt;/p&gt;

&lt;h3&gt;
  
  
  Automate the Full Release Pipeline
&lt;/h3&gt;

&lt;p&gt;A good release pipeline eliminates unnecessary manual steps and reduces the time required to deploy changes to your application. Building and maintaining this pipeline is the DevOps Engineer's primary job. &lt;/p&gt;

&lt;p&gt;DevOps Engineers usually craft release pipelines using  &lt;a href="https://blog.tinystacks.com/using-codebuild-and-codepipeline-to-deploy-aws-applications-easily"&gt;a Continuous Integration/Continuous Delivery (CI/CD) tool&lt;/a&gt;. Tools  &lt;a href="https://www.katalon.com/resources-center/blog/ci-cd-tools/"&gt;such as Jenkins, Atlassian Bamboo, GitLab, and Azure DevOps&lt;/a&gt; integrate with source code control tools (usually Git) and trigger automated actions in response to repository check-ins. If your team already uses such a tool and is committed to it, you'll want to find someone proficient in your specific CI/CD toolset. &lt;/p&gt;

&lt;p&gt;Many CI/CD toolsets offer a set of predefined actions to assist with the CI/CD process. However, other actions will be specific to your team's application. A DevOps engineer uses one or more scripting languages to automate complicated deployment tasks your team may have been executing manually. Python, JavaScript, shell scripting, and PowerShell (on Windows)  &lt;a href="https://www.devopsuniversity.org/what-programming-languages-are-used-by-a-devops-engineer/"&gt;are some of the more popular scripting languages that DevOps Engineers use&lt;/a&gt;.&lt;/p&gt;
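&lt;p&gt;As a hedged illustration of the kind of task that gets scripted, here's a minimal Python smoke test that polls a health endpoint after a deployment. The URL, the &lt;code&gt;{"status": "ok"}&lt;/code&gt; response shape, and the retry counts are all assumptions - substitute whatever your application actually exposes.&lt;/p&gt;

```python
import json
import time
import urllib.request

def wait_until_healthy(url: str, attempts: int = 10, delay: float = 5.0) -> bool:
    """Poll a health endpoint until it reports {"status": "ok"} or we give up."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = json.load(resp)
                if resp.status == 200 and body.get("status") == "ok":
                    return True
        except (OSError, ValueError):
            pass  # service may still be starting, or returned non-JSON; retry
        if attempt < attempts:
            time.sleep(delay)
    return False

# A pipeline step would typically fail on a non-zero exit code, e.g.:
#   raise SystemExit(0 if wait_until_healthy("https://myapp.example/health") else 1)
```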

&lt;p&gt;For cloud-deployed software, a DevOps Engineer is also responsible for setting up the entire stack on which the application runs using  &lt;a href="https://blog.tinystacks.com/stacks-stages-aws"&gt;Infrastructure as Code&lt;/a&gt;. A DevOps Engineer should be able to design and implement a stack deployment that can be deployed multiple times to any stage of your release pipeline. &lt;/p&gt;

&lt;p&gt;Some engineers implement Infrastructure as Code using a scripting language such as Python. However, it's more common to use a templating language, such as  &lt;a href="https://aws.amazon.com/cloudformation/"&gt;CloudFormation on AWS&lt;/a&gt; or  &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview"&gt;Azure Resource Manager (ARM) Templates&lt;/a&gt; on Azure. &lt;/p&gt;
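&lt;p&gt;As a sketch of what this looks like in practice, the illustrative CloudFormation fragment below (YAML) parameterizes a stage name so the same template can be deployed to every stage of the pipeline. The parameter and resource names are hypothetical.&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  StageName:
    Type: String
    AllowedValues: [dev, test, staging, prod]
    Default: dev
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Combining the stack name and stage keeps names unique per deployment.
      BucketName: !Sub "${AWS::StackName}-${StageName}-artifacts"
```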

&lt;h3&gt;
  
  
  Setting Best Practices for Software Development
&lt;/h3&gt;

&lt;p&gt;As part of setting up the build and release pipeline, your DevOps guru will also define best practices for coding and validation of changes. In other words, they're the point person for your team's  &lt;a href="https://blog.tinystacks.com/pipeline-approvals-manual-automatic"&gt;change management approval process&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;For example, a DevOps Engineer may work with their team to devise the best way to manage the overall work process. For most teams, this usually means adopting an Agile approach to software development  &lt;a href="https://www.planview.com/resources/guide/introduction-to-kanban/kanban-vs-scrum/"&gt;such as Scrum or Kanban&lt;/a&gt;. It could also mean defining a code review process and teaching the team how to conduct good reviews. &lt;/p&gt;

&lt;h3&gt;
  
  
  Monitor Builds and Deployments
&lt;/h3&gt;

&lt;p&gt;The DevOps Engineer is responsible for ensuring the continued health of the team's CI/CD pipeline. This includes: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring build progress and logs from your team's CI/CD tool&lt;/li&gt;
&lt;li&gt;Moving quickly to resolve broken builds and keep changes flowing through the pipeline&lt;/li&gt;
&lt;li&gt;Observing dashboard metrics as new instances of the application come online&lt;/li&gt;
&lt;li&gt;Staying alert for errors as your deployment shifts more users over to the new version of your application&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monitoring should occur in all stages of the pipeline.  &lt;a href="https://www.atlassian.com/devops/devops-tools/devops-monitoring"&gt;As Atlassian points out&lt;/a&gt;, pre-production monitoring means you can stomp out critical errors before they ever reach customers. &lt;/p&gt;

&lt;p&gt;Depending on the size of your organization, the DevOps Engineer may supervise all of this themselves. They may also work in conjunction with a Sustained Engineering or Support team that's ultimately responsible for maintaining application health. In either case, your DevOps Engineer should take the lead in defining what the team needs to monitor. &lt;/p&gt;

&lt;h3&gt;
  
  
  Be the Git Guru
&lt;/h3&gt;

&lt;p&gt;Ahhh, Git. The free source code control system is a marvelous invention. You can't be a developer nowadays and not know at least the basics of Git. And yet even seasoned developers will sometimes find themselves mired in Merge Conflict Hell. &lt;/p&gt;

&lt;p&gt;A team's DevOps Engineer should know Git inside and out. They should understand, for example,  &lt;a href="https://www.atlassian.com/git/tutorials/merging-vs-rebasing"&gt;the difference between a merge and a rebase&lt;/a&gt; - and which one to use when. They are the person primarily responsible for defining the team's branching and merging strategy - and maintaining quality internal documentation for other team members.&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Look for in a DevOps Engineer
&lt;/h2&gt;

&lt;p&gt;As an engineering manager, I've hired multiple DevOps engineers. During the interview process, my loops focus on validating a combination of technical and soft skills: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DevOps knowledge&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Does the candidate have the basics of CI/CD down pat? What successes have they accumulated in developing successful pipelines? What setbacks have they encountered - and how have they overcome them? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud platform and DevOps tools&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In what DevOps tools is your candidate most experienced? Do they know the tools your team is already using? &lt;/p&gt;

&lt;p&gt;A DevOps Engineer will also need to make numerous decisions on whether to buy or build certain parts of the DevOps process. For example, does your team roll its own artifact storage features? Or does it leverage a tool like  &lt;a href="https://jfrog.com/artifactory/"&gt;Artifactory&lt;/a&gt;? DevOps Engineers need to remain up to speed on the tools marketplace so they can make these critical buy vs. build decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leadership&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A DevOps Engineer needs to do more than build a pipeline. They need to convince a (sometimes reluctant) team of engineers and stakeholders to change the way they develop software. Does your candidate have experience talking a tough crowd into adopting new processes? &lt;/p&gt;

&lt;p&gt;As a manager, I like to use  &lt;a href="https://www.indeed.com/career-advice/interviewing/how-to-use-the-star-interview-response-technique"&gt;STAR (Situation-Task-Action-Result) questions&lt;/a&gt; to determine a candidate's experience with being a technical leader. So I might ask something like, "Tell me about a time when you received pushback from your team on a process change. What was it and how did you resolve it?" &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Growth mindset&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The DevOps and cloud spaces are changing constantly. So it's important that a DevOps Engineer not get overly set in their ways. &lt;/p&gt;

&lt;p&gt;I also like to use STAR questions to gauge a candidate's willingness to grow. For example, what's the last thing that they learned just because it looked interesting? Did they end up using it on the job? If so, what was the result? &lt;/p&gt;

&lt;p&gt;Alternatively, I may ask when was the last time they received critical feedback from their manager. What was it? And how did they use that feedback to improve their job performance? &lt;/p&gt;

&lt;h2&gt;
  
  
  Alternatives to Hiring a Full-Time DevOps Engineer
&lt;/h2&gt;

&lt;p&gt;You've determined that you need more DevOps savvy in your org. But that doesn't mean you need to start off with a full-time position out of the gate. Maybe you can't afford a full-time position at the moment. Or perhaps you'd just like to test the waters before diving in with both feet. &lt;/p&gt;

&lt;p&gt;Fortunately, there are a couple of alternatives to hiring someone full-time. &lt;/p&gt;

&lt;h3&gt;
  
  
  Hire a Part-Time DevOps Engineer
&lt;/h3&gt;

&lt;p&gt;You may not need (nor even desire) a full-time team member. It may be enough to hire someone on a part-time basis to construct and maintain your build and release pipeline.&lt;/p&gt;

&lt;p&gt;In this scenario, you'd want to find a DevOps Engineer who's good at building self-service solutions. Your team should be able to kick off builds, perform releases, and monitor rollouts without having a full-time DevOps Engineer on call to oversee a successful outcome.&lt;/p&gt;

&lt;h3&gt;
  
  
  Migrate to TinyStacks
&lt;/h3&gt;

&lt;p&gt;Another option? Forgo the engineer! You can potentially save both time and money by adopting a DevOps tool that essentially provides you "DevOps as a service". &lt;/p&gt;

&lt;p&gt;TinyStacks is one such tool. Built by a team with deep experience building out the Amazon Web Services console, TinyStacks provides an automated approach to DevOps. Using a simple web interface, your team can migrate its application into a full release pipeline - complete with AWS cloud architecture - within the week. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.tinystacks.com/"&gt;Read a little more on what TinyStacks can do for you&lt;/a&gt;  and contact us below to start a discussion!&lt;/p&gt;

&lt;p&gt;Article by Jay Allen&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Approval Workflow: Manual and Automated Approvals in CI/CD</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Tue, 28 Dec 2021 20:11:50 +0000</pubDate>
      <link>https://dev.to/tinystacks/approval-workflow-manual-and-automated-approvals-in-cicd-2p7m</link>
      <guid>https://dev.to/tinystacks/approval-workflow-manual-and-automated-approvals-in-cicd-2p7m</guid>
      <description>&lt;p&gt;Recently, I've gone into detail on  &lt;a href="https://blog.tinystacks.com/stacks-stages-aws" rel="noopener noreferrer"&gt;stacks and stages&lt;/a&gt;. I've also examined the importance of dev stacks for both teams and individual developers. Building on these topics, I wanted to talk today about approvals. &lt;/p&gt;

&lt;p&gt;How do you promote changes to your stacks to production? More importantly, how do you gate promotions to ensure quality code? I'll look at the two major approaches to approvals - manual vs. automatic - and when and how to use each approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Change Management Approval Process?
&lt;/h2&gt;

&lt;p&gt;Traditionally, nothing strikes more fear in the heart of a dev team than pushing a change to production. Change promotion is usually an "all hands on deck" affair. Engineers and support personnel often stand at the ready, testing live sites and monitoring dashboards for the slightest hint of trouble.&lt;/p&gt;

&lt;p&gt;What can go wrong when pushing a change? &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A code change that wasn't thoroughly tested or reviewed can break on release. &lt;/li&gt;
&lt;li&gt;A code change that worked in dev might not work in production. &lt;/li&gt;
&lt;li&gt;A configuration change could break a production server or not be distributed to all instances in a cluster. &lt;/li&gt;
&lt;li&gt;A new part of your cloud infrastructure could fail to deploy correctly. &lt;/li&gt;
&lt;li&gt;...and any number of other things that keep developers awake at night. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The question isn't "What can go wrong?" during a deployment. It's more like, "What &lt;strong&gt;can't&lt;/strong&gt; go wrong?"&lt;/p&gt;

&lt;p&gt;Because of this, software teams don't just shove a change into production and hope for the best. Most teams have some sort of &lt;strong&gt;change management approval process&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;As I discussed previously, a deployment pipeline consists of a number of stages. Each stage - dev, test, staging, prod - is used to widen a change's availability and vet its quality. A change management approval process sets guidelines for when a change can flow from one stage to the next. &lt;/p&gt;

&lt;h2&gt;
  
  
  Types of Approvals
&lt;/h2&gt;

&lt;p&gt;Traditionally, there are two types of approvals. Often, both types are used at different stages of the development process. &lt;/p&gt;

&lt;h3&gt;
  
  
  Manual Approvals
&lt;/h3&gt;

&lt;p&gt;With a manual approval, a change requires some sort of human intervention to progress to the next stage. Often, this takes the form of a code review or buddy test, in which another member of your team reviews your changes before approving them. Once approved, the change migrates to the next stage.&lt;/p&gt;

&lt;p&gt;A manual approval is also a good way to await feedback from stakeholders and customers. For example, you may make changes or a new feature available in a staging or demo environment that internal stakeholders and other teams can access. Once the changes have passed all tests and have secured stakeholder approval, you can approve and push them into production. &lt;/p&gt;

&lt;p&gt;Manual approval doesn't mean that your release pipeline contains zero automation. You will likely still have steps in your deployment pipeline where you're running unit tests, smoke tests, service health checks, and other automated quality checks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automated Approvals
&lt;/h3&gt;

&lt;p&gt;With an automated approval, a change migrates to the next stage if it passes a set of automated checks. These can include but are not limited to unit tests, service health checks, and security checks. &lt;/p&gt;

&lt;p&gt;Automated approvals are typical in earlier stages of a release pipeline - e.g., moving from dev to test, or test to stage. They're harder to achieve in production, as they require a high degree of automated testing and verification to ensure users don't get broken bits. Fully automated delivery into production is often referred to as  &lt;a href="https://aws.amazon.com/builders-library/automating-safe-hands-off-deployments/" rel="noopener noreferrer"&gt;continuous deployment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Typically, an automated approval into production will use some sort of phased release strategy. For example, you may deploy code changes to a single server (a canary). You would then test/monitor the results before deploying to all machines in a fleet. Or you may do a rolling deployment in which you deploy new code to a small percentage of your servers or serverless endpoints. If the change doesn't produce any errors (HTTP server errors, virtual machine connectivity issues, etc.), the system continues the promotion process. If there are errors, it rolls back the changes and stops the rollout.&lt;/p&gt;
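&lt;p&gt;The rollout logic described above can be sketched in a few lines of Python. Everything here is illustrative: the &lt;code&gt;deploy&lt;/code&gt;, &lt;code&gt;is_healthy&lt;/code&gt;, and &lt;code&gt;roll_back&lt;/code&gt; hooks stand in for whatever your real deployment tooling does.&lt;/p&gt;

```python
from typing import Callable, Iterable

def rolling_deploy(
    servers: Iterable[str],
    deploy: Callable[[str], None],      # push the new version to one server
    is_healthy: Callable[[str], bool],  # post-deploy checks for one server
    roll_back: Callable[[list], None],  # revert every server touched so far
    batch_size: int = 1,
) -> bool:
    """Deploy in small batches; stop and roll back at the first unhealthy batch."""
    servers = list(servers)
    done = []
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for server in batch:
            deploy(server)
        done.extend(batch)
        if not all(is_healthy(server) for server in batch):
            roll_back(done)  # errors detected: revert and stop the rollout
            return False
    return True
```

&lt;p&gt;A production pipeline would also add bake time between batches and judge health from aggregated fleet metrics rather than a single per-server check.&lt;/p&gt;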

&lt;h2&gt;
  
  
  Implementing Manual Approvals on a Pipeline in AWS
&lt;/h2&gt;

&lt;p&gt;Most pipeline technologies provide some way to switch easily between manual and automated approvals. &lt;/p&gt;

&lt;p&gt;For example, &lt;a href="https://blog.tinystacks.com/using-codebuild-and-codepipeline-to-deploy-aws-applications-easily" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt; structures a pipeline as a series of stages. Each stage consists of a series of actions. In CodePipeline, you can add a Manual Approval action to any stage. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1640036288779%2FYfhSAMFou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1640036288779%2FYfhSAMFou.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The manual approval action will stop pipeline execution until someone approves it. AWS sends approval requests to an Amazon SNS (Simple Notification Service) topic. This means you can send the request to one or multiple potential reviewers. You can also configure the message to include a URL. This is helpful if your team uses code review software like &lt;a href="https://www.reviewboard.org/" rel="noopener noreferrer"&gt;Review Board&lt;/a&gt;.&lt;/p&gt;
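&lt;p&gt;For reference, a manual approval in a CodePipeline definition is just another action declaration. In the hedged JSON fragment below, the topic ARN, link, and message are placeholders; the &lt;code&gt;actionTypeId&lt;/code&gt; values (category &lt;code&gt;Approval&lt;/code&gt;, provider &lt;code&gt;Manual&lt;/code&gt;) are what CodePipeline expects.&lt;/p&gt;

```json
{
  "name": "ApproveBeforeProd",
  "actionTypeId": {
    "category": "Approval",
    "owner": "AWS",
    "provider": "Manual",
    "version": "1"
  },
  "configuration": {
    "NotificationArn": "arn:aws:sns:us-east-1:123456789012:approval-topic",
    "ExternalEntityLink": "https://reviews.example.com/r/42",
    "CustomData": "Review the staging deployment before approving."
  },
  "runOrder": 1
}
```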

&lt;h2&gt;
  
  
  TinyStacks Makes Approvals Easy
&lt;/h2&gt;

&lt;p&gt;At TinyStacks, our goal is to make DevOps easy. Our simplified pipeline creation tools will flow approvals automatically from stage to stage. Adding a manual approval is as simple as clicking a checkbox! Your teammates can then easily view and approve the migration to the next stage from the TinyStacks dashboard. Contact us today to see how TinyStacks can simplify your journey to DevOps maturity!&lt;/p&gt;

&lt;p&gt;Article by Jay Allen&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Dev Environments: An Essential Tool for Software Quality</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Tue, 14 Dec 2021 19:39:39 +0000</pubDate>
      <link>https://dev.to/tinystacks/dev-environments-an-essential-tool-for-software-quality-gpd</link>
      <guid>https://dev.to/tinystacks/dev-environments-an-essential-tool-for-software-quality-gpd</guid>
      <description>&lt;p&gt;There are many steps on the road to DevOps maturity. Recently, I've been covering some of the most basic concepts,  &lt;a href="https://blog.tinystacks.com/stacks-stages-aws"&gt;such as stacks, stages, and Infrastructure as Code&lt;/a&gt;. Today. I'll stick to these foundational steps and talk about on-demand dev stacks. I'll focus on why dev stacks are perhaps the most important first step teams can take on their DevOps journey. &lt;/p&gt;

&lt;h2&gt;
  
  
  Your Application, On Demand
&lt;/h2&gt;

&lt;p&gt;First, let's recap some concepts from my last article. One of the great benefits of moving to a cloud platform like AWS is Infrastructure as Code. With Infrastructure as Code, you can spin up the architecture your application needs - network topology, Web servers, databases, file storage, load balancers, etc. - by programming it. &lt;/p&gt;

&lt;p&gt;Before Infrastructure as Code, standing up a new version of an app usually meant manually configuring and tending to every component of the system. It was tedious and error-prone. Defining your architecture in a programming language like Python or in a declarative language like  &lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt; means you can deploy and re-deploy your application over and over, consistently and without fear. &lt;/p&gt;

&lt;h2&gt;
  
  
  Stacks, Stages, and Environments
&lt;/h2&gt;

&lt;p&gt;Before I dive in, let's get clear on our terminology.&lt;/p&gt;

&lt;p&gt;Using Infrastructure as Code, you can deploy a &lt;strong&gt;stack&lt;/strong&gt; - your application plus all its supporting infrastructure - quickly and easily. Once you can deploy a stack, you can deploy multiple stacks - &lt;strong&gt;stages&lt;/strong&gt; - for various purposes - e.g., a single stack for production, plus other stacks for development and testing.&lt;/p&gt;

&lt;p&gt;People in software development talk a lot about &lt;strong&gt;environments&lt;/strong&gt; - e.g., production environments vs. dev environments. In our view, "environment" encompasses a specific runtime for your application that may or may not be hosted in the cloud. For many development teams, dev environments reside on a developer's desktop or laptop. &lt;/p&gt;

&lt;h2&gt;
  
  
  Production Stacks and Dev Stacks
&lt;/h2&gt;

&lt;p&gt;Many teams are drawn to Infrastructure as Code to streamline their production deployments. And indeed, repeatable production deployments can greatly enhance application quality. Standing up new production stacks opens the door to numerous advanced deployment strategies such as  &lt;a href="https://whatis.techtarget.com/definition/canary-canary-testing"&gt;canary testing&lt;/a&gt; and  &lt;a href="https://martinfowler.com/bliki/BlueGreenDeployment.html"&gt;blue/green deployments&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;But Infrastructure as Code can improve quality before your team even pushes to production. You can use the same code you use to stand up a production stack to stand up development stacks as well!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a Dev Stack?
&lt;/h2&gt;

&lt;p&gt;Personal dev environments are becoming increasingly standardized with tools such as  &lt;a href="https://www.gitpod.io/"&gt;Gitpod&lt;/a&gt; and  &lt;a href="https://github.com/features/codespaces"&gt;Codespaces&lt;/a&gt;. As your team moves more toward standing up stacks, the difference between personal dev environments and dev stacks starts to fade. &lt;/p&gt;

&lt;p&gt;Dev stacks allow development teams to test their changes end to end before they're ever pushed to production. Using Infrastructure as Code, teams are assured that what they're testing is (apart from a few small config changes) identical to what will run in production. &lt;/p&gt;

&lt;p&gt;Having a central dev stack for your team is great. However, giving developers their own fully deployed stacks makes it even easier to test changes before they ever hit main. &lt;/p&gt;

&lt;p&gt;With individual dev stacks, your developers can deploy individual changes faster. This leads to greater flexibility and reliability over grouping many changes together into a single deployment. In addition, when building on cloud services, testing against a live service is better than attempting to replicate that service on a laptop.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Official" Dev Stage
&lt;/h2&gt;

&lt;p&gt;If your team hasn't started using dev stacks yet, the first step is to make a shared stack. This will be the start of your &lt;strong&gt;application pipeline&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;A pipeline is a series of stages through which you can push code changes, with each stage gradually widening your user base. On large and long-running projects, pipelines can involve multiple stages and become fairly complicated. However, a simple pipeline consisting of just a dev and a prod stage is a solid start for teams just dipping their toes into the DevOps waters. &lt;/p&gt;

&lt;p&gt;To create a dev stage, you first need to create a full application stack using a language such as AWS CloudFormation. Your stack should define everything that your application needs to run. &lt;/p&gt;

&lt;p&gt;If you already have this for your production stack, then you're almost there! You may need to make a few adjustments based on how you want to launch your dev stack. You have a couple of choices here. &lt;/p&gt;

&lt;h3&gt;
  
  
  Launch in Same AWS Account as Prod
&lt;/h3&gt;

&lt;p&gt;The simplest strategy is launching your dev stack in the same AWS account as your prod stack. To do this, you'll need to parameterize your Infrastructure as Code deployment so that it uses different prefixes or suffixes for resource names. This will avoid naming collisions with your prod stack. &lt;/p&gt;

&lt;p&gt;AWS CloudFormation makes this easy through the use of parameters. And actually, you don't even need to define your own parameters! You can use  &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html"&gt;CloudFormation's pseudo-parameters&lt;/a&gt; - predefined metadata parameters - to implement this quickly and easily. &lt;/p&gt;

&lt;p&gt;For example, assume you are defining an S3 bucket name and want to make sure it's distinct from your production bucket. Using CloudFormation, you can use the name of your stack as a prefix for the bucket name. In the example below (YAML), we use a regular CloudFormation stack parameter, AppVersion, and the full stack name as a pseudo-parameter to construct a unique name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BucketName: !Sub "{$AWS:StackName}-${AppVersion}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Launch in Separate AWS Account
&lt;/h3&gt;

&lt;p&gt;However, it's not a great idea to mix stacks in a single account. Ideally, you want your production stack hosted in its own AWS account. This allows you to place additional restrictions on access to production. Such restrictions are almost a necessity if your team handles personally identifiable information on customers in prod. &lt;/p&gt;

&lt;p&gt;If you launch your dev stack in a separate account, you don't need to worry about name conflicts. The only thing you should have to parameterize in this context are publicly facing values, such as your application's DNS endpoint. &lt;/p&gt;

&lt;h2&gt;
  
  
  A Dev Stack Per Developer
&lt;/h2&gt;

&lt;p&gt;Creating a central dev stack is definitely a huge step forward. However, there's still room for improvement! &lt;/p&gt;

&lt;p&gt;A central dev stack is fine for integrating changes that are getting close to production quality. Ideally, however, you want devs to be able to test in their own stacks before committing to a common Git branch. This reduces merge conflicts and helps ensure high-quality code early in the development process. &lt;/p&gt;

&lt;p&gt;If you already have code for launching a dev stack, launching individual dev stacks for developers shouldn't involve much additional work. The major issue is tracking stacks and controlling costs. Giving your entire dev team unfettered access to an AWS account - even a non-production one - can leave you scrambling to control your cloud spend. &lt;/p&gt;

&lt;p&gt;One approach is to use  &lt;a href="https://aws.amazon.com/controltower/"&gt;AWS Control Tower&lt;/a&gt;. Control Tower works in conjunction with  &lt;a href="https://aws.amazon.com/organizations/"&gt;AWS Organizations&lt;/a&gt;, which enables the creation and management of multiple AWS accounts under a single master account. You can use Control Tower in conjunction with  &lt;a href="https://aws.amazon.com/servicecatalog/"&gt;AWS Service Catalog&lt;/a&gt; to offer your dev stack as a service catalog offering that developers can install into their accounts. You can even go one step further and deploy the stack automatically as part of the account vending process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Defining Your Branching Strategy with Dev Stacks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://blog.tinystacks.com/stacks-stages-aws"&gt;As I discussed in my last article&lt;/a&gt;, it's important when creating your CI/CD pipeline to work out a branching strategy. One of the simplest strategies is to use  &lt;a href="https://www.atlassian.com/git/tutorials/comparing-workflows/feature-branch-workflow"&gt;feature branches&lt;/a&gt; for development work. In feature branching, devs create a branch per feature. Developers use pull requests to request integration of their work into main.&lt;/p&gt;

&lt;p&gt;Feature branching has several benefits. By using pull requests, other team members can review and vet a set of changes before they are integrated into the main branch. The entire process keeps your project's main branch clean and in a buildable, deployable state. &lt;/p&gt;

&lt;p&gt;Whatever branching strategy you choose, there's little doubt that giving developers their own fully deployed stacks makes it easier to test changes before they ever hit main. The result is faster deployments and more reliable code. &lt;/p&gt;

&lt;h2&gt;
  
  
  TinyStacks Makes Dev Stacks Easy
&lt;/h2&gt;

&lt;p&gt;Here at TinyStacks, we’re all about helping you deploy and manage your stacks in the cloud. We make it easy to transfer from a personal dev environment on your laptop into a development stage with a stack consistent with your production stack. Contact us today to find out more!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Guide to Stacks and Stages on AWS</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Tue, 23 Nov 2021 22:25:53 +0000</pubDate>
      <link>https://dev.to/tinystacks/a-guide-to-stacks-and-stages-on-aws-44fh</link>
      <guid>https://dev.to/tinystacks/a-guide-to-stacks-and-stages-on-aws-44fh</guid>
      <description>&lt;p&gt;Article by Jay Allen&lt;/p&gt;

&lt;p&gt;Learning AWS is complicated enough. But learning AWS is made more challenging when you're also still grappling with some of the major concepts of DevOps software deployments. In this article, I discuss two key concepts: stacks and stages. I also address how you can manage stacks and stages in AWS, along with other factors you need to consider when managing them in practice. &lt;/p&gt;

&lt;h2&gt;
  
  
  Stacks
&lt;/h2&gt;

&lt;p&gt;In simplest terms, a &lt;strong&gt;stack&lt;/strong&gt; is a unit of application deployment. Using stacks, developers can organize all of the resources required by an application or an application component as a single unit. This enables devs to deploy, tear down, and re-deploy their applications at will. &lt;/p&gt;

&lt;p&gt;Stacks can be stood up manually. However, it's better on cloud platforms to program the creation of your stack - e.g., using a scripting language such as Python. The ability to script stack deployments is known as Infrastructure as Code and is a hallmark of cloud computing platforms. Scripting your application deployments and bundling them into stacks reaps multiple benefits: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once your stack code is fully debugged, you can deploy your application repeatably and reliably. Scripting stack deployments eliminates the errors that inevitably occur in manual deployments. &lt;/li&gt;
&lt;li&gt;You can tear down stacks that aren't being used with a single script or command. This saves your team and company money. &lt;/li&gt;
&lt;li&gt;You can parameterize stacks to deploy different resources or use different configuration values. This lets you deploy multiple versions of your application. (Remember this - it'll be important soon!) &lt;/li&gt;
&lt;/ul&gt;
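&lt;p&gt;The benefits above can be sketched in a few lines of Python. The snippet below is only an illustration, assuming the AWS SDK for Python (boto3); the stack name, template file, and parameter values are hypothetical.&lt;/p&gt;

```python
# Sketch: scripted, parameterized stack deployment (all names are hypothetical).

def build_parameters(values):
    """Turn a plain dict into the Parameters list CloudFormation expects."""
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in values.items()]

def deploy_stack(name, template_body, values):
    import boto3  # imported here so build_parameters stays importable without AWS
    cfn = boto3.client("cloudformation")
    cfn.create_stack(
        StackName=name,
        TemplateBody=template_body,
        Parameters=build_parameters(values),
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=name)

def tear_down_stack(name):
    # A single call deletes every resource the stack created.
    import boto3
    boto3.client("cloudformation").delete_stack(StackName=name)

if __name__ == "__main__":
    with open("template.yaml") as f:  # hypothetical template file
        deploy_stack("dev-stack", f.read(), {"InstanceType": "t3.micro"})
```

&lt;p&gt;Calling &lt;code&gt;tear_down_stack&lt;/code&gt; when an environment is no longer needed is what turns the cost savings above into a one-liner.&lt;/p&gt;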

&lt;h2&gt;
  
  
  Stages
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;stage&lt;/strong&gt;, by contrast, is a deployment of your application for a particular purpose. With stages, you can deploy your application multiple times to vet its functionality with a progressively larger audience. Typical stages can include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev for developer coding and experimentation (only available to your dev team)&lt;/li&gt;
&lt;li&gt;Test for running unit tests (available to dev, test, and internal stakeholders)&lt;/li&gt;
&lt;li&gt;Stage for user acceptance testing (available to external alpha/beta testers) &lt;/li&gt;
&lt;li&gt;Prod for your publicly facing application (available to all customers) &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stages are part of  &lt;a href="https://blog.tinystacks.com/using-codebuild-and-codepipeline-to-deploy-aws-applications-easily" rel="noopener noreferrer"&gt;CI/CD pipelines, which I've discussed in detail before&lt;/a&gt;. By constructing your application as a pipeline, you can "flow" app changes from one stage to the next as you test them in each environment. This lets you vet changes multiple times in limited, controlled environments before releasing them to your users. &lt;/p&gt;

&lt;h2&gt;
  
  
  Stacks and Stages: Better Together
&lt;/h2&gt;

&lt;p&gt;Stacks and stages are a powerful one-two combination. With a properly parameterized stack, you can create whatever stages your application needs. Because you create each stage using the same source code, each stage's stack will contain the same resources and perform the same way as every other stage. &lt;/p&gt;

&lt;h2&gt;
  
  
  Stacks on AWS
&lt;/h2&gt;

&lt;p&gt;AWS fully embraces Infrastructure as Code. Nearly anything you can accomplish manually with the AWS Management Console can also be created programmatically. &lt;/p&gt;

&lt;p&gt;On AWS, you have several options for creating stacks. &lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CloudFormation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;AWS CloudFormation&lt;/a&gt;  is the official "AWS way" of creating stacks. Using CloudFormation, you can write templates using either JSON or YAML that specify which AWS resources your stack contains. &lt;/p&gt;

&lt;p&gt;CloudFormation isn't an imperative programming language like Python. Instead, it uses a declarative format for creating resources. This simplifies creating your infrastructure, as you don't need to be an expert in a particular programming language to stand up resources. Many CloudFormation templates can be constructed by making small tweaks to publicly available templates. (&lt;a href="https://aws.amazon.com/cloudformation/resources/templates/" rel="noopener noreferrer"&gt;AWS itself hosts many such sample templates and snippets&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;A key feature of CloudFormation is its support for parameters. Rather than hard-code values, you can declare them as parameters and supply them at run time when you create the stack in AWS. For example, the template snippet below (taken from  &lt;a href="https://s3.us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template" rel="noopener noreferrer"&gt;AWS's sample template for deploying Amazon EC2 instances&lt;/a&gt;) defines the parameters &lt;strong&gt;KeyPair&lt;/strong&gt;, &lt;strong&gt;InstanceType&lt;/strong&gt;, and &lt;strong&gt;SSHLocation&lt;/strong&gt;. By parameterizing these values, the same template can be used multiple times to create different EC2 instances of different sizes, in different networks, and with different security credentials. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1637595094509%2FdlsM-KV5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1637595094509%2FdlsM-KV5d.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;
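&lt;p&gt;As a schematic text version (hedged; not a verbatim copy of the AWS sample), a template's Parameters section looks roughly like this:&lt;/p&gt;

```json
{
  "Parameters": {
    "InstanceType": {
      "Type": "String",
      "Default": "t2.small",
      "Description": "EC2 instance type for the stack"
    },
    "SSHLocation": {
      "Type": "String",
      "Default": "0.0.0.0/0",
      "Description": "CIDR range allowed to SSH into the instance"
    }
  }
}
```

&lt;p&gt;At stack-creation time, any of these defaults can be overridden without touching the template itself.&lt;/p&gt;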

&lt;p&gt;The great thing about CloudFormation templates is that they make stacks easy to stand up and easy to tear down. Deleting a stack created from a CloudFormation template automatically cleans up and removes all of its resources. &lt;/p&gt;

&lt;h3&gt;
  
  
  Your Favorite Programming Language
&lt;/h3&gt;

&lt;p&gt;Not everyone wants to learn a new declarative language to create stacks. And some stacks might require the fine-grained control that an  &lt;a href="https://stackoverflow.com/questions/1784664/what-is-the-difference-between-declarative-and-imperative-paradigm-in-programmin" rel="noopener noreferrer"&gt;imperative programming language&lt;/a&gt; offers. &lt;/p&gt;

&lt;p&gt;Fortunately, &lt;a href="https://aws.amazon.com/tools/" rel="noopener noreferrer"&gt;AWS also produces software development kits (SDKs)&lt;/a&gt; for a variety of languages. Developers can use Python, Go, Node.js, .NET, and a variety of other languages to automate the creation and deletion of their stacks. &lt;/p&gt;

&lt;h3&gt;
  
  
  Which is Better?
&lt;/h3&gt;

&lt;p&gt;CloudFormation's major advantage is simplicity. In particular, CloudFormation makes deleting stacks a breeze. By contrast, with a programming language, you need to program the deletion of every resource. &lt;/p&gt;

&lt;p&gt;However, using a programming language for stack management offers much greater control than CloudFormation. For example, let's say that a resource fails to create. This can happen sometimes, not because you did anything wrong, but due to an underlying error in AWS, or a lack of available resources in your target region. &lt;/p&gt;

&lt;p&gt;Using CloudFormation, a failed resource will result in the stack stopping and everything you've created rolling back. Using a programming language, however, you could detect the failure and handle it more gracefully. For example, you may decide to retry the operation multiple times &lt;a href="https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/" rel="noopener noreferrer"&gt;using incremental backoff&lt;/a&gt;. &lt;/p&gt;
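&lt;p&gt;The retry-with-backoff idea can be sketched in plain Python. This is an illustrative helper, not AWS code; the attempt count, delay cap, and jitter range are all assumptions.&lt;/p&gt;

```python
# Sketch: retry a flaky operation with incremental (exponential) backoff and jitter.
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # Exponential backoff capped at 30s, plus jitter so many
            # clients don't retry in lockstep.
            delay = min(base_delay * (2 ** attempt), 30.0) + random.uniform(0, 1)
            time.sleep(delay)
```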

&lt;p&gt;Your choice between CloudFormation and programming language may also be affected by feature parity. In the past, some AWS teams have released features with SDK support but no initial CloudFormation support. &lt;/p&gt;

&lt;p&gt;Many of these issues with CloudFormation can be addressed using a hybrid CloudFormation/code approach.  &lt;a href="https://www.alexdebrie.com/posts/cloudformation-custom-resources/" rel="noopener noreferrer"&gt;Using CloudFormation custom resources&lt;/a&gt;, you can run code in AWS Lambda that orchestrates the creation of both AWS and non-AWS resources. You can also perform other programming-related tasks that might be required for your stack, such as database migration. &lt;/p&gt;
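&lt;p&gt;A custom resource is backed by a Lambda function that receives a CloudFormation event and reports success or failure. The sketch below only assembles the response payload; the &lt;code&gt;PhysicalResourceId&lt;/code&gt; and the commented work step are assumptions, and a real handler would HTTP PUT the JSON payload to the pre-signed &lt;code&gt;ResponseURL&lt;/code&gt; in the event (e.g., via the cfn-response helper).&lt;/p&gt;

```python
# Sketch of a CloudFormation custom resource handler (illustrative only).

def build_response(event, status, reason=""):
    """Assemble the payload CloudFormation expects back from a custom resource."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": "my-custom-resource",  # hypothetical id
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }

def handler(event, context):
    try:
        # Create, update, or delete the non-AWS resource here,
        # branching on event["RequestType"].
        return build_response(event, "SUCCESS")
    except Exception as exc:
        return build_response(event, "FAILED", reason=str(exc))
```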

&lt;p&gt;In the end, both approaches work fine. My personal recommendation would be to use AWS CloudFormation in conjunction with custom resources when needed. CloudFormation is well-supported and can easily be leveraged by other AWS features (as we will see shortly). &lt;/p&gt;

&lt;h2&gt;
  
  
  Stages on AWS
&lt;/h2&gt;

&lt;p&gt;The easiest way to manage stages on AWS is by using  &lt;a href="https://aws.amazon.com/codepipeline/" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;CodePipeline performs two major services. First, it orchestrates multiple AWS services to automate every critical part of your application deployment process. Using CodePipeline, you can ingest code from your code repository (such as GitHub), compile it using AWS CodeBuild, and deploy your application's resources using (you guessed it) AWS CloudFormation. &lt;/p&gt;

&lt;p&gt;Second (and most important for today's discussion), CodePipeline supports defining separate stages for your application. When you create a CodePipeline, you create stages that handle importing your source code from source control and building the code. From there, you can add additional deployment stages for dev, test, stage, prod, etc. &lt;/p&gt;

&lt;p&gt;In the screenshot below, you can see a minimal deployment pipeline. The third step after the CodeBuild project is a dev stage, intended for developer vetting of new changes. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1637599555799%2FftVSkJfMY.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1637599555799%2FftVSkJfMY.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We could easily add a new stage to our pipeline by clicking &lt;strong&gt;Edit&lt;/strong&gt; and then clicking the &lt;strong&gt;Add Stage&lt;/strong&gt; button. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1637600416975%2Fg3kGGs5RT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1637600416975%2Fg3kGGs5RT.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you add a stage, you can add one or more &lt;strong&gt;action groups&lt;/strong&gt;. Action groups support a large number of AWS services, including AWS CloudFormation. For our test group, for example, we could add two action groups: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A manual approval. This would stop changes from the dev branch from flowing to test automatically until someone approved the change in the AWS Management Console (e.g., after performing a code review). &lt;/li&gt;
&lt;li&gt;An AWS CloudFormation template to deploy our infrastructure stack for the test stage. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When using a CloudFormation template with CodePipeline, &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline-cfn-artifacts.html" rel="noopener noreferrer"&gt;you can specify a configuration file&lt;/a&gt; that passes in the parameters the template needs to build that stage properly. This might be as simple as prefixing created resources with the name "test" instead of "dev", or as complicated as specifying a data set to load into your database for testing. &lt;/p&gt;
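&lt;p&gt;Such a template configuration file is a small JSON document. Roughly (the parameter names here are hypothetical):&lt;/p&gt;

```json
{
  "Parameters": {
    "Stage": "test",
    "ResourcePrefix": "test"
  },
  "Tags": {
    "Environment": "test"
  }
}
```

&lt;p&gt;CodePipeline hands this file to the CloudFormation action, so each stage can deploy the same template with its own values.&lt;/p&gt;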

&lt;h2&gt;
  
  
  Managing Stacks and Stages in Practice
&lt;/h2&gt;

&lt;p&gt;In theory, stacks and stages are pretty simple concepts. In practice, however, it takes a lot of work and fine-tuning to get your CI/CD pipeline to the point where you can deploy your application reliably across multiple stages. Your team also needs to make some up-front decisions about how it's going to manage its source code and work product. &lt;/p&gt;

&lt;p&gt;Below are just a few factors to consider when devising your approach to stacks and stages on AWS. &lt;/p&gt;

&lt;h3&gt;
  
  
  Source Code Branching
&lt;/h3&gt;

&lt;p&gt;A key up-front decision with stacks and stages is how your team will flow changes from development into production. A big part of this decision is how you manage branches in source control. &lt;/p&gt;

&lt;p&gt;There are multiple possible branching patterns. On his Web site,  &lt;a href="https://martinfowler.com/articles/branching-patterns.html" rel="noopener noreferrer"&gt;programming patterns guru Martin Fowler has documented the key strategies&lt;/a&gt; in excruciating detail. On their Web site,  &lt;a href="https://docs.microsoft.com/en-us/azure/devops/repos/git/git-branching-guidance?view=azure-devops" rel="noopener noreferrer"&gt;Microsoft offers a simpler, more prescriptive approach&lt;/a&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define &lt;strong&gt;feature branches&lt;/strong&gt; that represent a single feature per branch. &lt;/li&gt;
&lt;li&gt;Use pull requests in source control to merge feature branches into your main branch for deployment. &lt;/li&gt;
&lt;li&gt;Keep your main branch clean and up to date. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is, of course, only one way to do things. The important thing is that your branching strategy is clean, simple, and easy to manage. Complex branching strategies that require multiple merges and resolution of merging conflicts end up becoming a nightmare for development teams and slow down deployment velocity. &lt;/p&gt;
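&lt;p&gt;In day-to-day terms, the feature-branch flow looks something like this (the repository and branch names are illustrative):&lt;/p&gt;

```shell
# Sketch of a feature-branch workflow (names are illustrative).
git init demo-repo
cd demo-repo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit --allow-empty -m "initial commit"   # stands in for an existing main branch
git checkout -b feature/login-form             # one branch per feature
git commit --allow-empty -m "add login form"   # commit feature work on the branch
# ...push the branch and open a pull request against main for review...
```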

&lt;h3&gt;
  
  
  Unit of Deployment
&lt;/h3&gt;

&lt;p&gt;Another fundamental consideration is the unit of deployment - i.e., how much of your application do you deploy at a time? &lt;/p&gt;

&lt;p&gt;Many legacy applications deploy an application's entire stack with every deployment. This so-called &lt;strong&gt;monolithic&lt;/strong&gt; architecture is easy to implement. However, it lacks flexibility and tends to result in hard-to-maintain systems. &lt;/p&gt;

&lt;p&gt;The popular alternative to monoliths is &lt;strong&gt;microservices&lt;/strong&gt;. In a microservices architecture, you break your application into a set of loosely coupled services that your application calls. You can get incredible deployment flexibility with microservices, as you can bundle each service as its own stack. However,  &lt;a href="https://blog.tinystacks.com/service-discovery-with-aws-cloud-map" rel="noopener noreferrer"&gt;managing versions and service discovery&lt;/a&gt; in a complex web of microservices can be daunting. &lt;/p&gt;

&lt;p&gt;You can also take an in-between approach. Some teams divide their apps up into so-called "macroservices" or "miniservices" - logical groupings of services and apps that can each be deployed as a single unit. Such deployments avoid the downsides of monolithic deployment while also steering clear of the complexity of microservices. &lt;/p&gt;

&lt;h3&gt;
  
  
  Data Management
&lt;/h3&gt;

&lt;p&gt;Next, there's how you'll manage data. At a minimum, your team needs to consider how to handle: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loading data into a dev/test/staging system for testing purposes. &lt;/li&gt;
&lt;li&gt;Managing schema changes to your data store (e.g., adding new tables/fields to relational database tables with a new release). &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some development frameworks, such as Django, include an object-document mapper (ODM) or object-relational mapper (ORM) that can automate database schema migrations. In these cases, your application simply needs a way to trigger a migration using the relevant scripts. The AWS Database Blog has some detailed tips for &lt;a href="https://aws.amazon.com/blogs/database/building-a-cross-account-continuous-delivery-pipeline-for-database-migrations/" rel="noopener noreferrer"&gt;incorporating database migrations into a pipeline&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Managing Secrets
&lt;/h3&gt;

&lt;p&gt;While automation is great, it introduces a devilish problem: managing secrets. Your application can access most AWS services using an AWS Identity and Access Management (IAM) role. However, it will likely also need to connect to other resources - databases, source control systems, dependent services - that require some sort of authentication information, such as access and secret keys. &lt;/p&gt;

&lt;p&gt;It can't be said clearly enough:  &lt;a href="https://blog.gitguardian.com/secrets-credentials-api-git/" rel="noopener noreferrer"&gt;storing secrets in source code is a huge no-no&lt;/a&gt;. And storing them in plain text somewhere (like an Amazon S3 bucket) isn't any better. &lt;/p&gt;

&lt;p&gt;Fortunately, AWS created the  &lt;a href="https://aws.amazon.com/secrets-manager/" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt; for just this purpose. Using Secrets Manager, you can authorize your application via IAM to read sensitive key/value pairs over a secure connection. You can even use CloudFormation to store secrets for resources such as databases into Secrets Manager as part of building a stack. &lt;/p&gt;
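&lt;p&gt;Reading a secret then comes down to a couple of SDK calls. The sketch below assumes boto3 and a hypothetical secret name; the JSON-parsing helper is kept separate from the AWS call.&lt;/p&gt;

```python
# Sketch: fetch a database credential from AWS Secrets Manager (names hypothetical).
import json

def parse_secret(secret_string):
    """Secrets are commonly stored as JSON key/value pairs."""
    return json.loads(secret_string)

def get_db_credentials(secret_name):
    import boto3  # imported here so parse_secret stays importable without AWS
    client = boto3.client("secretsmanager")
    resp = client.get_secret_value(SecretId=secret_name)
    return parse_secret(resp["SecretString"])
```

&lt;p&gt;The calling code is authorized via its IAM role, so no long-lived credentials ever live in source control.&lt;/p&gt;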

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Stacks and stages are cornerstone concepts of DevOps deployments. Once you can deploy your application as a single unit or collection of units, you can spin up any environment you need at any time. The payoff? Faster deployments and more reliable applications - and, as a consequence, happy customers! &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Flask CRUD API</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Fri, 19 Nov 2021 14:39:17 +0000</pubDate>
      <link>https://dev.to/tinystacks/flask-crud-api-3pl2</link>
      <guid>https://dev.to/tinystacks/flask-crud-api-3pl2</guid>
      <description>&lt;p&gt;Welcome back on the Docker and AWS series by &lt;a href="https://www.tinystacks.com/"&gt;TinyStacks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this Article, we will create a simple CRUD API using a Flask Application , Docker, Postgres&lt;/p&gt;

&lt;p&gt;Video Version:&lt;br&gt;
&lt;a href="https://youtu.be/QEaM4b3AliY"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k0VF1V_O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637327689060/5p6OFW87m.png" alt="image.png" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a folder&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;requirements.txt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create app (~50 loc)&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;Dockerfile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;docker-compose.yml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run database&lt;/li&gt;
&lt;li&gt;Check database&lt;/li&gt;
&lt;li&gt;Run python app&lt;/li&gt;
&lt;li&gt;Check that the table has been created&lt;/li&gt;
&lt;li&gt;Test endpoints (Postman)&lt;/li&gt;
&lt;li&gt;Test Get All endpoint&lt;/li&gt;
&lt;li&gt;Create a record(x3)&lt;/li&gt;
&lt;li&gt;Get a record&lt;/li&gt;
&lt;li&gt;Update record&lt;/li&gt;
&lt;li&gt;Delete record&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Create folder and step into it
&lt;/h2&gt;

&lt;p&gt;You can create a folder in any way you prefer. If you use a terminal, you can type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir  flask-crud-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, step into the folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd flask-crud-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, open this folder with your favorite IDE.  If you use Visual Studio Code, you can type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;code .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tuqa-cP5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637010432249/jgGtdkqO0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tuqa-cP5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637010432249/jgGtdkqO0.png" alt="image.png" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we’re ready to get started coding our Flask CRUD API application with the help of GitHub Copilot!&lt;/p&gt;

&lt;h2&gt;
  
  
  Create requirements.txt
&lt;/h2&gt;

&lt;p&gt;First of all, we need to define the dependent Python libraries for our application. The standard method in Python is to create a &lt;code&gt;requirements.txt&lt;/code&gt; file and list our dependencies there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---Y8vunNs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637010529696/ErUVIgItV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---Y8vunNs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637010529696/ErUVIgItV.png" alt="image.png" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a file called &lt;code&gt;requirements.txt&lt;/code&gt;. (If you use the Material Icon Theme, it will show a nice little Python icon, which is a handy way to spot typos!)&lt;/p&gt;

&lt;p&gt;Then we can type the dependencies for our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;flask
psycopg2-binary
Flask-SQLAlchemy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those dependencies are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;flask&lt;/code&gt;: the Python web framework&lt;/li&gt;
&lt;li&gt;&lt;code&gt;psycopg2-binary&lt;/code&gt;: creates the connection to the Postgres database&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Flask-SQLAlchemy&lt;/code&gt;: helps generate SQL queries without writing them manually&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create app (~50 loc)
&lt;/h3&gt;

&lt;p&gt;At the root level, create a file called &lt;code&gt;app.py&lt;/code&gt;. We will write our CRUD API in about 50 lines of code!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GYOyIVdD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637010900333/8uDE_DLCm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GYOyIVdD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637010900333/8uDE_DLCm.png" alt="image.png" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s specify the libraries we’ll use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonify&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="nn"&gt;flask_sqlalchemy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SQLAlchemy&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="nn"&gt;os&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, define the Flask app and how to run it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s"&gt;'__main__'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;debug&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read the database connection string from the &lt;code&gt;DATABASE_URL&lt;/code&gt; environment variable and initialize the SQLAlchemy instance that will handle the Postgres database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'SQLALCHEMY_DATABASE_URI'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'DATABASE_URL'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;SQLAlchemy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's define our data model. We’ll create a class named Item with just &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;content&lt;/code&gt; as properties. We’ll also add an auto-incrementing integer column named &lt;code&gt;id&lt;/code&gt;, which will act as the primary key for our table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Model&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Column&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Integer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;primary_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Column&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;unique&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nullable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Column&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;unique&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nullable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;
    &lt;span class="bp"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now a little bit of magic: this line tells SQLAlchemy to synchronize with the Postgres database, creating our database table automatically for us!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_all&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Define  REST endpoints
&lt;/h3&gt;

&lt;p&gt;Now we need to implement our CRUD endpoints. CRUD stands for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CREATE&lt;/li&gt;
&lt;li&gt;READ&lt;/li&gt;
&lt;li&gt;UPDATE&lt;/li&gt;
&lt;li&gt;DELETE&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the basic functions of every application.&lt;/p&gt;

&lt;p&gt;To retrieve a single item, we define this function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/items/&amp;lt;id&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'GET'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;del&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;__dict__&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'_sa_instance_state'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;jsonify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;__dict__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
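&lt;p&gt;As a side note, deleting &lt;code&gt;_sa_instance_state&lt;/code&gt; from &lt;code&gt;item.__dict__&lt;/code&gt; mutates the model instance. A small helper that builds the dictionary explicitly avoids that; this is just a sketch, assuming the &lt;code&gt;Item&lt;/code&gt; model has &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;content&lt;/code&gt; columns, as the routes in this article suggest.&lt;/p&gt;

```python
# Hedged alternative to `del item.__dict__['_sa_instance_state']`:
# serialize the model explicitly instead of mutating its __dict__.
# The column names (id, title, content) are assumptions based on
# the routes shown in this article.
def item_to_dict(item):
    return {"id": item.id, "title": item.title, "content": item.content}
```

&lt;p&gt;The endpoint could then simply return &lt;code&gt;jsonify(item_to_dict(item))&lt;/code&gt;.&lt;/p&gt;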



&lt;p&gt;To get all the items in the database, we define this function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/items'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'GET'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
  &lt;span class="n"&gt;items&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
  &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nb"&gt;all&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;del&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;__dict__&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'_sa_instance_state'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;__dict__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;jsonify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To create a new item:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/items'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'POST'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create_item&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
  &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'title'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'content'&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
  &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"item created"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To update an existing item:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/items/&amp;lt;id&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'PUT'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;update_item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;get_json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;filter_by&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'title'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'content'&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
  &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"item updated"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To delete an existing item:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'/items/&amp;lt;id&amp;gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s"&gt;'DELETE'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;delete_item&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
  &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;filter_by&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s"&gt;"item deleted"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! And all in less than 60 lines of code (blank lines included)!&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Dockerfile
&lt;/h3&gt;

&lt;p&gt;A Dockerfile is a text file that defines a set of commands used to create an image. Starting from this image, we will run our Python containers.&lt;/p&gt;

&lt;p&gt;Let's create a file called &lt;code&gt;Dockerfile&lt;/code&gt; (capital D, no extension).&lt;/p&gt;

&lt;p&gt;We could, of course, create a file with a different name. But this is the default one that Docker uses, and if we stick with it, we don't have to specify a file name when we build our Docker container image.&lt;/p&gt;

&lt;p&gt;This is the final file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FROM python:3.6-slim-buster

COPY requirements.txt &lt;span class="nb"&gt;.&lt;/span&gt;

RUN pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

COPY &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;

EXPOSE 80

CMD &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"flask"&lt;/span&gt;, &lt;span class="s2"&gt;"run"&lt;/span&gt;, &lt;span class="s2"&gt;"--host=0.0.0.0"&lt;/span&gt;, &lt;span class="s2"&gt;"--port=80"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's explain briefly what's going on here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;FROM&lt;/code&gt;: Sets the base image to use for subsequent instructions. FROM must be the first instruction in a Dockerfile.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COPY&lt;/code&gt;: Copy files or folders from source to the dest path in the image's filesystem. The first &lt;code&gt;COPY&lt;/code&gt; copies the requirements.txt file inside the filesystem of the image; the second one copies everything else. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RUN&lt;/code&gt;: Executes any commands on top of the current image as a new layer and commits the results. In this case, we are running &lt;code&gt;pip&lt;/code&gt; to install the Python libraries we need.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EXPOSE&lt;/code&gt;: Informs Docker of the port we will use at runtime. (Pro tip: this line is not strictly needed, but it makes the intent of the Dockerfile clear and facilitates the translation to the docker-compose.yml file.)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CMD&lt;/code&gt;: Provide defaults for an executing container. If an executable is not specified, then &lt;code&gt;ENTRYPOINT&lt;/code&gt; must be specified as well. There can only be one &lt;code&gt;CMD&lt;/code&gt; instruction in a Dockerfile.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create docker-compose.yml
&lt;/h3&gt;

&lt;p&gt;Now that we have created the Dockerfile, let's create the &lt;code&gt;docker-compose.yml&lt;/code&gt; file to make our life easier.&lt;/p&gt;

&lt;p&gt;This is the final file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;version: '3.9'

services:
  pythonapp:
    container_name: pythonapp
    image: pythonapp
    build: .
    ports:
      - "80:80"
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/postgres
    depends_on:
      - db

  db:
    container_name: db
    image: postgres:12
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=postgres
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata: {}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Let's explain what's happening line by line:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;version&lt;/code&gt;: '3.9' is the version of the Compose file format we are using.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;services&lt;/code&gt;: The top-level entry of our &lt;code&gt;docker-compose.yml&lt;/code&gt; file. The services are basically the containers.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pythonapp&lt;/code&gt;: The Python application we just wrote&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;container_name&lt;/code&gt;: Defines a custom name for our application. It’s the equivalent of using the &lt;code&gt;--name&lt;/code&gt; option at the command line when we run &lt;code&gt;docker run&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image&lt;/code&gt;: The image for this service (container). Here, we are defining a custom name just to use it locally. If we want to push our container to a public or private registry (a place to store Docker images, e.g. Docker Hub), we need to change the tag of the image (basically the name). We don’t need to do that now.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;build&lt;/code&gt;: We need this option if we are using our custom image and not an existing one. The dot after the colon is the build context, i.e. the path to the directory containing the Dockerfile, and it means "the same place where I’m running the &lt;code&gt;docker-compose.yml&lt;/code&gt; file". Please note that the &lt;code&gt;docker-compose.yml&lt;/code&gt; file and the &lt;code&gt;Dockerfile&lt;/code&gt; are at the same level.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports&lt;/code&gt;: A list of ports we want to expose to the outside. A good practice is to make the content a quoted string.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;environment&lt;/code&gt;: Key-value pairs. Here, we use them to define our custom URL to connect to the Postgres database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;depends_on&lt;/code&gt;: Express dependency between services. Service dependencies cause the following behaviors:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker-compose up&lt;/code&gt; starts services in dependency order. In our file, &lt;code&gt;db&lt;/code&gt; is started before &lt;code&gt;pythonapp&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker-compose up&lt;/code&gt; automatically includes a service’s dependencies. In our file, &lt;code&gt;docker-compose up pythonapp&lt;/code&gt; also creates and starts &lt;code&gt;db&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;docker-compose stop&lt;/code&gt; stops services in dependency order. In our file, &lt;code&gt;pythonapp&lt;/code&gt; is stopped before &lt;code&gt;db&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;db&lt;/code&gt;: Service for the Postgres database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;container_name&lt;/code&gt;: The custom name for this service's container, here also called &lt;code&gt;db&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image: postgres:12&lt;/code&gt; : We will not use our custom image in this case but an existing one, the one the Postgres team has created and pushed for us on Docker Hub. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports&lt;/code&gt;: A list of ports we want to expose to the outside. A good practice is to wrap this content in a quoted string. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;environment&lt;/code&gt;: Here we define three environment variables for the Postgres service. The keys are &lt;strong&gt;not&lt;/strong&gt; arbitrary, but are the ones defined in the official Postgres image. We can, of course, define the values of these environment variables (this is why the Postgres team has given them to us, to use them!). &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;volumes&lt;/code&gt;: Here we use a named volume called &lt;code&gt;pgdata&lt;/code&gt;. The part before the ':' is the name of the volume, and the part after the ':' is the destination path inside the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end of the file, we define the actual volume named &lt;code&gt;pgdata&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run the database service locally
&lt;/h2&gt;

&lt;p&gt;To run the database service locally, we can type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up -d db
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;-d&lt;/code&gt; option stands for &lt;code&gt;detached&lt;/code&gt;; it leaves our terminal available after starting the container.&lt;/p&gt;

&lt;p&gt;You can check the status of the running container by typing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--winiBetB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637013928114/EoG4cVYnU.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--winiBetB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637013928114/EoG4cVYnU.png" alt="image.png" width="800" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Check the database
&lt;/h3&gt;

&lt;p&gt;Let's step inside the Postgres container, directly from the command line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker exec -it db psql -U postgres
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H-pEfxy6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637014045457/-fnIlwiPL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H-pEfxy6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637014045457/-fnIlwiPL.png" alt="image.png" width="647" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But if we type &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;\dt
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;We will see this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;did not find any relations&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is correct, because we haven't run our Python container yet.&lt;/p&gt;

&lt;p&gt;To exit the psql process, type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;exit
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Or alternatively, just&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;\q
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h2&gt;
  
  
  Run Python app
&lt;/h2&gt;

&lt;p&gt;To run your Python application, type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up --build pythonapp
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;--build&lt;/code&gt; option builds your application image before running it. It's not strictly needed the first time you run this command, because Docker is going to build the image anyway. It becomes useful after you have run &lt;code&gt;docker compose up&lt;/code&gt; multiple times and made some changes to your app. &lt;/p&gt;

&lt;p&gt;Note: If you build an image using the same tag (name), the previous image will become a so-called "dangling image", with &lt;code&gt;&amp;lt;none&amp;gt; &amp;lt;none&amp;gt;&lt;/code&gt; as the repository and tag. To remove them, you can type &lt;code&gt;docker image prune&lt;/code&gt; and then &lt;code&gt;y&lt;/code&gt; to confirm.&lt;/p&gt;

&lt;p&gt;If you see something like this, you have successfully launched your Python Flask application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xbvICQpg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637014793254/ivWAYWNP0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xbvICQpg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637014793254/ivWAYWNP0.png" alt="image.png" width="800" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can once again check the running containers:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps -a
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XEAhe7j0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637014876203/HIwEP-X5C.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XEAhe7j0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637014876203/HIwEP-X5C.png" alt="image.png" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Side note: don't mind the "created" value; that's just me removing and stopping the containers for demo purposes! You should see them both running with a status of &lt;code&gt;some minutes ago&lt;/code&gt;.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Check the table has been created
&lt;/h2&gt;

&lt;p&gt;If you step again inside the Postgres container now, using the command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker exec -it db psql -U postgres
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;and you type:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;\dt
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;you will see that the table has been created automatically, without calling any endpoint!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ClJkB92--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015055167/07hpDberE.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ClJkB92--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015055167/07hpDberE.png" alt="image.png" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This was possible because of this line (around line 22) in the &lt;code&gt;app.py&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;db.create_all()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3&gt;
  
  
  Test endpoints (using Postman)
&lt;/h3&gt;

&lt;p&gt;Let's test this simple application! We will use Postman, but you can use any REST API testing tool that you prefer. &lt;/p&gt;
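&lt;p&gt;If you prefer scripting the calls instead of using a GUI tool, the same requests can be sketched in Python with only the standard library. This is just a sketch, not part of the Postman walkthrough; it assumes the app is listening on port 80 of localhost, as configured in the compose file above.&lt;/p&gt;

```python
# Hedged sketch: exercising the CRUD endpoints without Postman,
# using only the Python standard library. Assumes the Flask app
# is reachable at http://localhost:80 (see the compose file above).
import json
import urllib.request

BASE = "http://localhost:80"

def build_request(method, path, payload=None):
    """Build a JSON request for one of the CRUD endpoints."""
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        BASE + path, data=data, method=method,
        headers={"Content-Type": "application/json"},
    )

def call(method, path, payload=None):
    """Send the request and return the response body as text."""
    with urllib.request.urlopen(build_request(method, path, payload)) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(call("POST", "/items", {"title": "first", "content": "hello"}))
    print(call("GET", "/items"))
    print(call("PUT", "/items/1", {"title": "first", "content": "updated"}))
    print(call("DELETE", "/items/1"))
```

&lt;p&gt;Each call mirrors one of the Postman requests shown below.&lt;/p&gt;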

&lt;h3&gt;
  
  
  Get All
&lt;/h3&gt;

&lt;p&gt;Let's get all the items:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--okVPkzjl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015219195/wQ4KJaSYH.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--okVPkzjl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015219195/wQ4KJaSYH.png" alt="image.png" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create one (x3)
&lt;/h3&gt;

&lt;p&gt;Now let's create some new items:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4VnW7T_L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015240051/sSZR1Fejl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4VnW7T_L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015240051/sSZR1Fejl.png" alt="image.png" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JOSKTk8Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015814122/u777mydoK.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JOSKTk8Q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015814122/u777mydoK.png" alt="image.png" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U8q7zUA4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015831869/zVkDdi3gj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U8q7zUA4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015831869/zVkDdi3gj.png" alt="image.png" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apologies for my lack of imagination :) &lt;/p&gt;

&lt;h3&gt;
  
  
  Get one
&lt;/h3&gt;

&lt;p&gt;To get a single item, you can just make a GET request at the endpoint &lt;code&gt;/items/&amp;lt;id&amp;gt;&lt;/code&gt;, where &lt;code&gt;&amp;lt;id&amp;gt;&lt;/code&gt; is the unique ID of the item that you previously created. &lt;/p&gt;

&lt;p&gt;For example, to get the item with id 2:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g4EXRJwF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015878799/hr5c1Jyo-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g4EXRJwF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015878799/hr5c1Jyo-.png" alt="image.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Please note that we are not handling errors correctly in this example. If that id doesn't exist, we’ll get an error directly from the application and we won’t show an error message to the end-user.)&lt;/p&gt;
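&lt;p&gt;The missing-id guard mentioned above can be sketched in plain Python. Here a dictionary stands in for the database query so the example stays self-contained; in the real endpoint, the same check would wrap &lt;code&gt;Item.query.get(id)&lt;/code&gt;.&lt;/p&gt;

```python
# Hedged sketch of the guard the article says is missing: return a
# 404-style error instead of raising when the id does not exist.
# A plain dict stands in for the database so the example is runnable.
def fetch_or_error(store, item_id):
    item = store.get(item_id)
    if item is None:
        return {"error": "item not found"}, 404
    return item, 200
```

&lt;p&gt;In the Flask route, returning &lt;code&gt;jsonify({"error": "item not found"}), 404&lt;/code&gt; would give the end-user a proper error message instead of a stack trace.&lt;/p&gt;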

&lt;h3&gt;
  
  
  Update one
&lt;/h3&gt;

&lt;p&gt;To update an existing item, you can make a PUT request to &lt;code&gt;/items/&amp;lt;id&amp;gt;&lt;/code&gt; with the new values in the body:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FOtafqsf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015924460/clErYQqml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FOtafqsf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015924460/clErYQqml.png" alt="image.png" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Delete one
&lt;/h3&gt;

&lt;p&gt;Finally, we can delete an existing item from the database by making a DELETE request, appending an existing &lt;code&gt;&amp;lt;id&amp;gt;&lt;/code&gt; to the end of the URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M2zuqe-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015962974/ipkCqs1rT.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M2zuqe-R--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637015962974/ipkCqs1rT.png" alt="image.png" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you get all the items again, this will be the result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8TglIUGW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637016179841/G7ArATt4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8TglIUGW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637016179841/G7ArATt4f.png" alt="image.png" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Test the final status using the prompt
&lt;/h3&gt;

&lt;p&gt;You can also verify the final state directly in the database:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker exec -it db psql -U postgres
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;and then run this SQL query in psql (don't forget the final ';'):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;select * from item;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yGg5agsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637016059557/SJI7gwyec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yGg5agsb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637016059557/SJI7gwyec.png" alt="image.png" width="560" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All the code is available at this url: &lt;a href="https://github.com/tinystacks/aws-docker-templates-flask/tree/flask-local-postgres"&gt;https://github.com/tinystacks/aws-docker-templates-flask/tree/flask-local-postgres&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video Version:&lt;br&gt;
&lt;a href="https://youtu.be/QEaM4b3AliY"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k0VF1V_O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1637327689060/5p6OFW87m.png" alt="image.png" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Video offered by &lt;a href="https://www.tinystacks.com/"&gt;TinyStacks&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrating DigitalOcean database to AWS</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Thu, 14 Oct 2021 12:54:40 +0000</pubDate>
      <link>https://dev.to/tinystacks/migrating-digitalocean-database-to-aws-4fj8</link>
      <guid>https://dev.to/tinystacks/migrating-digitalocean-database-to-aws-4fj8</guid>
      <description>&lt;p&gt;Video Version: &lt;a href="https://youtu.be/3zLWCNn0Vqk" rel="noopener noreferrer"&gt;https://youtu.be/3zLWCNn0Vqk&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we’ll look at how to migrate an existing Postgres database on DigitalOcean created through their "managed database" function to a Relational Database Service (RDS) instance on AWS.&lt;/p&gt;

&lt;p&gt;For an introduction to RDS, you can read my previous article on &lt;a href="https://blog.tinystacks.com/migrate-local-database-on-docker-container-to-aws" rel="noopener noreferrer"&gt;migrating a local database to RDS&lt;/a&gt;, or &lt;a href="https://youtu.be/87G3iUl-tj0" rel="noopener noreferrer"&gt;watch the video&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed database on DigitalOcean with some data in it&lt;/li&gt;
&lt;li&gt;AWS account&lt;/li&gt;
&lt;li&gt;(optional) TablePlus or any other tool to manage a Postgresql DB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Steps: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check DigitalOcean and download the connection certificate&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://tableplus.com/" rel="noopener noreferrer"&gt;TablePlus&lt;/a&gt; (GUI tool for managing relational databases)&lt;/li&gt;
&lt;li&gt;Check DigitalOcean database&lt;/li&gt;
&lt;li&gt;Create RDS instance&lt;/li&gt;
&lt;li&gt;Test empty RDS instance&lt;/li&gt;
&lt;li&gt;Backup DigitalOcean DB&lt;/li&gt;
&lt;li&gt;Restore RDS database&lt;/li&gt;
&lt;li&gt;Final Test&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Check DigitalOcean and Download CA certificate
&lt;/h2&gt;

&lt;p&gt;First, let's visit our DigitalOcean account’s Database page. We should see something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633850207069%2FbJ5X7ICQ0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633850207069%2FbJ5X7ICQ0.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Download the CA certificate locally. We need this because managed databases on DigitalOcean don't allow insecure connections.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633851982852%2FyF_CJYDBs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633851982852%2FyF_CJYDBs.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TablePlus
&lt;/h2&gt;

&lt;p&gt;To access the DB, we can use whatever tool we want (command line interface, Pgadmin, etc.). In this demo, we will use &lt;a href="https://tableplus.com/" rel="noopener noreferrer"&gt;TablePlus&lt;/a&gt; (available on Mac/Windows), so if you want to follow along exactly I suggest you download it. We’ll use the free version. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633852039662%2Fy2m5aTO47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633852039662%2Fy2m5aTO47.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Check DigitalOcean Database
&lt;/h2&gt;

&lt;p&gt;Let's create a new connection on TablePlus:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633852186650%2FfqlnJcPa4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633852186650%2FfqlnJcPa4.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the details for your DigitalOcean database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host&lt;/li&gt;
&lt;li&gt;Port&lt;/li&gt;
&lt;li&gt;Username&lt;/li&gt;
&lt;li&gt;Password&lt;/li&gt;
&lt;li&gt;Database name&lt;/li&gt;
&lt;li&gt;SSL mode: REQUIRED&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember also to add the certificate we just downloaded.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633853428664%2FqxJUIfj-M.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633853428664%2FqxJUIfj-M.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Connect&lt;/strong&gt; and you will see the database with your data. In this case, we have just 2 tables and 3 inserts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633854766934%2Fp15NHXHXZ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633854766934%2Fp15NHXHXZ.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create RDS Instance
&lt;/h2&gt;

&lt;p&gt;Go to the AWS Console and search for "RDS":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633854992204%2Fu_6qS-0Ko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633854992204%2Fu_6qS-0Ko.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Create Database&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633855372747%2FW6VD6mhCQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633855372747%2FW6VD6mhCQ.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select &lt;strong&gt;Postgres&lt;/strong&gt; and &lt;strong&gt;version 12&lt;/strong&gt; so that we have access to the Free Tier (read the conditions before accepting).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633855452315%2FZ-MLOn7Rl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633855452315%2FZ-MLOn7Rl.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose a name for the database, as well as a username and password to access the DB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633856460380%2F7LNyYT_oh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633856460380%2F7LNyYT_oh.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make the instance accessible from the Internet:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633856712642%2F358bqiea2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633856712642%2F358bqiea2.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Double-check that this is a Free Tier configuration (it has limitations; please read them). Then, click &lt;strong&gt;Create Database&lt;/strong&gt;. &lt;/p&gt;
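The console choices above correspond to a handful of API parameters. As a sketch, this is roughly the parameter set behind the RDS CreateDBInstance call (with boto3, you would pass it to create_db_instance); the identifier, credentials, and exact 12.x version are placeholders:

```python
# Parameters mirroring the console choices above (placeholders throughout).
# With boto3 this would be: boto3.client("rds").create_db_instance(**params)
params = {
    "DBInstanceIdentifier": "my-database",  # instance name (placeholder)
    "Engine": "postgres",
    "EngineVersion": "12.8",                # any Postgres 12.x (placeholder)
    "DBInstanceClass": "db.t2.micro",       # a Free Tier-eligible class
    "AllocatedStorage": 20,                 # GiB, within Free Tier limits
    "MasterUsername": "postgres",           # placeholder
    "MasterUserPassword": "change-me",      # placeholder
    "PubliclyAccessible": True,             # reachable from the Internet
}
```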

&lt;p&gt;This will take a few minutes to complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633856792367%2Fc9c-1v6Jh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633856792367%2Fc9c-1v6Jh.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's check that our security group is configured correctly. Our machine should have access to the instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633880449214%2FwAktrFH55j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633880449214%2FwAktrFH55j.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Specifically, check if the inbound rules are set properly. In our case, they’re as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633880804639%2FIX_-9ACpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633880804639%2FIX_-9ACpf.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Test Empty RDS Instance
&lt;/h2&gt;

&lt;p&gt;Now let's test connecting to the RDS instance using TablePlus:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633859248130%2F_osJpwFrW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633859248130%2F_osJpwFrW.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Connect&lt;/strong&gt;. As you can see, the DB is empty for now:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633859374216%2F4tX-PILYV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633859374216%2F4tX-PILYV.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup DigitalOcean DB
&lt;/h2&gt;

&lt;p&gt;Now let’s use Tableplus to make a backup of the DigitalOcean database:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633873953447%2F2pWysYQln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633873953447%2F2pWysYQln.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose a folder and save the file, called &lt;code&gt;defaultdb.dump&lt;/code&gt; in our case (the file is named after your database):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633873793413%2FO_v_gbkEw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633873793413%2FO_v_gbkEw.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you see this, it worked:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633874000509%2F9VXsAtPp8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633874000509%2F9VXsAtPp8.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Restore RDS Database
&lt;/h2&gt;

&lt;p&gt;To restore the database, click &lt;strong&gt;Restore&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633874050280%2FpOhriVRy-.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633874050280%2FpOhriVRy-.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the &lt;code&gt;aws&lt;/code&gt; connection. Then, select the &lt;code&gt;postgres&lt;/code&gt; database and click &lt;strong&gt;Start restore&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633874100962%2FWsFln0TVx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633874100962%2FWsFln0TVx.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the dump file, in our case &lt;code&gt;defaultdb.dump&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633875156759%2FUEULTNmb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633875156759%2FUEULTNmb6.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Test
&lt;/h3&gt;

&lt;p&gt;As a final test, let's access the RDS database again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633876573537%2FUPdSzUQ8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633876573537%2FUPdSzUQ8o.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here we can see our tables and inserts again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633876619400%2FqmtVNr-0O.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1633876619400%2FqmtVNr-0O.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And we’re done!&lt;/p&gt;

&lt;p&gt;Video Version: &lt;a href="https://youtu.be/3zLWCNn0Vqk" rel="noopener noreferrer"&gt;https://youtu.be/3zLWCNn0Vqk&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Choosing Between AWS Lambda and Docker</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Thu, 23 Sep 2021 19:10:05 +0000</pubDate>
      <link>https://dev.to/tinystacks/choosing-between-aws-lambda-and-docker-2e6k</link>
      <guid>https://dev.to/tinystacks/choosing-between-aws-lambda-and-docker-2e6k</guid>
      <description>&lt;p&gt;Article by Jay Allen&lt;/p&gt;

&lt;p&gt;One of the great things about AWS is the vast array of features available to software developers. Sadly, one of the most confusing things about AWS is...the vast array of features available to developers!&lt;/p&gt;

&lt;p&gt;AWS provides multiple methods for deploying applications into the cloud. Two of these methods - AWS Lambda and Docker - have grown rapidly in popularity over the past several years. In this article, we compare the benefits of each and discuss when you might want to choose one over the other. &lt;/p&gt;

&lt;h1&gt;
  
  
  AWS Lambda
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt;  is a "serverless" service that enables running code in the cloud. With Lambda, application developers can package code written &lt;a href="https://aws.amazon.com/lambda/faqs/"&gt;in a variety of programming languages&lt;/a&gt;  - including Java, Go, C#, Python, Powershell, Node.js, and Ruby - into a callable function that complies with their language's Lambda interface. They can then upload these Lambda functions to their AWS accounts, where they can be executed from anywhere over the Internet. &lt;/p&gt;

&lt;p&gt;The word "serverless" is a bit of a misnomer here; obviously, AWS didn't find some magical way to run code without compute capacity! "Serverless" here means that the compute power used to run this code doesn't run in your AWS account. Rather, it's executed on one of a series of computing clusters run by AWS itself. This frees development teams up to focus on the business logic of their application rather than on managing compute capacity. &lt;/p&gt;

&lt;p&gt;Lambda functions  &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html"&gt;can be called, or &lt;em&gt;invoked&lt;/em&gt;&lt;/a&gt;, through a variety of methods. One of the most common is by connecting your Lambda functions to AWS API Gateway, which exposes them as REST API calls. Lambda functions can also be used  &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html"&gt;to implement customization and back-end processing logic for a large number of AWS services&lt;/a&gt;, including Amazon DynamoDB, Amazon Kinesis, and Amazon Simple Queue Service, among others. Lambda functions may also execute as scheduled tasks, and can even be executed directly from the AWS Command Line Interface (CLI) and the AWS Console. &lt;/p&gt;

&lt;p&gt;AWS Lambda can be thought of as the original serverless technology on AWS. It wasn't the first serverless technology on the block. &lt;a href="https://dashbird.io/blog/origin-of-serverless/"&gt;That honor may go to Google's App Engine&lt;/a&gt;, which has been doing its thing since 2008. (Lambda, first released in 2015, is comparatively a youngin'.) But it helped inspire a boom in the serverless technology industry that continues to this day. &lt;/p&gt;

&lt;h1&gt;
  
  
  Docker
&lt;/h1&gt;

&lt;p&gt;In the bad ol' days of software deployment, developers threw their code onto clusters of production servers that might all have wildly different configurations. A web application might work for one user and then fail for a second user if the server to which the request was routed lacked a certain shared library or configuration setting. &lt;/p&gt;

&lt;p&gt;Docker was created specifically to resolve this nightmare scenario. A Docker container is a unit of software that contains everything - code, dependent libraries, and configuration files - that an application requires to run. The container is then deployed to and run on a virtual machine. &lt;/p&gt;

&lt;p&gt;The utility of Docker containers lies in their "run once, run anywhere" nature. Once you test a Docker container and verify that it functions as expected, that same container will run on any system to which you deploy it. &lt;/p&gt;

&lt;p&gt;Unlike Lambda, Docker isn't inherently "serverless". Docker is best thought of as a packaging and deployment mechanism. There are multiple ways on AWS to run a Docker container, including: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;a href="https://aws.amazon.com/ecs/"&gt;Elastic Container Service&lt;/a&gt;. ECS is AWS's scalable, enterprise-grade solution for running Docker containers. Containers can be deployed either on an Amazon EC2 cluster hosted in your AWS account or using Fargate, AWS's serverless container deployment solution. (For more, check out my recent article on  &lt;a href="https://blog.tinystacks.com/ecs-serverless-or-not-fargate-vs-ec2-clusters"&gt;using EC2 clusters vs. Fargate for your Docker deployments&lt;/a&gt;.)&lt;/li&gt;
&lt;li&gt; &lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html"&gt;Elastic Beanstalk&lt;/a&gt;.  AWS's "all-in-one" deployment technology will run your Docker container on a Docker-enabled EC2 instance. &lt;/li&gt;
&lt;li&gt; &lt;a href="https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/"&gt;As an AWS Lambda Function&lt;/a&gt; . Here's where things get &lt;em&gt;really&lt;/em&gt; confusing! Yes, you can implement code in a Docker container and expose it via a Lambda function. I'll talk a little about who you might want to do this below. &lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Microservices, Served Two Ways
&lt;/h1&gt;

&lt;p&gt;Both AWS Lambda and Docker containers are solid choices for deploying microservices architectures on AWS: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda functions map handily to REST API endpoints. You can use Lambda functions in conjunction with AWS API Gateway to quickly build out a REST API complete with advanced features such as user authentication and API throttling. &lt;/li&gt;
&lt;li&gt;Docker makes it easy to implement REST APIs using your favorite REST API framework - such as Express for Node.js, Flask, Django, and many others. Because a Docker container is a deployable unit, you can easily partition your REST APIs into logical units and manage them through separate CI/CD pipelines.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Lambda vs. Docker: Who Wins?
&lt;/h1&gt;

&lt;p&gt;But this raises the perennial question: Which one is &lt;em&gt;better&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The first thing to point out is that this isn't necessarily an either/or question.&lt;/strong&gt; Both Lambda and Docker are powerful technologies that development teams may choose to combine within a single project. For example, you may decide to implement your microservice as a series of Docker containers, and then use Amazon Simple Queue Service in conjunction with AWS Lambda functions to implement a loosely coupled communications framework between services. &lt;/p&gt;

&lt;p&gt;But let's set that aside for now and focus on a narrower question: Which technology should you choose &lt;strong&gt;when implementing a microservices architecture&lt;/strong&gt;? &lt;/p&gt;

&lt;p&gt;As with most things in the world of the Cloud, there's no clear-cut answer here. But let's look at a few factors you should consider when making this decision for your own project. &lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Languages
&lt;/h2&gt;

&lt;p&gt;When it comes to choice of programming languages and frameworks, Docker is the clear winner. AWS Lambda's support for programming languages is limited to the languages for which it defines an integration API. Docker, meanwhile, can host any language or framework that can run on a Dockerized Linux or Windows operating system. &lt;/p&gt;

&lt;h2&gt;
  
  
  Portability
&lt;/h2&gt;

&lt;p&gt;The language and framework issue leads me to another issue: &lt;strong&gt;cloud lock-in&lt;/strong&gt;. AWS Lambda isn't an industry standard - it's AWS's proprietary serverless tech. If you need to move to a new cloud provider (Azure, GCP) for any reason, your code may require significant rework to function on the new provider's equivalent serverless solution.  &lt;/p&gt;

&lt;p&gt;By contrast, Docker is pretty much a &lt;em&gt;de facto&lt;/em&gt; standard. A Docker container that works on AWS's ECS will also run on  &lt;a href="https://azure.microsoft.com/en-us/services/app-service/containers/"&gt;Azure App Service&lt;/a&gt;, &lt;a href="https://cloud.google.com/run"&gt;Google Cloud Run&lt;/a&gt;, and  &lt;a href="https://kubernetes.io/"&gt;Kubernetes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you still want to leverage Lambda but are concerned about portability, I'd recommend  &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html"&gt;following AWS's recommendations around Lambda code design&lt;/a&gt;. You can easily separate your function's execution logic out from the Lambda execution environment. This reduces your dependency on Lambda and makes your code more portable. &lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling
&lt;/h2&gt;

&lt;p&gt;If your microservice could potentially be called hundreds of thousands or even millions of times a day (or even per &lt;em&gt;hour&lt;/em&gt;), you'll want to ensure it can scale automatically to meet user demand. Fortunately, both AWS Lambda and Docker offer plenty of options to create a highly scalable microservice. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html"&gt;AWS Lambda creates an instance of your function to serve traffic to users&lt;/a&gt; . As that instance reaches capacity, Lambda will automatically create new instances of your function to meet demand. Lambda can "burst" from between 500 up to 3,000 instances per region to handle sudden traffix influxes, and can then scale up to 500 new instances every minute. &lt;/p&gt;

&lt;p&gt;AWS also provides multiple options for scaling Docker containers. Containers deployed using Fargate, AWS's serverless container deployment solution,  &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-service-auto-scaling/"&gt;can be configured to scale out based on Amazon CloudWatch alarms&lt;/a&gt;. If you're deploying Docker containers to an EC2 cluster in your AWS account, you can even  &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-auto-scaling.html"&gt;scale out the size of your cluster&lt;/a&gt; .&lt;/p&gt;

&lt;h2&gt;
  
  
  Execution Speeds
&lt;/h2&gt;

&lt;p&gt;In general, both AWS Lambda and Docker containers can be configured to provide the performance required by most applications. &lt;/p&gt;

&lt;p&gt;However, I'd be remiss if I didn't note &lt;strong&gt;the infamous Lambda cold start issue&lt;/strong&gt;. Remember how I said above that Lambda creates a new instance of your function when it needs to scale out? This process requires time: the Lambda function code has to be downloaded to an EC2 instance in AWS's Lambda server farm, and the execution environment and its associated dependencies also take time to load and start. This is known as a &lt;strong&gt;cold start&lt;/strong&gt;. It has a particularly hard impact on Java and .NET applications, both of which have weighty runtime environments. &lt;/p&gt;

&lt;p&gt;Fortunately, as Mike Roberts at Symphonia points out,  &lt;a href="https://blog.symphonia.io/posts/2020-06-30_analyzing_cold_start_latency_of_aws_lambda"&gt;cold start isn't an issue for high-demand applications&lt;/a&gt;. It only becomes a factor in low-execution environments - e.g., when using a Lambda function as a callback from another AWS service, such as CodePipeline. &lt;/p&gt;

&lt;h2&gt;
  
  
  Application Dependencies
&lt;/h2&gt;

&lt;p&gt;When it comes to dependency management - libraries that your application depends upon - Docker is king. As I discussed earlier, a Docker container is a self-contained package containing everything your application needs to run. &lt;/p&gt;

&lt;p&gt;It's also possible to ship dependencies with your AWS Lambda functions as part of the function's ZIP file. However, things get complicated when you need to package OS-native dependencies. Furthermore, Lambda packages max out at 250MB, which can be an issue when packaging large dependency frameworks. &lt;/p&gt;

&lt;p&gt;Fortunately, AWS Lambda's support for Docker containers means you can get the best of both worlds. By implementing your functions as Docker containers, you can package any dependency your application requires and ensure it always runs as intended. Docker containers on AWS Lambda can be up to 10GB in size, which is plenty of space for the vast majority of applications. &lt;/p&gt;

&lt;h2&gt;
  
  
  Long-Running Tasks
&lt;/h2&gt;

&lt;p&gt;If your code is doing some sort of batch processing - processing DynamoDB events, filtering an Amazon Kinesis stream, generating large images, etc. - you'll need to concern yourself with execution times. Lambda functions can only run for up to 15 minutes before the service will time out. By contrast, Docker containers have no built-in limitations on workload runtimes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment and Management
&lt;/h2&gt;

&lt;p&gt;As I mentioned earlier, Docker provides a simple and easy-to-understand deployment model that enables packaging a single microservice into a single Docker container. This is where AWS Lambda has often been at a disadvantage: since Lambda is a function-based service, it's proven more challenging to manage an entire service or application as a collection of interconnected Lambda functions. &lt;/p&gt;

&lt;p&gt;Fortunately, new tools have come out over the past several years to address exactly this problem.  &lt;a href="https://aws.amazon.com/serverless/sam/"&gt;AWS's Serverless Application Model (SAM)&lt;/a&gt;  enables developers to design, develop, and deploy entire serverless apps directly onto AWS using Lambda and CloudFormation. Other tools, such as the open-source project  &lt;a href="https://www.serverless.com/"&gt;Serverless&lt;/a&gt;, aim to create similar zero-infrastructure deployment experiences for serverless applications on AWS and other cloud providers. &lt;/p&gt;

&lt;h2&gt;
  
  
  Cost
&lt;/h2&gt;

&lt;p&gt;In general, a "serverless" solution is going to cost you more than a non-serverless solution. We at TinyStacks discovered this recently  &lt;a href="https://blog.tinystacks.com/ecs-serverless-or-not-fargate-vs-ec2-clusters"&gt;when we moved all of our container workloads from Fargate to our own ECS EC2 clusters&lt;/a&gt;, resulting in a cost savings of 40%. &lt;/p&gt;

&lt;p&gt;While we haven't done any direct cost comparisons with AWS Lambda, evidence from others suggests that it's one of the least cost-effective solutions going. An analysis this year by Eoin Shanaghy and Steef-Jan Wiggers on InfoQ  &lt;a href="https://www.infoq.com/articles/aws-lambda-price-change/"&gt;found that running a workload on AWS Lambda can cost up to 7.5 times more&lt;/a&gt;  than running the same workload on AWS Fargate with spot capacity. Given that we manage to run our workloads at a 40% discount on EC2 clusters compared to AWS Fargate, this shows you just how pricey Lambda really is.&lt;/p&gt;

&lt;h1&gt;
  
  
  Our Recommendation
&lt;/h1&gt;

&lt;p&gt;For large-scale microservice workloads, we've found running Docker containers on our own tightly managed EC2 cluster using ECS to be the ideal solution. &lt;/p&gt;

&lt;p&gt;You may get good mileage from using Lambda selectively for smaller-scale workloads. However, we would recommend implementing your code in Docker containers wherever possible - even when Lambda is your preferred deployment mechanism. Docker containers not only port well across cloud providers but can also be used with numerous AWS services. This makes it easy to change your deployment and hosting strategy in response to your company's changing needs. &lt;/p&gt;
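&lt;p&gt;To make that concrete, here is a minimal sketch of what packaging a small Node.js service into a Docker container can look like. The base image, file names, and port are assumptions - adapt them to your own service:&lt;/p&gt;

```dockerfile
# Minimal sketch of a Dockerfile for a Node.js microservice
FROM node:16-alpine
WORKDIR /app
# Install production dependencies first so this layer caches well
COPY package*.json ./
RUN npm ci --only=production
# Copy the application source
COPY . .
EXPOSE 80
CMD ["node", "index.js"]
```

&lt;p&gt;An image built from a file like this runs the same way on ECS, Fargate, or another provider's container service, which is exactly the portability argument above.&lt;/p&gt;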

</description>
    </item>
    <item>
      <title>CRUD API Express with RDS, ECS and Docker</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Fri, 17 Sep 2021 13:24:54 +0000</pubDate>
      <link>https://dev.to/tinystacks/crud-api-express-with-rds-ecs-and-docker-46fg</link>
      <guid>https://dev.to/tinystacks/crud-api-express-with-rds-ecs-and-docker-46fg</guid>
      <description>&lt;h3&gt;
  
  
  Video Version
&lt;/h3&gt;

&lt;p&gt;Do you prefer the Video Version? &lt;br&gt;
&lt;a href="https://youtu.be/0sbkdX4zTWE" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631882978925%2F6wUydrxDn.png" alt="youtube video"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will see how we can connect an ECS instance, based on an image on ECR, to an RDS Postgres instance. &lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed on your machine&lt;/li&gt;
&lt;li&gt;AWS account&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Definitions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RDS&lt;/strong&gt;: Relational Database Service. The AWS service for relational databases such as Postgres. (For more on RDS and Postgres, see my previous article.)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ECR&lt;/strong&gt;: Elastic Container Registry. Stores Docker images directly on AWS (essentially, an alternative to &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ECS&lt;/strong&gt;: Elastic Container Service. Deploy and run an application based on an image stored on a registry (it works with both Docker Hub and ECR). &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Our Steps Today
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create an RDS Postgres instance&lt;/li&gt;
&lt;li&gt;Test the instance&lt;/li&gt;
&lt;li&gt;Create the ECR repository using the AWS command line interface&lt;/li&gt;
&lt;li&gt;Clone the repository&lt;/li&gt;
&lt;li&gt;Create the Docker image&lt;/li&gt;
&lt;li&gt;Tag the image to match the ECR repository&lt;/li&gt;
&lt;li&gt;Push the image to ECR&lt;/li&gt;
&lt;li&gt;Create the ECS task based on the ECR repository image, setting environment variables&lt;/li&gt;
&lt;li&gt;Final Test&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Create the RDS Postgres Instance
&lt;/h3&gt;

&lt;p&gt;Go to the AWS Console and search for RDS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698563522%2FEs3q6e62y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698563522%2FEs3q6e62y.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then click on &lt;strong&gt;Create Database&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698607584%2F7UOO_06cj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698607584%2F7UOO_06cj.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create a PostgreSQL instance. We’ll use version 12.5-R1 so we can take advantage of AWS’ free tier:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698721411%2FFgejgCWNF.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698721411%2FFgejgCWNF.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Settings&lt;/strong&gt;, input values for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DB instance identifier (the name)&lt;/li&gt;
&lt;li&gt;Master user&lt;/li&gt;
&lt;li&gt;Master password + Confirm password (choose a reasonably secure password)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698896995%2FsNQNKb0L_.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631698896995%2FsNQNKb0L_.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For connectivity, make sure the instance is accessible from the outside. Under &lt;strong&gt;Public access&lt;/strong&gt;, select &lt;strong&gt;Yes&lt;/strong&gt;. If you have network issues, check your security group’s inbound rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631699045293%2Fv97e69A5L.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631699045293%2Fv97e69A5L.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you’ve finished, click &lt;strong&gt;Create database&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631699267814%2FJkbsUK78O.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631699267814%2FJkbsUK78O.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a review of our RDS Postgres instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631699453370%2FQ2Oo0ByDi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631699453370%2FQ2Oo0ByDi.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Test the Instance
&lt;/h3&gt;

&lt;p&gt;To test if the RDS instance is accessible, we can use the &lt;code&gt;psql&lt;/code&gt; command. You can also test with other tools such as &lt;code&gt;pgadmin&lt;/code&gt; or your local application. &lt;/p&gt;

&lt;p&gt;In the command below, replace &lt;code&gt;RDS_INSTANCE_IP&lt;/code&gt; with the endpoint shown in the RDS instance summary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;psql --host RDS_INSTANCE_IP --port 5432 --username postgres
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the ECR Repository Using the Command Line Interface
&lt;/h3&gt;

&lt;p&gt;ECR stands for Elastic Container Registry, and it's the image registry for AWS. Think about it as a place to store and retrieve your Docker images.&lt;/p&gt;

&lt;p&gt;In the AWS Console, type &lt;code&gt;ECR&lt;/code&gt; on the search bar and click on &lt;strong&gt;Elastic Container Registry&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631694550069%2Fwb75EZ1JV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631694550069%2Fwb75EZ1JV.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The UI looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631694838453%2FDw6Hqq20b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631694838453%2FDw6Hqq20b.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a good way to check your existing repositories. But to create one, we’ll use the command-line interface.&lt;/p&gt;

&lt;p&gt;Get your credentials using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;aws sts get-caller-identity
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
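&lt;p&gt;The output of &lt;code&gt;get-caller-identity&lt;/code&gt; includes your account ID. If you only want the ID itself, the &lt;code&gt;--query&lt;/code&gt; flag can extract it. A small sketch (the fallback value below is a placeholder for when no credentials are configured):&lt;/p&gt;

```shell
AWS_ACCOUNT_ID="123456789012"  # placeholder: replace with your real account ID
# With configured AWS CLI credentials, fetch the ID instead of hard-coding it:
if aws sts get-caller-identity >/dev/null 2>/dev/null; then
  AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
fi
echo "$AWS_ACCOUNT_ID"
```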



&lt;p&gt;Then log in to ECR using those credentials and the region you prefer. Replace &amp;lt;REGION&amp;gt; with the region of your choice, and replace &amp;lt;AWS_ACCOUNT_ID&amp;gt; with your AWS account ID (you can get it from the command above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;aws ecr get-login-password --region &amp;lt;REGION&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;| docker login &lt;span class="nt"&gt;--username&lt;/span&gt; AWS &lt;span class="nt"&gt;--password-stdin&lt;/span&gt; &amp;lt;AWS_ACCOUNT_ID&amp;gt;.dkr.ecr.&amp;lt;REGION&amp;gt;.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
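&lt;p&gt;Note that the command above only logs Docker in to the registry; the repository itself also needs to exist. It can be created from the CLI as well - a sketch, where the repository name and region are just examples:&lt;/p&gt;

```shell
REGION="us-east-1"        # assumption: use your preferred region
REPO_NAME="crud-express"  # assumption: any repository name works
# Requires configured AWS credentials; skipped here if the CLI is unavailable.
if command -v aws >/dev/null 2>/dev/null; then
  aws ecr create-repository --repository-name "$REPO_NAME" --region "$REGION"
fi
echo "$REPO_NAME"
```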



&lt;p&gt;Let's check if the repository has been created by checking the AWS Console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695016613%2F46TgTux2H.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695016613%2F46TgTux2H.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice! Now let's clone and work on the repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clone the Repository
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631693703897%2FhqeeuJpQ0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631693703897%2FhqeeuJpQ0.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clone the aws-express-template repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;git clone https://github.com/tinystacks/aws-docker-templates-express.git
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, &lt;code&gt;cd&lt;/code&gt; into the directory on the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;cd aws-docker-templates-express
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and open the project with your favorite IDE. If you have Visual Studio Code, you can type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;code .
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Check App and Create the Docker Image
&lt;/h3&gt;

&lt;p&gt;If you want to test the project locally, you can install the dependencies (optional - requires npm installed locally):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm i
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm run build
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And to start it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm run start
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we build the image, let's check the file inside the config folder called &lt;code&gt;postgres.ts&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here you can define some environment variables to access your database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;PG_HOST&lt;/code&gt;: The address of the database. We’ll use the RDS instance address here later.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PG_PORT&lt;/code&gt;: The port of the database. The default one for Postgres is 5432.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PG_USER&lt;/code&gt;: The default user of the database&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PG_PASSWORD&lt;/code&gt;: The password for the user of the database.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PG_DATABASE&lt;/code&gt;: The database we want to access. Note that a database called &lt;code&gt;postgres&lt;/code&gt; is the default for a Postgres instance.&lt;/li&gt;
&lt;/ul&gt;
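&lt;p&gt;For a quick local test, these variables can be exported before running the container. A sketch - all values here are assumptions for local use; on ECS, &lt;code&gt;PG_HOST&lt;/code&gt; will be the RDS endpoint instead:&lt;/p&gt;

```shell
export PG_HOST="localhost"            # later: your RDS instance endpoint
export PG_PORT="5432"                 # Postgres default port
export PG_USER="postgres"
export PG_PASSWORD="example-password" # use your own reasonably secure password
export PG_DATABASE="postgres"         # default database on a Postgres instance
# Pass them through to the container when running locally:
# docker run -p 80:80 -e PG_HOST -e PG_PORT -e PG_USER -e PG_PASSWORD -e PG_DATABASE crud-express
echo "$PG_USER@$PG_HOST:$PG_PORT/$PG_DATABASE"
```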

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631693966439%2Fwx0ehNYYR.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631693966439%2Fwx0ehNYYR.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To build the image with Docker, use this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;docker build -t crud-express .
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The name doesn't really matter here, as we will retag the local image in order to push it to the ECR repository we created earlier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tag the Image to the ECR Repository
&lt;/h3&gt;

&lt;p&gt;To tag the local image in order to push it to the ECR repository, you need to copy the image URI. For example, you can copy it from the Amazon Console’s list of your repositories in ECR:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695275982%2FDJ7WuwwFa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695275982%2FDJ7WuwwFa.png" alt="image.png"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;docker tag crud-express &amp;lt;AWS_ECR_REPO_URI&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
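&lt;p&gt;If you prefer not to copy the URI from the Console, you can also assemble it yourself - ECR repository URIs follow a fixed pattern. A sketch, where the account ID and region are examples:&lt;/p&gt;

```shell
AWS_ACCOUNT_ID="123456789012"  # example account ID - use your own
REGION="us-east-1"             # example region
REPO_NAME="crud-express"
AWS_ECR_REPO_URI="$AWS_ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO_NAME"
# Then tag (and later push) the local image with it:
# docker tag crud-express "$AWS_ECR_REPO_URI"
echo "$AWS_ECR_REPO_URI"
```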



&lt;h3&gt;
  
  
  Push the Image to ECR
&lt;/h3&gt;

&lt;p&gt;Just use the same tag as before to push the image tagged locally to your ECR repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;docker push  &amp;lt;AWS_ECR_REPO_URI&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;  
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this, wait a couple of minutes for the push to complete. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695485489%2FRWkG77sZc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695485489%2FRWkG77sZc.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create an ECS Task from the ECR Repository Image
&lt;/h3&gt;

&lt;p&gt;Now comes the interesting part. Since we have both:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an RDS Postgres instance with public access&lt;/li&gt;
&lt;li&gt;an image on the ECR registry&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;we can create an ECS instance based on the ECR image and connect it to the RDS instance by supplying the RDS instance's URI as the &lt;code&gt;PG_HOST&lt;/code&gt; variable to our application.&lt;/p&gt;

&lt;p&gt;In the AWS Console, look for ECS: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695699922%2F4rJx8zdG3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695699922%2F4rJx8zdG3.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s use the Console to configure a custom container:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695862010%2FvPWRQLJu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695862010%2FvPWRQLJu6.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose a name for your container, and use the ECR URI as your Docker image: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695952877%2FNLuIuXuJx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695952877%2FNLuIuXuJx.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the port to 80:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695990264%2FNb2Fp0Ijt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631695990264%2FNb2Fp0Ijt.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now a very important step - set the environment variable as follows: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key: &lt;code&gt;PG_HOST&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Value: Your RDS URI so the ECS app can connect to the RDS instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696107648%2Fg589fmG9U.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696107648%2Fg589fmG9U.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click on &lt;strong&gt;Update&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696254364%2FXb0fCUPSM2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696254364%2FXb0fCUPSM2.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On &lt;strong&gt;Task definition&lt;/strong&gt;, you can just click &lt;strong&gt;Next&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696303767%2FBjCV1x3eh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696303767%2FBjCV1x3eh.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On &lt;strong&gt;Define your service&lt;/strong&gt;, also click &lt;strong&gt;Next&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696529801%2FL3JWTTVZV.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696529801%2FL3JWTTVZV.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the cluster, choose a name and then click &lt;strong&gt;Next&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696469132%2FWf8X7pad8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696469132%2FWf8X7pad8.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you just have to wait a couple of minutes to let AWS create your resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696605618%2FVOql30ScZ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696605618%2FVOql30ScZ.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once it's done, click on the task:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696768532%2FEwNXXA7mm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696768532%2FEwNXXA7mm.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and copy the Public IP so we can use it with our favorite API tester:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696872954%2FtUY6wIkx7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631696872954%2FtUY6wIkx7.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Test
&lt;/h3&gt;

&lt;p&gt;To test our application, we will use Postman. First of all, let's check if the app is up and running. Make a GET request at the endpoint &lt;code&gt;AWS_APP_IP:80/ping&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697282351%2FZKwbHY_hO.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697282351%2FZKwbHY_hO.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's make a couple of inserts into the database. Make a PUT request with the following body (title and content) at the endpoint &lt;code&gt;AWS_APP_IP:80/postgresql-item&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697379486%2F5OJ4oKQgD.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697379486%2F5OJ4oKQgD.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's make another one:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697402750%2F6aRxTfjuG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697402750%2F6aRxTfjuG.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, to get all the items, make a GET request at the endpoint &lt;code&gt;AWS_APP_IP:80/postgresql-item&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697471684%2FWQxL7D9B3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697471684%2FWQxL7D9B3.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;
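&lt;p&gt;If you'd rather test from the command line than from Postman, the same calls can be sketched with &lt;code&gt;curl&lt;/code&gt; - the IP below is a placeholder; use your task's public IP:&lt;/p&gt;

```shell
AWS_APP_IP="203.0.113.10"  # placeholder: your ECS task's public IP
BASE_URL="http://$AWS_APP_IP:80"
# Health check:
# curl "$BASE_URL/ping"
# Insert an item (title and content in the JSON body):
# curl -X PUT "$BASE_URL/postgresql-item" -H "Content-Type: application/json" -d '{"title":"first","content":"hello"}'
# List all items:
# curl "$BASE_URL/postgresql-item"
echo "$BASE_URL/postgresql-item"
```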

&lt;p&gt;To get a single item, make the same request, appending the item's id to the end of the URL&lt;br&gt;
(note that we are not handling errors properly here - this is for demo purposes):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697544449%2F09TSnN4gL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697544449%2F09TSnN4gL.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To update an existing item, you can make a POST request to the endpoint &lt;code&gt;AWS_APP_IP:80/postgresql-item/1&lt;/code&gt;, specifying an id and passing a message body: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697642528%2FM1FDkuHTA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697642528%2FM1FDkuHTA.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's check that the values were updated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697847772%2F1sd1TECz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697847772%2F1sd1TECz8.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also delete an existing item, making a DELETE request at the endpoint &lt;code&gt;AWS_APP_IP:80/postgresql-item/ID&lt;/code&gt; (e.g. 2):&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697934173%2FUvSN1CfV0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1631697934173%2FUvSN1CfV0.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And with that we’ve successfully validated connecting an ECS task to an Amazon RDS database!&lt;/p&gt;


</description>
    </item>
    <item>
      <title>ECS: Serverless or Not? Fargate vs. EC2 Clusters</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Fri, 17 Sep 2021 04:42:44 +0000</pubDate>
      <link>https://dev.to/tinystacks/ecs-serverless-or-not-fargate-vs-ec2-clusters-11ch</link>
      <guid>https://dev.to/tinystacks/ecs-serverless-or-not-fargate-vs-ec2-clusters-11ch</guid>
<description>&lt;p&gt;When it comes to deploying Docker containers on AWS, developers have two choices: Elastic Container Service (ECS) EC2 clusters and Fargate. But which one is right for &lt;strong&gt;your&lt;/strong&gt; application? In this article, I look at the pros and cons of each - and discuss why we recently made a massive change in our own strategy at TinyStacks.&lt;/p&gt;

&lt;p&gt;Article By Jay Allen&lt;/p&gt;

&lt;h2&gt;
  
  
  ECS EC2 Clusters vs. Fargate
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=eN_O4zd4D9o&amp;amp;list=PLPoSdR46FgI5wOJuzcPQCNqS37t39zKkg&amp;amp;index=2"&gt;Docker containers&lt;/a&gt; have become so popular because they're a great way to package an application with all of the files, libraries, and configuration it needs to operate properly. On AWS, ECS provides an easy way to deploy, run, and manage Docker containers at any scale. &lt;/p&gt;

&lt;p&gt;If you're unfamiliar with ECS, &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html"&gt;you'll want to check out the AWS documentation&lt;/a&gt; for an overview of key concepts. In brief, ECS runs Docker containers by defining &lt;strong&gt;services&lt;/strong&gt; composed of one or more &lt;strong&gt;tasks&lt;/strong&gt;, where each task is a running instance of a specific Docker container. &lt;/p&gt;
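&lt;p&gt;To make the service/task relationship concrete, here's a minimal task definition fragment: a service keeps one or more tasks running, and each task launches the container image named in a definition like this. All names and values below are hypothetical:&lt;/p&gt;

```json
{
  "family": "my-api",
  "containerDefinitions": [
    {
      "name": "my-api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-api:latest",
      "cpu": 256,
      "memory": 512,
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```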

&lt;p&gt;Of course, running Docker containers requires having machines to run them &lt;em&gt;on&lt;/em&gt;. In ECS, this is abstracted into the idea of an &lt;strong&gt;ECS cluster&lt;/strong&gt;, a logical grouping of services and tasks. Developers have two choices for how to create and manage ECS clusters. &lt;/p&gt;

&lt;p&gt;The first choice is to create an Amazon EC2 cluster. In this scenario, you use an Amazon EC2 virtual machine image to create one or more VMs hosted in your AWS account. You can then run tasks across the instances of your cluster. &lt;/p&gt;

&lt;p&gt;The second, more recent choice is Fargate. With Fargate, the hardware and virtual machines on which your Docker containers run are managed completely by AWS as a "serverless" service.&lt;/p&gt;

&lt;p&gt;It shouldn't come as a surprise that, as totally different services - one server-based, one serverless - Fargate and EC2 clusters use different pricing models. With EC2 clusters, you pay for the EC2 compute capacity and Elastic Block Store (EBS) capacity that you provision. By contrast, Fargate charges for usage on a per-second basis (with a one-minute minimum), with charges varying based on the amount of virtual CPU (vCPU) and memory your containers use. &lt;/p&gt;

&lt;p&gt;Fargate and EC2 clusters are different means to the same end: running your Docker containers in a scalable manner. But each can have advantages over the other, depending on your specific scenario. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Advantages of Fargate
&lt;/h2&gt;

&lt;p&gt;As with any serverless service, the allure of Fargate comes in ease of management. If you manage your own EC2 clusters, you have to worry about a whole host of operational issues - VM security, operating system patching and maintenance, and uptime. Since Fargate uses capacity managed by AWS, you needn't worry about ensuring EC2 instances remain healthy and secure - AWS does this for you. &lt;/p&gt;

&lt;p&gt;Using Fargate can also lead to operational efficiencies. With EC2 clusters, you run two key operational risks: &lt;strong&gt;underprovisioning&lt;/strong&gt;, or not creating enough instances to meet the demands of your workload; and &lt;strong&gt;overprovisioning&lt;/strong&gt;, or overpaying for &lt;em&gt;too much&lt;/em&gt; capacity that you end up not using. With Fargate, you only pay for container runtime - never for unused VM capacity. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Advantages of EC2 Clusters
&lt;/h2&gt;

&lt;p&gt;However, that doesn't mean that Fargate is always the best choice. There are several compelling reasons why you may opt for using EC2 clusters instead. &lt;/p&gt;

&lt;p&gt;The key advantage of EC2 clusters is price. While Fargate is easy and convenient, that convenience comes at a cost. Fargate has come under fire from the developer community for being expensive compared to EC2 clusters. Indeed, AWS itself has stated that &lt;a href="https://aws.amazon.com/blogs/containers/theoretical-cost-optimization-by-amazon-ecs-launch-type-fargate-vs-ec2/"&gt;the more you can maximize a cluster's vCPU and memory utilization, the more cost-effective EC2 clusters become&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Additionally, EC2 clusters may bring your customers additional peace of mind in terms of security. While AWS works hard to ensure complete isolation of tasks running on Fargate, companies in sensitive industries such as finance and health care may be wary about their workloads running alongside other arbitrary processes. &lt;/p&gt;

&lt;h2&gt;
  
  
  Our Experience at TinyStacks
&lt;/h2&gt;

&lt;p&gt;At TinyStacks, we work hard to provide an end-to-end deployment experience on AWS that frees development teams to focus on their application code - not on DevOps infrastructure. Since all TinyStacks-enabled applications are deployed as Docker containers running on ECS, we're very keen on optimizing our ECS usage for performance, scalability, and cost. &lt;/p&gt;

&lt;p&gt;Initially, we used Fargate clusters exclusively for our DevOps stack deployments. However, after running some numbers, we concluded that shifting to our own EC2 clusters might be more cost-effective. We ran some tests using EC2 clusters with &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cluster-auto-scaling.html"&gt;Amazon ECS cluster auto scaling&lt;/a&gt; enabled, scaling out our clusters when instances exceeded 75% CPU utilization for five minutes, and scaling in when they stayed under that threshold for the same period. We also configured ECS service scaling and ensured it was synchronized with cluster scaling. &lt;/p&gt;
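&lt;p&gt;As a rough sketch of how that setup is wired, ECS cluster auto scaling is configured through a capacity provider with managed scaling enabled, where &lt;code&gt;targetCapacity&lt;/code&gt; sets the utilization the cluster scales toward. The names and the &lt;code&gt;ASG_ARN&lt;/code&gt; placeholder below are hypothetical:&lt;/p&gt;

```shell
# Create a capacity provider tied to an existing Auto Scaling group
# (my-ec2-capacity-provider and ASG_ARN are placeholders)
aws ecs create-capacity-provider \
  --name my-ec2-capacity-provider \
  --auto-scaling-group-provider "autoScalingGroupArn=ASG_ARN,managedScaling={status=ENABLED,targetCapacity=75}"

# Attach it to the cluster as the default capacity provider strategy
aws ecs put-cluster-capacity-providers \
  --cluster my-cluster \
  --capacity-providers my-ec2-capacity-provider \
  --default-capacity-provider-strategy capacityProvider=my-ec2-capacity-provider,weight=1
```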

&lt;p&gt;What we found was pretty astounding: by maximizing cluster utilization, &lt;strong&gt;we were able to reduce our ECS spend with EC2 clusters by 40% when compared with Fargate&lt;/strong&gt;. The smallest cost savings came with larger instances. An EC2 m5.xlarge with 4 vCPU and 16GiB of RAM came out to $138.24/month, compared to a similarly sized Fargate task at around $167.7888/month - an 18% cost difference. But the smallest instance size we used - a t3.nano with 2 vCPU and 0.5GiB RAM - was a mere $3.744/month. Compare that to Fargate's smallest task size - 0.5 vCPU and 1GiB of memory - which cost us a full $17.7732/month. That's a 79% cost savings. &lt;/p&gt;
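&lt;p&gt;These figures are straightforward to reproduce from per-hour rates. The arithmetic below assumes a 720-hour month, the us-east-1 Fargate rates at the time (roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour), and the m5.xlarge on-demand rate of $0.192/hour:&lt;/p&gt;

```shell
# Fargate task: 0.5 vCPU + 1 GiB, 720-hour month
awk 'BEGIN { printf "%.4f\n", (0.5 * 0.04048 + 1 * 0.004445) * 720 }'  # 17.7732

# Fargate task: 4 vCPU + 16 GiB, 720-hour month
awk 'BEGIN { printf "%.4f\n", (4 * 0.04048 + 16 * 0.004445) * 720 }'   # 167.7888

# EC2 m5.xlarge: $0.192/hour, 720-hour month
awk 'BEGIN { printf "%.2f\n", 0.192 * 720 }'                           # 138.24
```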

&lt;p&gt;Based on these results, we moved all of our ECS workloads from Fargate onto our own EC2 clusters. All of our customers now receive the benefits of EC2 cluster hosting for ECS - not just reduced cost, but also increased security and scalability. We believe these advantages made the decision a no-brainer. &lt;/p&gt;

&lt;p&gt;In short, Fargate definitely has some advantages in terms of ease of use and maintenance. But in terms of cost, EC2 cluster hosting for ECS is by far the clear winner. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrating DigitalOcean Spaces to an Amazon S3 Bucket</title>
      <dc:creator>Francesco Ciulla</dc:creator>
      <pubDate>Thu, 09 Sep 2021 15:28:28 +0000</pubDate>
      <link>https://dev.to/tinystacks/migrating-digitalocean-spaces-to-an-amazon-s3-bucket-3e27</link>
      <guid>https://dev.to/tinystacks/migrating-digitalocean-spaces-to-an-amazon-s3-bucket-3e27</guid>
      <description>&lt;p&gt;DigitalOcean Spaces provides Amazon S3-compatible object storage with a simplified pricing model. However, you may at some point find that you need to move your storage off of Spaces and onto Amazon S3. In this post, I'll show how to use the tool  &lt;a href="https://rclone.org/" rel="noopener noreferrer"&gt;Rclone&lt;/a&gt; to move your data from Spaces to S3 quickly and easily. &lt;/p&gt;

&lt;p&gt;Original Article By Jay Allen&lt;/p&gt;

&lt;h1&gt;
  
  
  Spaces vs. Amazon S3
&lt;/h1&gt;

&lt;p&gt;Built on the object storage system Ceph, Spaces provides a competitive storage alternative to S3. The base Spaces plan charges a flat $5/month for up to 250GiB of storage and up to 1TiB of data transfer out. That can represent a nice cost saving over Amazon S3, where only the first GiB of data transfer to the Internet is free. And since Spaces is fully S3-compatible, SDK code that works with S3 will work with a Spaces account. Spaces even offers a Content Delivery Network (CDN) at no additional cost. &lt;/p&gt;

&lt;p&gt;However, there may be times when you need to bring your data in Spaces over to Amazon S3: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have data security requirements that are well met by features such as  &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html" rel="noopener noreferrer"&gt;AWS PrivateLink for S3&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;You need your data in more regions than DigitalOcean supports (five regional endpoints as opposed to AWS's 24), or need to store data in a region supported by AWS to comply with data protection laws&lt;/li&gt;
&lt;li&gt;You find that transfer from S3 to other AWS features is faster than transfer from Spaces for your scenario&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whatever the reason, it would be great if you could migrate your data over &lt;em&gt;en masse&lt;/em&gt; without having to roll your own script. &lt;/p&gt;

&lt;h1&gt;
  
  
  Moving from Spaces to S3 with Rclone
&lt;/h1&gt;

&lt;p&gt;Fortunately, the  &lt;a href="https://rclone.org/" rel="noopener noreferrer"&gt;Rclone&lt;/a&gt;  tool makes this easy. Rclone is a self-described Swiss army knife for storage that supports  &lt;a href="https://rclone.org/#providers" rel="noopener noreferrer"&gt;over 40 different cloud storage products and storage protocols&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Let's walk through this to show just how easy it is. For this walkthrough, I've created a Space on DigitalOcean that contains some random binary files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630381724597%2FNxwxFeZ62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630381724597%2FNxwxFeZ62.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We'll want to transfer this into an Amazon S3 bucket in our AWS account. I've created the following bucket for this purpose: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630379953904%2FuPlFwfb0G.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630379953904%2FuPlFwfb0G.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing AWS CLI and Rclone
&lt;/h2&gt;

&lt;p&gt;This walkthrough also makes use of &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html" rel="noopener noreferrer"&gt;the AWS CLI&lt;/a&gt;. If you don't have it installed, install and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html" rel="noopener noreferrer"&gt;configure it with an access key and secret key&lt;/a&gt; that have access to your AWS account. &lt;/p&gt;

&lt;p&gt;You'll also need to install Rclone. On Linux/Mac/BSD systems, you can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://rclone.org/install.sh | sudo bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, you can install using Homebrew:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install rclone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Windows systems,  &lt;a href="https://rclone.org/downloads/" rel="noopener noreferrer"&gt;download and install the appropriate executable&lt;/a&gt; from the Rclone site. Make sure to add rclone to your system's PATH afterward so that the subsequent commands in this tutorial work. &lt;/p&gt;

&lt;h2&gt;
  
  
  Obtaining Your Spaces Connection Information
&lt;/h2&gt;

&lt;p&gt;To use Rclone to perform the copy, you'll need to create an rclone.conf file that enables Rclone to connect to both your AWS S3 bucket and to your Spaces space. &lt;/p&gt;

&lt;p&gt;If you've set up your AWS CLI, you already have your access key and secret key for AWS. You will need two pieces of information from Spaces: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The URL to the endpoint for your Space; and &lt;/li&gt;
&lt;li&gt;An access key and secret key from DigitalOcean that provide access to your Space. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Obtaining your Spaces endpoint is easy: just navigate to your Space in DigitalOcean, where you'll see the URL for your Space. The endpoint you'll use is the regional endpoint without the name of your space (the part highlighted in the red rectangle below): &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630384029277%2FHuogEA09h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630384029277%2FHuogEA09h.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create an access key and secret key for Spaces,  &lt;a href="https://cloud.digitalocean.com/account/api/tokens" rel="noopener noreferrer"&gt;navigate to the API page on DigitalOcean&lt;/a&gt;. Underneath the section Spaces access keys, click &lt;strong&gt;Generate New Key&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630383067497%2FiBV5PKeim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630383067497%2FiBV5PKeim.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give your key a name and then click the blue checkmark next to the name field. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630383127879%2FxNoPd7n5Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630383127879%2FxNoPd7n5Q.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spaces will generate an access key and a secret key for you, both listed under the column &lt;strong&gt;Key&lt;/strong&gt;. (The actual values in the screenshot below have been blurred for security reasons.) Leave this screen as is - you'll be using these values in just a minute. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630383240015%2FTgdDXwN-C.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630383240015%2FTgdDXwN-C.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure rclone.conf File and Perform Copy
&lt;/h2&gt;

&lt;p&gt;Now you need to tell Rclone how to connect to each of the services. To do this, create an rclone.conf file in &lt;code&gt;~/.config/rclone/rclone.conf&lt;/code&gt; (Linux/Mac/BSD) or in &lt;code&gt;C:\Users\&amp;lt;username&amp;gt;\AppData\Roaming\rclone\rclone.conf&lt;/code&gt; (Windows). The file should use the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[s3]
type = s3
env_auth = false
access_key_id = AWS_ACCESS_KEY
secret_access_key = AWS_SECRET
region = us-west-2
acl = private

[spaces]
type = s3
env_auth = false
access_key_id = SPACES_ACCESS_KEY
secret_access_key = SPACES_SECRET
endpoint = sfo3.digitaloceanspaces.com
acl = private
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;AWS_ACCESS_KEY&lt;/code&gt; and &lt;code&gt;AWS_SECRET&lt;/code&gt; with your AWS credentials, and &lt;code&gt;SPACES_ACCESS_KEY&lt;/code&gt; and &lt;code&gt;SPACES_SECRET&lt;/code&gt; with your Spaces credentials. Also make sure that: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;s3.region&lt;/code&gt; lists the correct region for the bucket you plan to copy data into; &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spaces.endpoint&lt;/code&gt; is pointing to the correct Spaces region. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To test your connection to Amazon S3, save this file and, at a command prompt, type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rclone lsd s3:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you configured everything correctly, you should see a list of your Amazon S3 buckets. &lt;/p&gt;

&lt;p&gt;Next, test your connection to Spaces with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rclone lsd spaces:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a list of all Spaces you have created in that region. &lt;/p&gt;

&lt;p&gt;If everything checks out, go ahead and copy all of your data from Spaces to Amazon S3 using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rclone sync spaces:jayallentest s3:jayallen-spaces-test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Make sure to replace the Space name and S3 bucket name with the values appropriate to your accounts.)&lt;/p&gt;

&lt;p&gt;The Rclone command line won't give you any direct feedback even if the operation is successful. However, once it returns, you should see all of the data from your Spaces account now located in your Amazon S3 bucket: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630385103210%2F_EyRkr51L.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1630385103210%2F_EyRkr51L.png" alt="image.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it! With just a little setup and configuration, you can now easily transfer data from DigitalOcean Spaces to Amazon S3. &lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
