<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dickson Victor</title>
    <description>The latest articles on DEV Community by Dickson Victor (@techcrux).</description>
    <link>https://dev.to/techcrux</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F865918%2F6b7e15fb-b078-4091-bc21-3ae198ae6a95.jpg</url>
      <title>DEV Community: Dickson Victor</title>
      <link>https://dev.to/techcrux</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/techcrux"/>
    <language>en</language>
    <item>
      <title>Getting Started with Amazon ECS Express Mode</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Wed, 24 Dec 2025 17:24:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/getting-started-with-amazon-ecs-express-mode-37kp</link>
      <guid>https://dev.to/aws-builders/getting-started-with-amazon-ecs-express-mode-37kp</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqddrmqgy977v0g57yvr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqddrmqgy977v0g57yvr.png" alt="Jake" width="612" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Meet Jake, a backend developer who just wanted to deploy his Node.js API to ECS. Three hours in, he was Googling "what is a target group" and "do I need a NAT gateway", while his Docker container sat there, perfectly functional, mocking him from his laptop. He'd configured VPCs, subnets, load balancers, security groups, and IAM roles—basically becoming an accidental cloud architect just to run one container. His manager's Slack message still haunted him: "Should be a quick deploy, right?"&lt;/p&gt;

&lt;p&gt;Then Amazon announced ECS Express Mode. Now Jake enters his container image, picks two IAM roles, and clicks Create. Done. ECS handles the networking, load balancing, and scaling automatically—even gives him an HTTPS endpoint. He deployed in 10 minutes and actually went home on time. The infrastructure is still there when he needs to customize it, but for once, ECS doesn't make him fight for it.&lt;/p&gt;

&lt;p&gt;Amazon ECS Express Mode reduces the complexity of deploying containerized applications by providing sensible defaults and automating the configuration of supporting AWS services. It orchestrates and configures all the necessary infrastructure: a Fargate-based ECS service with a unique, accessible URL; a load balancer with SSL/TLS (it can automatically consolidate up to 25 Express Mode services behind a single ALB); auto-scaling policies that scale based on utilization or traffic; monitoring; and networking components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here’s How Jake Benefits from Using ECS Express Mode:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Simplified Deployment&lt;/strong&gt;&lt;br&gt;
Remember Jake's three-hour odyssey through AWS documentation? That's now a one-liner in the CLI or a few clicks in the console. You provide your container image and a couple of IAM roles, and ECS Express Mode does the heavy lifting. No more tab-hopping between CloudFormation docs, ECS guides, and that one Stack Overflow answer from 2019 that might still be relevant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Automated Infrastructure&lt;/strong&gt;&lt;br&gt;
Express Mode automatically provisions everything Jake was manually configuring at 11 PM: ECS clusters, Application Load Balancers, auto-scaling policies, VPC networking, security groups, and CloudWatch Logs. It's like having a senior DevOps engineer who actually reads AWS documentation and doesn't make typos in YAML files. All the pieces that usually require you to understand the intricate relationships between subnets, route tables, and internet gateways? Handled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Focus on Code&lt;/strong&gt;&lt;br&gt;
Here's the revolutionary part: you get to be a developer again. Not a part-time network architect, not an amateur security group debugger, not someone who needs to explain to their non-technical manager why deploying a container takes longer than writing the actual application. Express Mode lets you do what you actually signed up for: writing code and shipping features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Production-Ready Defaults&lt;/strong&gt;&lt;br&gt;
Express Mode doesn't just throw something together and wish you luck. You get secure, scalable configurations with health checks, monitoring, and even a working HTTPS URL right out of the box. It's production-ready from day one, not "well, it works on my machine and I think it'll be fine in prod" ready. Jake can actually deploy on a Friday without his manager giving him the side-eye.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Full ECS Power Available&lt;/strong&gt;&lt;br&gt;
Here's what makes this different from other "easy mode" solutions that lock you into a walled garden: all the standard ECS capabilities are still there. When Jake inevitably needs to add a custom health check endpoint, tweak the auto-scaling thresholds for that Black Friday traffic spike, or integrate with an existing VPC, he can. Express Mode gives you the training wheels, but it doesn't weld them on permanently. It's the fast path to getting started, not a dead end.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xa4kmssjcty1h7g2scb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xa4kmssjcty1h7g2scb.png" alt="Demo" width="612" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Demo Time!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now let’s get hands-on by deploying a simple nginx container. Follow these steps to deploy your containerized application in minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before you start, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A container image stored in Amazon ECR (or a private registry with Secrets Manager configured)&lt;/li&gt;
&lt;li&gt;An AWS account with appropriate permissions&lt;/li&gt;
&lt;li&gt;Your application's container port number (e.g., 80, 8080, 3000)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Step-by-Step Deployment&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Navigate to ECS Express Mode&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the AWS Console and navigate to the Amazon ECS console &lt;/li&gt;
&lt;li&gt;In the left navigation pane, click on &lt;strong&gt;Express mode&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Create&lt;/strong&gt; button to start your deployment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupmu0fdx65ufezpifb5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupmu0fdx65ufezpifb5t.png" alt="Express Mode" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configure Your Container Image&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. Image URI&lt;/strong&gt;: Enter your container image URI from Amazon ECR&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Browse ECR images&lt;/strong&gt; to easily find your image&lt;/li&gt;
&lt;li&gt;Select your repository and image&lt;/li&gt;
&lt;li&gt;Choose how to identify the image (AWS recommends using &lt;strong&gt;Image digest&lt;/strong&gt; for consistency)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Service Name (Optional)&lt;/strong&gt;: Give your service a descriptive name&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you leave this blank, Express Mode generates one from your image name&lt;/li&gt;
&lt;li&gt;This name appears in your Application URL and across AWS resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf740sh88aitqtwz2zt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf740sh88aitqtwz2zt9.png" alt="Express Mode" width="800" height="1058"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Configure Application Settings&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Container port&lt;/strong&gt;: Enter the port your application listens on (default is 80)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Health check path&lt;/strong&gt;: Specify the endpoint for health checks (default is "/")&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;If your app has a dedicated health endpoint like /health or /api/health, use that&lt;/li&gt;
&lt;li&gt;Keep health checks lightweight—avoid expensive database queries or external API calls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Add Environment Variables (Optional)&lt;/strong&gt;&lt;br&gt;
If your application needs environment variables (database URLs, API keys, feature flags), add them here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Add environment variable&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter key-value pairs (e.g., DATABASE_URL, API_KEY)&lt;/li&gt;
&lt;li&gt;For sensitive values, consider using AWS Secrets Manager references instead&lt;/li&gt;
&lt;/ul&gt;
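
&lt;p&gt;Under the hood, these map to the standard environment and secrets fields of an ECS task definition's container definition. A sketch (the variable names and secret ARN are hypothetical):&lt;/p&gt;

```json
{
  "environment": [
    { "name": "NODE_ENV", "value": "production" }
  ],
  "secrets": [
    {
      "name": "DATABASE_URL",
      "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:prod/database-url"
    }
  ]
}
```

&lt;p&gt;Values under secrets are resolved at task launch and never appear in plain text in the task definition.&lt;/p&gt;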

&lt;p&gt;&lt;strong&gt;Step 5: Override Container Command (Optional)&lt;/strong&gt;&lt;br&gt;
The &lt;strong&gt;Command&lt;/strong&gt; field lets you override the Docker CMD instruction from your Dockerfile:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter comma-delimited commands and parameters (e.g., echo,hello world)&lt;/li&gt;
&lt;li&gt;This is useful when you want to run different commands without rebuilding your container image &lt;/li&gt;
&lt;li&gt;Leave blank to use the default CMD from your Dockerfile&lt;/li&gt;
&lt;li&gt;Common use cases: running migration scripts, starting with different flags, or debugging&lt;/li&gt;
&lt;/ol&gt;
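
&lt;p&gt;The comma-delimited value maps to the command array in the ECS task definition, which overrides the image's CMD. For example, entering echo,hello world is equivalent to:&lt;/p&gt;

```json
{
  "command": ["echo", "hello world"]
}
```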

&lt;p&gt;&lt;strong&gt;Step 6: Configure Task Role (Optional but Recommended)&lt;/strong&gt;&lt;br&gt;
The &lt;strong&gt;Task role&lt;/strong&gt; is different from the Task Execution Role you configured earlier.&lt;br&gt;
If your application needs to interact with AWS services (S3, DynamoDB, SQS, etc.):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;Choose the task role&lt;/strong&gt; dropdown&lt;/li&gt;
&lt;li&gt;Select an existing IAM role or click &lt;strong&gt;Create new role&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Grant only the permissions your application actually needs (principle of least privilege)&lt;/li&gt;
&lt;li&gt;Example: If your app uploads files to S3, attach a policy with s3:PutObject permissions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Without a Task role, your application can't make authenticated AWS API calls.&lt;/p&gt;
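
&lt;p&gt;For the S3 upload example above, a least-privilege task role policy could look like this (the bucket name is hypothetical):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-upload-bucket/*"
    }
  ]
}
```

&lt;p&gt;Scoping the Resource to a single bucket prefix keeps the blast radius small if a task is ever compromised.&lt;/p&gt;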

&lt;p&gt;&lt;strong&gt;Step 7: Configure Compute Resources&lt;/strong&gt;&lt;br&gt;
Express Mode uses AWS Fargate by default, so no EC2 instances to manage. But you still need to size your containers appropriately:&lt;br&gt;
&lt;strong&gt;CPU and Memory:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;1. CPU:&lt;/strong&gt; Select the vCPU allocation for your task (0.25, 0.5, 1, 2, 4, 8, or 16 vCPU)&lt;br&gt;
&lt;strong&gt;2. Memory:&lt;/strong&gt; Choose memory based on your CPU selection&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each CPU tier has specific memory options&lt;/li&gt;
&lt;li&gt;For example, 1 vCPU supports 2GB to 8GB of memory&lt;/li&gt;
&lt;li&gt;Start conservative and scale up if needed—you can always adjust later&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 8: Configure Auto Scaling&lt;/strong&gt;&lt;br&gt;
Express Mode automatically sets up intelligent auto-scaling, but you can customize it:&lt;br&gt;
&lt;strong&gt;ECS Service Metric:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the metric that triggers scaling (default: &lt;strong&gt;Average CPU utilization&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;Other options include average memory utilization and request count per target&lt;/li&gt;
&lt;li&gt;CPU is usually the right choice for most web applications&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Express Mode uses target tracking scaling, which means it continuously adjusts your task count to maintain your target metric. When traffic drops, it automatically scales down to save costs.&lt;/p&gt;
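
&lt;p&gt;Outside Express Mode, the same target tracking behaviour is expressed as an Application Auto Scaling policy configuration. A sketch (the target value and cooldowns are illustrative, not Express Mode's actual defaults):&lt;/p&gt;

```json
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 120
}
```

&lt;p&gt;With this configuration, the scaler adds tasks when average CPU stays above 70% and removes them when it drops below.&lt;/p&gt;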

&lt;p&gt;&lt;strong&gt;Step 9: Customize Networking (Optional)&lt;/strong&gt;&lt;br&gt;
By default, Express Mode uses your default VPC and handles all networking automatically. But if you need custom networking:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check &lt;strong&gt;Customize networking configurations&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Select your preferred &lt;strong&gt;VPC&lt;/strong&gt; from the dropdown&lt;/li&gt;
&lt;li&gt;Choose the &lt;strong&gt;Subnets&lt;/strong&gt; where your services will run&lt;/li&gt;
&lt;li&gt;Configure &lt;strong&gt;Security groups&lt;/strong&gt; if you need additional inbound access&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Express Mode creates sensible security group rules by default; only customize if you have specific requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 10: Configure Logging (Optional)&lt;/strong&gt;&lt;br&gt;
Express Mode automatically sets up CloudWatch Logs, but you can customize:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch log group name&lt;/strong&gt;: Default is based on your cluster and service names&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log stream prefix&lt;/strong&gt;: Default is ecs&lt;/li&gt;
&lt;li&gt;Adjust these if you have specific logging conventions in your organization&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 11: Add Tags (Optional)&lt;/strong&gt;&lt;br&gt;
Tag your resources for cost tracking and organization:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add key-value pairs like Environment: Production or Team: Backend&lt;/li&gt;
&lt;li&gt;These tags apply to all resources Express Mode creates&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 12: Deploy!&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Review your configuration&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Sit back and watch Express Mode work its magic &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What Happens Next?&lt;/strong&gt;&lt;br&gt;
Once you click Create, Express Mode automatically provisions the following (see the screenshot below):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F843n9x514gyk4hxzbats.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F843n9x514gyk4hxzbats.png" alt="Creating" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;✅ ECS cluster (or uses the default one)&lt;/p&gt;

&lt;p&gt;✅ ECS service with Fargate tasks&lt;/p&gt;

&lt;p&gt;✅ Application Load Balancer with health checks&lt;/p&gt;

&lt;p&gt;✅ Auto-scaling policies based on CPU utilization&lt;/p&gt;

&lt;p&gt;✅ Security groups and networking configuration&lt;/p&gt;

&lt;p&gt;✅ A custom HTTPS URL (automatically generated and ready to use!)&lt;/p&gt;

&lt;p&gt;✅ CloudWatch Logs for application monitoring&lt;/p&gt;

&lt;p&gt;Once complete, you'll see your &lt;strong&gt;Application URL&lt;/strong&gt; — click it to access your running application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgechks6zmbhrw6z8s4no.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgechks6zmbhrw6z8s4no.png" alt="Website" width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To delete the service with all the resources created, simply click the “&lt;strong&gt;Delete Service&lt;/strong&gt;” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivueocilu2nmhmegud38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivueocilu2nmhmegud38.png" alt="Deleting" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr1lnfq09sdmvjpirb19.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr1lnfq09sdmvjpirb19.png" alt="Relieved!" width="612" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Infrastructure Nightmare to Deployment Dream&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One thing makes the new Amazon ECS Express Mode great: it meets developers where they are. A developer can now say "I have a container, I want it running, and I'd like to go home before midnight." The beauty isn't just the 10-minute deployment; it's the mental overhead it eliminates. You no longer need to be a VPC expert just to ship an API. And here's the genius: when you need fine-grained control later, it's all still there. Express Mode isn't a walled garden; it's the on-ramp to production-ready infrastructure without the pain. Jake deployed in 10 minutes, went home on time, and when his manager asked how it went, he just shrugged and said, "Easy." For the first time in his AWS journey, he wasn't lying.&lt;/p&gt;

&lt;p&gt;All images courtesy of &lt;a href="https://unsplash.com/" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ecs</category>
      <category>docker</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Key Notes from Migrating 7 Microservices to Amazon ECS</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Wed, 22 Oct 2025 00:08:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/key-notes-from-migrating-7-microservices-to-amazon-ecs-5anm</link>
      <guid>https://dev.to/aws-builders/key-notes-from-migrating-7-microservices-to-amazon-ecs-5anm</guid>
      <description>&lt;p&gt;I recently lead a project where I migrated a customer’s application backend to AWS. Its an e-commerce/marketplace platform with its core backend built with a microservices architecture. &lt;br&gt;
Here are key notes to consider when migrating a similar workload.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Infrastructure as Code is Essential&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Terraform with modular architecture for consistent deployments&lt;/li&gt;
&lt;li&gt;Separate modules for ECS services, task definitions, and target groups&lt;/li&gt;
&lt;li&gt;Environment-specific configurations (e.g., prod.tfvars) for scalability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Service Discovery &amp;amp; Communication&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implemented Amazon ECS Service Connect for inter-service communication&lt;/li&gt;
&lt;li&gt;Both HTTP and gRPC endpoints configured for each service&lt;/li&gt;
&lt;li&gt;Service mesh approach eliminates hardcoded service endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Load Balancing Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single ALB with path-based routing (e.g., /admin/*, /order/*, etc.)&lt;/li&gt;
&lt;li&gt;Priority-based listener rules for 7 services&lt;/li&gt;
&lt;li&gt;Cost-effective compared to individual load balancers per service&lt;/li&gt;
&lt;/ul&gt;
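
&lt;p&gt;As a sketch of what one of those listener rules looks like in the project's Terraform (resource names here are illustrative, not the actual module code):&lt;/p&gt;

```hcl
# One path-based routing rule on the shared ALB; lower priority numbers win.
resource "aws_lb_listener_rule" "order" {
  listener_arn = aws_lb_listener.https.arn
  priority     = 20

  condition {
    path_pattern {
      values = ["/order/*"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.order.arn
  }
}
```

&lt;p&gt;One such rule per service keeps all seven services behind a single ALB instead of paying for seven load balancers.&lt;/p&gt;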

&lt;p&gt;&lt;strong&gt;4. Security &amp;amp; Configuration Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Systems Manager Parameter Store for environment variables&lt;/li&gt;
&lt;li&gt;IAM roles with least-privilege access per service&lt;/li&gt;
&lt;li&gt;Secrets management separated from application code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Observability &amp;amp; Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized logging with CloudWatch Log Groups&lt;/li&gt;
&lt;li&gt;Service-specific log streams for easier debugging&lt;/li&gt;
&lt;li&gt;Health check endpoints for each service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;6. Deployment &amp;amp; Scaling Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fargate for serverless container management&lt;/li&gt;
&lt;li&gt;Circuit breaker pattern for deployment safety&lt;/li&gt;
&lt;li&gt;Auto-scaling capabilities &lt;/li&gt;
&lt;/ul&gt;
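
&lt;p&gt;The deployment circuit breaker is a one-block addition on the ECS service resource in Terraform. A sketch with illustrative names (subnet and security group references are assumptions, not the project's actual variables):&lt;/p&gt;

```hcl
resource "aws_ecs_service" "order" {
  name            = "order"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.order.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  # Roll back automatically if a deployment's tasks fail to stabilize
  deployment_circuit_breaker {
    enable   = true
    rollback = true
  }

  network_configuration {
    subnets         = var.private_subnet_ids
    security_groups = [aws_security_group.order.id]
  }
}
```

&lt;p&gt;With rollback enabled, a bad image never takes down the service; ECS reverts to the last healthy deployment on its own.&lt;/p&gt;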

&lt;p&gt;&lt;strong&gt;7. Network Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-AZ deployment across 3 availability zones&lt;/li&gt;
&lt;li&gt;Public subnets for ALB, private subnets for services&lt;/li&gt;
&lt;li&gt;Security groups for network-level isolation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;8. Service Organization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clear service boundaries: admin, notification, order, ship, store, user, wallet&lt;/li&gt;
&lt;li&gt;Consistent naming conventions and tagging strategy&lt;/li&gt;
&lt;li&gt;Modular Terraform structure for maintainability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;9. Container Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECR for private container registry&lt;/li&gt;
&lt;li&gt;Standardized port configurations (HTTP + gRPC)&lt;/li&gt;
&lt;li&gt;Resource allocation (CPU/memory) per service requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;10. Operational Excellence&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions for CI/CD automation&lt;/li&gt;
&lt;li&gt;Terraform state management for team collaboration&lt;/li&gt;
&lt;li&gt;Parameter scripts for environment setup automation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These lessons demonstrate a well-architected microservices migration focusing on scalability, security, and operational efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate vs EC2-Based ECS: Key Migration Lessons&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Fargate Was Chosen&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Operational Simplicity&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No EC2 instance management (patching, scaling, monitoring)&lt;/li&gt;
&lt;li&gt;AWS handles underlying infrastructure completely&lt;/li&gt;
&lt;li&gt;Eliminated capacity provider complexity &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Cost Optimization for Microservices&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pay-per-task model better for 7 small services&lt;/li&gt;
&lt;li&gt;No idle EC2 capacity costs&lt;/li&gt;
&lt;li&gt;Right-sizing at container level (256 CPU, 512 MB per service)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Security Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No SSH access or EC2 security hardening needed&lt;/li&gt;
&lt;li&gt;Automatic security patches by AWS&lt;/li&gt;
&lt;li&gt;Network isolation at task level with awsvpc mode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Simplified Networking&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each task gets its own ENI&lt;/li&gt;
&lt;li&gt;Direct security group assignment to tasks&lt;/li&gt;
&lt;li&gt;No port mapping conflicts between services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Auto-scaling Efficiency&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Task-level scaling vs instance-level&lt;/li&gt;
&lt;li&gt;Faster cold starts for microservices&lt;/li&gt;
&lt;li&gt;No pre-provisioned capacity waste&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Trade-offs Accepted&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Higher Per-vCPU Cost&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fargate costs ~40% more than EC2 equivalent&lt;/li&gt;
&lt;li&gt;Justified by reduced operational overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Limited Customization&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No access to underlying OS&lt;/li&gt;
&lt;li&gt;Can't install custom agents or tools&lt;/li&gt;
&lt;li&gt;Fixed networking and storage options&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Cold Start Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slightly longer task startup times&lt;/li&gt;
&lt;li&gt;Mitigated with health check grace periods (120s in my case)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Fargate Sweet Spot&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For 7 lightweight microservices with variable traffic, Fargate eliminated infrastructure management complexity while providing better resource utilization than maintaining EC2 instances that would often run underutilized.&lt;/p&gt;

</description>
      <category>containers</category>
      <category>docker</category>
      <category>aws</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Deploying A Dockerized Golang App To AWS App Runner</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Sat, 06 Sep 2025 00:24:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-a-dockerized-golang-app-to-aws-app-runner-1hnn</link>
      <guid>https://dev.to/aws-builders/deploying-a-dockerized-golang-app-to-aws-app-runner-1hnn</guid>
      <description>&lt;p&gt;AWS App Runner is a fully managed container application service offered by Amazon Web Services (AWS). It is designed to simplify the process of building, deploying, and scaling containerized web applications and API services. &lt;/p&gt;

&lt;p&gt;In this demo, we will deploy a containerized Golang app, backed by a MongoDB database running on EC2, to AWS App Runner.&lt;/p&gt;

&lt;p&gt;To begin, we will clone the &lt;a href="https://github.com/victordickson/tasky" rel="noopener noreferrer"&gt;project repo&lt;/a&gt; on GitHub to our local machine. The project is a task management app that saves data to a MongoDB database.&lt;/p&gt;

&lt;p&gt;The project contains a &lt;a href="https://github.com/victordickson/tasky/blob/main/Dockerfile" rel="noopener noreferrer"&gt;Dockerfile&lt;/a&gt;. We will use GitHub Actions to build and push the Docker image to Amazon ECR. Ensure you have created the ECR repository first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build and Push to ECR

on:
  push:
    branches: ['main']

env:
  AWS_REGION: 'eu-west-1'
  ECR_REPOSITORY: 'tasky'
  ACCOUNT_ID: '&amp;lt;fill in&amp;gt;'
  ROLE_NAME: 'github-actions-role'

permissions:
  id-token: write
  contents: read

jobs:
  build-and-push:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ env.ACCOUNT_ID }}:role/${{ env.ROLE_NAME }}
          role-session-name: github_action_session
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:latest .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html" rel="noopener noreferrer"&gt;AWS strongly recommends OIDC (OpenID Connect) over long-term access keys&lt;/a&gt;, we will configure oidc for the role 'github-actions-role'.&lt;/p&gt;

&lt;p&gt;First, create the OIDC provider:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 6938fd4d98bab03faadb97b34396831e3780aea1 \
  --profile 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, configure an IAM role “github-actions-role“ with an IAM policy that gives the GitHub Actions runner the required permissions to push to ECR, and add a trust policy similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::&amp;lt;Account-ID&amp;gt;:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:&amp;lt;github-username&amp;gt;/tasky:ref:refs/heads/main"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Push your changes to the github repo and the workflow should be triggered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5uij0xnrxzfaqy0gw77.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5uij0xnrxzfaqy0gw77.png" alt="Fig. 1: Github Workflow" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the ECR console to view the docker image in the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkzsquxz9v0w4bix31k7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkzsquxz9v0w4bix31k7.png" alt="Fig. 2: Docker image on ECR" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Deploying The MongoDB Instance on EC2&lt;/h2&gt;

&lt;p&gt;Navigate to the EC2 console and launch a t3.micro Ubuntu instance. In addition to SSH on port 22, ensure your security group includes the following inbound rule (for production, restrict the source to your VPC CIDR instead of 0.0.0.0/0):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Type: Custom TCP
Port: 27017
Source: 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Connect to the instance terminal and run the following commands to install MongoDB.&lt;br&gt;
First, import the MongoDB public GPG key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-7.0.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, add the MongoDB 7.0 APT repository (this assumes Ubuntu 22.04 "jammy"; adjust the codename for other releases):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Reload the local package database and install the MongoDB packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update
sudo apt install -y mongodb-org
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify that MongoDB installed correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongod --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start MongoDB, enable it on boot, and check its status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start mongod
sudo systemctl enable mongod
sudo systemctl status mongod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk698sdfkpvsv8gxa6iq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftk698sdfkpvsv8gxa6iq.png" alt="Fig.3: MongoDB service status" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a MongoDB User and Password
&lt;/h2&gt;

&lt;p&gt;MongoDB doesn't have a default password when installed on Ubuntu. By default, MongoDB runs without authentication enabled.&lt;/p&gt;

&lt;p&gt;Connect to MongoDB (no password needed initially):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongosh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create an admin user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;use admin
db.createUser({
  user: "admin",
  pwd: "yourstrongpassword",
  roles: ["userAdminAnyDatabase", "readWriteAnyDatabase"]
})

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  MongoDB Configuration
&lt;/h2&gt;

&lt;p&gt;By default, the MongoDB server (mongod) only allows loopback connections from IP address 127.0.0.1 (localhost). To allow connections from elsewhere in your Amazon VPC, do the following:&lt;/p&gt;

&lt;p&gt;Edit the &lt;code&gt;/etc/mongod.conf&lt;/code&gt; file and look for the following lines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# network interfaces
net:
  port: 27017
  bindIp: public-dns-name  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt;: Replace &lt;code&gt;public-dns-name&lt;/code&gt; with the actual public DNS name for your instance, for example &lt;code&gt;ec2-11-22-33-44.us-west-2.compute.amazonaws.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To enable authentication, add the following to the same file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;security:
  authorization: enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then restart MongoDB for the changes to take effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl restart mongod
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
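&lt;p&gt;Before moving on, it can help to confirm that the instance now accepts remote connections on port 27017. A minimal TCP reachability check (the hostname in the usage comment is a placeholder):&lt;/p&gt;

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your instance's public DNS name):
# port_open("ec2-11-22-33-44.us-west-2.compute.amazonaws.com", 27017)
```

&lt;p&gt;If this returns False, re-check the security group rule and the &lt;code&gt;bindIp&lt;/code&gt; setting before troubleshooting anything else.&lt;/p&gt;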



&lt;h2&gt;
  
  
  Creating The AppRunner Service
&lt;/h2&gt;

&lt;p&gt;Navigate to the App Runner console and click the “Create service” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ahmtkgii4tfovqlxpnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ahmtkgii4tfovqlxpnj.png" alt="Fig.5: Creating apprunner service" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click “&lt;strong&gt;Next&lt;/strong&gt;” and on the “&lt;strong&gt;Configure service&lt;/strong&gt;” page, fill in the required environment variables as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NOTE:
When specifying the hostname in the MONGODB_URI, ensure you use the Private IP and NOT the Public IP of the host EC2 instance
Also, update the Port as necessary (our golang app is exposed on port 8080)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
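&lt;p&gt;One gotcha with the &lt;code&gt;MONGODB_URI&lt;/code&gt; value: special characters in the username or password must be percent-encoded, or the connection string will be rejected. A small helper sketch (the host and credentials below are hypothetical):&lt;/p&gt;

```python
from urllib.parse import quote_plus

def mongodb_uri(user: str, password: str, host: str, port: int = 27017,
                auth_db: str = "admin") -> str:
    """Build a MongoDB connection URI, percent-encoding the credentials."""
    return (f"mongodb://{quote_plus(user)}:{quote_plus(password)}"
            f"@{host}:{port}/?authSource={auth_db}")

# Hypothetical private IP of the EC2 host -- never the public IP here.
print(mongodb_uri("admin", "p@ss/word", "10.0.1.25"))
# mongodb://admin:p%40ss%2Fword@10.0.1.25:27017/?authSource=admin
```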



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0q0avv4hwawgmck6qu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0q0avv4hwawgmck6qu2.png" alt="Fig.6: Creating apprunner service" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the “Networking” section and under “Outgoing network traffic”, click “Custom VPC”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkl8mwaj9jnrj13gvp05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkl8mwaj9jnrj13gvp05.png" alt="Fig.7: Creating apprunner service" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create a custom VPC connector, follow this &lt;a href="https://dev.to/aws-builders/how-to-connect-aws-apprunner-service-to-rds-and-ec2-5fkk"&gt;guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If the deployment is successful, the service status changes to the “Running” state, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqt3o3w8h30k2sj7q802.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqt3o3w8h30k2sj7q802.png" alt="Fig.8: Apprunner service" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the default domain URL and paste it into your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk0km1wivjndct8odme0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk0km1wivjndct8odme0.png" alt="Fig.9: App UI" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Proceed to sign up, and you will see a daily task management app like so:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7wjj7omexkp8wxehilu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7wjj7omexkp8wxehilu.png" alt="to-do" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go ahead and play around with it. Data is persisted in MongoDB, so you can log out and log back in without losing your tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clean Up
&lt;/h2&gt;

&lt;p&gt;To clean up resources, simply delete the EC2 instance as well as the App Runner service.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/dms/latest/sbs/chap-mongodb2documentdb.02.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/dms/latest/sbs/chap-mongodb2documentdb.02.html&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.geeksforgeeks.org/installation-guide/how-to-install-mongodb-on-aws-ec2-instance/" rel="noopener noreferrer"&gt;https://www.geeksforgeeks.org/installation-guide/how-to-install-mongodb-on-aws-ec2-instance/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/apprunner/" rel="noopener noreferrer"&gt;https://aws.amazon.com/apprunner/&lt;/a&gt; &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>apprunner</category>
      <category>docker</category>
      <category>go</category>
    </item>
    <item>
      <title>How To Connect AWS AppRunner Service To RDS And EC2</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Thu, 28 Aug 2025 15:18:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-connect-aws-apprunner-service-to-rds-and-ec2-5fkk</link>
      <guid>https://dev.to/aws-builders/how-to-connect-aws-apprunner-service-to-rds-and-ec2-5fkk</guid>
      <description>&lt;p&gt;AWS App Runner is a fully managed container application service provided by Amazon Web Services (AWS). It simplifies the process of building, deploying, and running containerized web applications and API services. &lt;/p&gt;

&lt;h2&gt;
  
  
  VPC Connector
&lt;/h2&gt;

&lt;p&gt;A VPC connector in AWS App Runner enables your App Runner service to establish outbound connections to resources located within a private Amazon Virtual Private Cloud (VPC). This allows your App Runner application to securely access private resources such as databases (e.g., Amazon RDS), other services running on Amazon EC2 or ECS, or internal APIs that are not exposed to the public internet.&lt;/p&gt;

&lt;p&gt;Fig.1 : Architecture diagram for Connection to RDS&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9b983bdyje964wv325l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9b983bdyje964wv325l.png" alt="Fig.1 : Architecture diagram for Connection to RDS" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fig.2 : Architecture diagram for Connection to EC2&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxx3n1l4zm1fxbvzw630.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxx3n1l4zm1fxbvzw630.png" alt="Fig.2 : Architecture diagram for Connection to EC2" width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How To Create VPC Connector&lt;/strong&gt;&lt;br&gt;
You can associate your service with a VPC by creating a VPC endpoint from the App Runner console, called VPC Connector. To create a VPC Connector, specify the VPC, one or more subnets, and optionally one or more security groups. After you configure a VPC Connector, you can use it with one or more App Runner services.&lt;/p&gt;

&lt;p&gt;Look for the &lt;strong&gt;Networking&lt;/strong&gt; configuration section on the console page. For &lt;strong&gt;Outgoing network traffic&lt;/strong&gt;, choose one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Public access&lt;/strong&gt;: To associate your service with public endpoints of other AWS services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom VPC&lt;/strong&gt;: To associate your service with a VPC from Amazon VPC. Your application can connect with and send messages to other applications that are hosted in an Amazon VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To enable Custom VPC:&lt;/p&gt;

&lt;p&gt;Open the App Runner console, and in the &lt;strong&gt;Regions&lt;/strong&gt; list, select your AWS Region.&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;Networking&lt;/strong&gt; section under &lt;strong&gt;Configure service&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Fig.3: Navigating to Networking Section&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1j0c731sgpkt2phtgu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1j0c731sgpkt2phtgu6.png" alt="Fig.3: Navigating to Networking Section" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Outgoing network traffic&lt;/strong&gt;, choose &lt;strong&gt;Custom VPC&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In the navigation pane, choose &lt;strong&gt;VPC connector&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you have previously created VPC connectors, the console displays a list of them in your account. You can choose an existing VPC connector, choose &lt;strong&gt;Next&lt;/strong&gt; to review your configuration, and then move to the last step. Alternatively, add a new VPC connector using the following steps.&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Add new&lt;/strong&gt; to create a new VPC connector for your service.&lt;/p&gt;

&lt;p&gt;Then, the &lt;strong&gt;Add new VPC connector&lt;/strong&gt; dialog box opens.&lt;/p&gt;

&lt;p&gt;Fig. 4: Creating VPC Connector&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgxgjwlggi53wt7587k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgxgjwlggi53wt7587k5.png" alt="Fig. 4: Creating VPC Connector" width="800" height="819"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter a name for your VPC connector and select the required VPC from the available list.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Subnets&lt;/strong&gt;, select one subnet for each Availability Zone that the App Runner service will access. For better availability, choose three subnets; if fewer than three are available, choose all of them.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Note:
Make sure you assign private subnets to the VPC connector. If you assign public subnets to VPC connector, your service fails to create or rolls back automatically during an update.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
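&lt;p&gt;A subnet counts as "public" when its route table sends traffic to an internet gateway. If you want to sanity-check your subnets programmatically before assigning them to the connector, the test boils down to something like this (the route dictionaries mimic the shape returned by EC2's DescribeRouteTables API; the IDs are made up):&lt;/p&gt;

```python
def is_public_subnet(routes: list) -> bool:
    """A subnet is public if any route targets an internet gateway (igw-...)."""
    return any(r.get("GatewayId", "").startswith("igw-") for r in routes)

# Hypothetical route tables:
public_routes = [{"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc123"}]
private_routes = [{"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0abc123"}]

print(is_public_subnet(public_routes))   # True
print(is_public_subnet(private_routes))  # False
```

&lt;p&gt;Only subnets for which this returns False should be attached to the VPC connector.&lt;/p&gt;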



&lt;p&gt;(Optional) For &lt;strong&gt;Security group&lt;/strong&gt;, select the security groups to associate with the endpoint network interfaces.&lt;/p&gt;

&lt;p&gt;(Optional) To add a tag, choose &lt;strong&gt;Add new tag&lt;/strong&gt; and enter the tag key and the tag value.&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Add&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The details of the VPC connector you created appear under &lt;strong&gt;VPC connector&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Next&lt;/strong&gt; to review your configuration, and then choose &lt;strong&gt;Create and deploy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;App Runner creates a VPC connector resource for you, and then associates it with your service. If the service is successfully created, the console shows the service dashboard, with a &lt;strong&gt;Service overview&lt;/strong&gt; of the new service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NOTE:
When specifying the IP address to connect AppRunner with an EC2 instance, ensure you use the Private IP and NOT the Public IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Connecting AppRunner Service to an RDS Instance with Public Access&lt;/strong&gt;&lt;br&gt;
When an RDS instance is configured for public access, it gets a public IP address and can be reached directly over the internet. In this case, your App Runner service can connect to the RDS instance directly without needing a VPC connector, since the database is accessible from outside the VPC.&lt;/p&gt;

</description>
      <category>apprunner</category>
      <category>serverless</category>
      <category>security</category>
      <category>containers</category>
    </item>
    <item>
      <title>How Amazon Q Stands Out: A Comparison with Microsoft Copilot and Google Gemini</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Sun, 10 Nov 2024 19:03:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-amazon-q-stands-out-a-comparison-with-microsoft-copilot-and-google-gemini-1bj</link>
      <guid>https://dev.to/aws-builders/how-amazon-q-stands-out-a-comparison-with-microsoft-copilot-and-google-gemini-1bj</guid>
      <description>&lt;p&gt;Amazon Q is a generative AI assistant that helps businesses and developers automate tasks, improve workflows, and respond to natural language requests. Built on AWS’s powerful infrastructure, Amazon Q is designed for high performance, scalability, and easy integration. &lt;/p&gt;

&lt;p&gt;It can understand language, generate natural responses, and assist with tasks like coding, data analysis, and workflow management.&lt;/p&gt;

&lt;p&gt;It works not only with AWS tools and services but also with various third-party tools, making it easy for businesses to incorporate it into their existing systems.&lt;/p&gt;

&lt;p&gt;In this post, we will compare Amazon Q with its major competitors, Microsoft Copilot and Google's Gemini. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Q Business&lt;/strong&gt;&lt;br&gt;
Amazon Q Business is a generative AI assistant that helps your team work smarter. It can answer questions, provide summaries, generate content, and securely complete tasks based on the information in your enterprise systems. Users can request information about company policies, product information, business results, and more across data repositories. Anyone can automate generative AI tasks—from content creation to email writing and raising tickets. Amazon Q Business empowers your team to be more data-driven, creative, and productive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfffscmj495u693gq2uv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfffscmj495u693gq2uv.png" alt="Key Differentiators Battlecard" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfcc33fnwl5br9wkrgm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfcc33fnwl5br9wkrgm0.png" alt="Potential Questions &amp;amp; Answers" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Q Developer Differentiated Value Propositions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Q Developer is the AI assistant for your software development and AWS management tasks. It’s an expert on building AWS applications—from coding, testing and upgrading, to troubleshooting and optimizing your AWS resources. &lt;br&gt;
Amazon Q Developer is available wherever you need it: in your IDE, in the AWS Console, in Slack, and more. Going beyond inline code recommendations, Q can help you understand, debug, and optimize your existing code through a conversational interface. And, it can save time by generating and implementing new features or by upgrading applications to newer versions in minutes. &lt;br&gt;
In the AWS console, Amazon Q Developer can help you learn about AWS services and architectural best practices, troubleshoot service errors and networking issues, select instances, and optimize your SQL queries and ETL pipelines. You won’t find this broad set of abilities in any other companion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1htjusvji257xj2t5sk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1htjusvji257xj2t5sk.png" alt="Amazon Q Developer Technical DIfferentiators" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More Value Across SDLC&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Entire stack of generative AI functionality to the IDE&lt;/li&gt;
&lt;li&gt;Competitive pricing at $19 per user per month&lt;/li&gt;
&lt;li&gt;Best Assistant for Building on AWS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Integration with popular IDEs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Native integration to AWS console&lt;/li&gt;
&lt;li&gt;Faster Innovation with Advanced Features&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Code Transformation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responsible Coding&lt;/li&gt;
&lt;li&gt;.NET Code Migrations from Windows to Linux (upcoming)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrp6ck59xxew8sc8w3so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbrp6ck59xxew8sc8w3so.png" alt="Potential Questions &amp;amp; Answers" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In summary, Amazon Q stands out as a powerful AI assistant designed to streamline software development and AWS management tasks. With its broad capabilities—from coding and debugging to SQL optimization and architectural guidance—Amazon Q provides a robust solution that integrates seamlessly with both AWS and popular third-party tools. Its competitive pricing and advanced features like code transformation and responsible coding make it an attractive option for organizations seeking to enhance productivity across the software development lifecycle. For businesses ready to harness AI-driven efficiencies, Amazon Q is a tool worth exploring.&lt;/p&gt;

</description>
      <category>amazon</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
    <item>
      <title>How to Deploy Jenkins Server on AWS EC2 with Terraform</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Sun, 19 Nov 2023 21:22:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-deploy-jenkins-server-on-aws-ec2-with-terraform-19pm</link>
      <guid>https://dev.to/aws-builders/how-to-deploy-jenkins-server-on-aws-ec2-with-terraform-19pm</guid>
      <description>&lt;p&gt;Hi there! In this demo, I'll show you how to automate the installation of Jenkins and other softwares such as &lt;a href="https://opensource.com/resources/what-ansible" rel="noopener noreferrer"&gt;ansible&lt;/a&gt;, docker,AWS CLI and terraform  on the AWS EC2 instance created using terraform. Jenkins is a popular open source continuous integration/continuous delivery and deployment (CI/CD) automation software DevOps tool. It is used to implement CI/CD workflows, called pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BEFORE YOU BEGIN&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;Install AWS CLI&lt;/a&gt; and configure credentials.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linuxbuzz.com/install-terraform-on-ubuntu/" rel="noopener noreferrer"&gt;Install Terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Clone the &lt;a href="https://github.com/victordickson/jenkins-server-EC2" rel="noopener noreferrer"&gt;repository&lt;/a&gt; and "cd" into the folder.&lt;/li&gt;
&lt;li&gt;Create an EC2 keypair in your desired region and download it (I used us-east-1 and named it demoKP in this demo), and ensure it's in the specified path (check the variables.tf file).
Feel free to peruse the cloned folder: Userdata.sh contains the installation scripts, while main.tf creates the EC2 infrastructure.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;CREATING THE INFRASTRUCTURE&lt;/strong&gt;&lt;br&gt;
Run the following commands to begin the provisioning.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply --auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0a2t87knr26wflancgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0a2t87knr26wflancgp.png" alt="Output" width="800" height="151"&gt;&lt;/a&gt;&lt;br&gt;
Copy the URL from the output into a browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0s644oit8sf9lt4n523.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0s644oit8sf9lt4n523.png" alt="Browser" width="800" height="422"&gt;&lt;/a&gt;&lt;br&gt;
As suggested in the screenshot, SSH into the instance, run the command &lt;strong&gt;sudo cat /var/lib/jenkins/secrets/initialAdminPassword&lt;/strong&gt; in the terminal, and paste the output into the box. (Don't forget to set the right permissions on the keypair file, e.g. chmod 400.)&lt;br&gt;
This takes you to a getting started page; click "&lt;strong&gt;Install suggested plugins&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgi9n6cpdlxxyxix2vvh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbgi9n6cpdlxxyxix2vvh.png" alt="Configure user" width="800" height="422"&gt;&lt;/a&gt;&lt;br&gt;
Go ahead and configure an admin user (a dummy email address is fine). On the next page, click &lt;strong&gt;Save and Finish&lt;/strong&gt;, then &lt;strong&gt;Start using Jenkins&lt;/strong&gt;.&lt;br&gt;
And then, taadahh! Jenkins is ready!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mt9so1w8q7m2b4btclo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9mt9so1w8q7m2b4btclo.png" alt="Jenkins" width="800" height="422"&gt;&lt;/a&gt;&lt;br&gt;
Go ahead and use it for your CI/CD workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DESTROY INFRASTRUCTURE&lt;/strong&gt;&lt;br&gt;
To destroy the infrastructure created with Terraform, simply run the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy --auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's how to deploy a Jenkins server on AWS EC2 with Terraform. Thanks for reading.&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>terraform</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>How to Manually Install Jenkins on AWS EC2 Linux</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Fri, 07 Apr 2023 14:37:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-manually-install-jenkins-on-linux-5a3l</link>
      <guid>https://dev.to/aws-builders/how-to-manually-install-jenkins-on-linux-5a3l</guid>
      <description>&lt;p&gt;Have you tried installing jenkins on ubuntu server severally without success? Do you get the following error message when you try to install jenkins?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Reading package lists... Done
Building dependency tree       
Reading state information... Done
Package jenkins is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'jenkins' has no installation candidate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then it's time to install it manually. Follow the steps below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Java: Jenkins requires Java to run. Recent Jenkins releases require Java 11 or newer, so install OpenJDK 11 using the following commands:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update
sudo apt-get install openjdk-11-jdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Download the Jenkins WAR file: Go to the official Jenkins website and download the latest Jenkins WAR file. You can use the following command to download the latest WAR file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://updates.jenkins-ci.org/latest/jenkins.war
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Start Jenkins: Start Jenkins using the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -jar jenkins.war
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
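&lt;p&gt;Note that java -jar jenkins.war runs in the foreground and stops when you close your SSH session. A common workaround is to run it with nohup (a sketch; for a long-lived server you would typically set up a systemd service instead):&lt;/p&gt;

```shell
# Run Jenkins in the background, logging to jenkins.log,
# so it survives the current shell session.
nohup java -jar jenkins.war > jenkins.log 2>&1 &
echo $! > jenkins.pid   # keep the PID so the process can be stopped later
```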



&lt;ol&gt;
&lt;li&gt;Access Jenkins: Open a web browser and navigate to &lt;a href="http://public-ip:8080" rel="noopener noreferrer"&gt;http://public-ip:8080&lt;/a&gt; to access the Jenkins web interface.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9wmqqdz0b09bdgf387s.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9wmqqdz0b09bdgf387s.PNG" alt="jenkins" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Complete the setup: Follow the instructions on the Jenkins setup screen to complete the installation process.
To obtain the initial password, run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /home/clouduser/.jenkins/secrets/initialAdminPassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: By default, Jenkins will run on port 8080. If port 8080 is already in use on your system, you can specify a different port using the --httpPort option when starting Jenkins. For example, to start Jenkins on port 9090, you can use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -jar jenkins.war --httpPort=9090
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope these steps help you to install Jenkins manually.&lt;/p&gt;

</description>
      <category>jenkins</category>
      <category>devops</category>
      <category>cicd</category>
      <category>automation</category>
    </item>
    <item>
      <title>How to Deploy WordPress to Amazon EKS using Helm</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Sun, 15 Jan 2023 21:48:21 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-deploy-wordpress-to-amazon-eks-using-helm-5dli</link>
      <guid>https://dev.to/aws-builders/how-to-deploy-wordpress-to-amazon-eks-using-helm-5dli</guid>
      <description>&lt;p&gt;In this article, you will learn how to deploy a production-ready WordPress site to Amazon Elastic Kubernetes Service (EKS) in just a few easy steps. This is made possible using a package manager known as Helm.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Helm?
&lt;/h2&gt;

&lt;p&gt;Helm is a Kubernetes deployment tool for automating creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters. It is an open source package manager that automates the deployment of software for Kubernetes in a simple, consistent way. Helm deploys packaged applications to Kubernetes and structures them into charts. The charts contain all pre-configured application resources along with all the versions into one easily manageable package.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Helm Chart?
&lt;/h2&gt;

&lt;p&gt;Helm charts are Helm packages consisting of YAML files and templates which convert into Kubernetes manifest files. These charts are reusable by anyone for any environment, which reduces complexity and duplicates.&lt;/p&gt;

&lt;p&gt;For this demo, I made use of the &lt;a href="https://github.com/bitnami/charts/tree/main/bitnami/wordpress/#installing-the-chart" rel="noopener noreferrer"&gt;helm chart for wordpress&lt;/a&gt; developed and maintained by &lt;a href="https://bitnami.com/" rel="noopener noreferrer"&gt;bitnami&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Ensure that you have the following utilities installed and configured on your machine.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html" rel="noopener noreferrer"&gt;EKSCTL&lt;/a&gt;, &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;HELM&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  1. Provision the EKS cluster
&lt;/h2&gt;

&lt;p&gt;Run the following on your terminal after filling in the values for your preferred region and an existing key pair name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF&amp;gt;&amp;gt;demo-eks-cluster.yaml
----
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-eks-cluster
  region: your-region
  version: "1.21"

managedNodeGroups:
  - name: dev-ng-1
    instanceType: t3.large
    minSize: 1
    maxSize: 1
    desiredCapacity: 1
    volumeSize: 20
    volumeEncrypted: true
    volumeType: gp3
    ssh:
      allow: true
      publicKeyName: your-keypair-name
    tags:
      Env: Dev
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy
      withAddonPolicies:
        autoScaler: true
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
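&lt;p&gt;Note that the heredoc above only writes demo-eks-cluster.yaml; the cluster itself is not created until you apply the file with eksctl:&lt;/p&gt;

```shell
eksctl create cluster -f demo-eks-cluster.yaml
```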



&lt;p&gt;To confirm that the nodes have been provisioned, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Deploy Wordpress to EKS
&lt;/h2&gt;

&lt;p&gt;First, add the helm repo to your environment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add my-repo https://charts.bitnami.com/bitnami
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, install the &lt;a href="https://github.com/bitnami/charts/tree/main/bitnami/wordpress/#installing-the-chart" rel="noopener noreferrer"&gt;Helm chart for WordPress&lt;/a&gt;. For this use case, I overrode the default wordpressPassword value in values.yaml using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-release my-repo/wordpress --set wordpressPassword=password
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a successful deployment, check the pods and confirm that the status reads "Running":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Accessing your wordpress site
&lt;/h2&gt;

&lt;p&gt;To access your WordPress site from outside the cluster, follow the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the WordPress URL by running these commands:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export SERVICE_IP=$(kubectl get svc --namespace default wp-demo-wordpress --include "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
echo "WordPress URL: http://$SERVICE_IP/"
echo "WordPress Admin URL: http://$SERVICE_IP/admin"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Open a browser and access WordPress using the obtained URL.&lt;/li&gt;
&lt;li&gt;Log in with the credentials below to see your blog:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;username: user&lt;br&gt;
password: password&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf7d8r86jgd2ldn9e9ty.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzf7d8r86jgd2ldn9e9ty.PNG" alt="Wordpress-Default-Page" width="800" height="484"&gt;&lt;/a&gt;&lt;br&gt;
Appending /admin to the URL gives the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqvb6ndqo5ig5swrlqpr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqvb6ndqo5ig5swrlqpr.PNG" alt="adding /admin" width="800" height="390"&gt;&lt;/a&gt;&lt;br&gt;
Log in with the credentials and you should land on your WordPress admin page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t5whpif349rs6jnt3wk.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8t5whpif349rs6jnt3wk.PNG" alt="wordpress" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Cleaning up
&lt;/h2&gt;

&lt;p&gt;To clean up the resources created using the eksctl utility, run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl delete cluster demo-eks-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations, you have successfully deployed a production-grade WordPress website to Amazon EKS. Feel free to connect with me on &lt;a href="https://www.linkedin.com/in/dickson-victor/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; if you have any questions.&lt;/p&gt;

</description>
      <category>wordpress</category>
      <category>eks</category>
      <category>helm</category>
      <category>devops</category>
    </item>
    <item>
      <title>Provisioning a Persistent EBS-backed Storage on Amazon EKS using Helm</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Mon, 12 Dec 2022 00:16:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/provisioning-a-persistent-ebs-backed-storage-on-amazon-eks-using-helm-4gh4</link>
      <guid>https://dev.to/aws-builders/provisioning-a-persistent-ebs-backed-storage-on-amazon-eks-using-helm-4gh4</guid>
      <description>&lt;p&gt;Deploying stateful applications on Kubernetes can be complex. In this demo, we will deploy a PostgreSQL database to Amazon Elastic Kubernetes Service (EKS) and configure its persistence on Amazon Elastic Block Store (EBS). We will be using Helm, a package manager, to make this process more efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;First, ensure that the following utilities are installed and properly configured on your machine.&lt;br&gt;
&lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html" rel="noopener noreferrer"&gt;EKSCTL&lt;/a&gt;&lt;br&gt;
&lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;HELM&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Create an EKS cluster&lt;/strong&gt;&lt;br&gt;
You can use either the AWS Management Console or the eksctl utility to create your Kubernetes cluster; for convenience, we use eksctl.&lt;br&gt;
Create a file "demo-cluster.yaml" and paste the following into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# demo-cluster.yaml
# A cluster with two managed nodegroups
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-west-1

managedNodeGroups:
  - name: managed-ng-1
    instanceType: t3.small
    minSize: 1
    maxSize: 2

  - name: managed-ng-2
    instanceType: t3.small
    minSize: 1
    maxSize: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file defines a Kubernetes cluster named demo-cluster with two managed nodegroups. To apply it, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create cluster -f demo-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the cluster has finished provisioning, view the nodes with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Create an IAM OIDC identity provider
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Determine whether you have an existing IAM OIDC provider for your cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Retrieve your cluster's OIDC provider ID and store it in a variable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oidc_id=$(aws eks describe-cluster --name demo-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
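&lt;p&gt;The cut -d '/' -f 5 at the end extracts the provider ID from the issuer URL. A quick local illustration (the ID below is a made-up example):&lt;/p&gt;

```shell
# Splitting the issuer URL on '/' gives: field 1="https:", field 2=""
# (between the two slashes), field 3=the host, field 4="id",
# and field 5=the provider ID.
issuer="https://oidc.eks.us-west-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B716EXAMPLE"
echo "$issuer" | cut -d '/' -f 5   # -> EXAMPLED539D4633E53DE1B716EXAMPLE
```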



&lt;ul&gt;
&lt;li&gt;Determine whether an IAM OIDC provider with your cluster's ID is already in your account.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam list-open-id-connect-providers | grep $oidc_id

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create an IAM OIDC identity provider for your cluster with the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl utils associate-iam-oidc-provider --cluster demo-cluster --approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Configure a Kubernetes service account to assume an IAM role&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an IAM role and associate it with a Kubernetes service account. You can use either eksctl or the AWS CLI; here, we use the AWS CLI.
a. Create a Kubernetes service account. Copy and paste the following into your terminal.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt;my-service-account.yaml &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
EOF
kubectl apply -f my-service-account.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;b. Set your AWS account ID to an environment variable with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;account_id=$(aws sts get-caller-identity --query "Account" --output text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;c. Set the cluster's OIDC identity provider to an environment variable with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;oidc_provider=$(aws eks describe-cluster --name demo-cluster --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;d. Set variables for the namespace and name of the service account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export namespace=kube-system
export service_account=ebs-csi-controller-sa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;e. Run the following command on your terminal to create a trust policy file for the IAM role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;gt;aws-ebs-csi-driver-trust-policy.json &amp;lt;&amp;lt;EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$oidc_provider:aud": "sts.amazonaws.com",
          "$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
        }
      }
    }
  ]
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
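&lt;p&gt;This works because the EOF delimiter is unquoted, so the shell expands $account_id, $oidc_provider, $namespace and $service_account while writing the file. A minimal local illustration of the same mechanism:&lt;/p&gt;

```shell
# Unquoted EOF delimiter: $name is expanded as the file is written.
name=world
cat >greeting.json <<EOF
{"greeting": "hello $name"}
EOF
cat greeting.json   # -> {"greeting": "hello world"}
```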



&lt;p&gt;f. Create the role "AmazonEKS_EBS_CSI_DriverRole", replacing "my-role-description" with a description for your role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-role --role-name AmazonEKS_EBS_CSI_DriverRole --assume-role-policy-document file://aws-ebs-csi-driver-trust-policy.json --description "my-role-description"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;g. Attach the required AWS managed policy to the role with the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-name AmazonEKS_EBS_CSI_DriverRole

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;h. Annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl annotate serviceaccount -n $namespace $service_account eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Adding the Amazon EBS CSI add-on
&lt;/h2&gt;

&lt;p&gt;To improve security and reduce the amount of work, you can manage the Amazon EBS CSI driver as an Amazon EKS add-on. You can use eksctl, the AWS Management Console, or the AWS CLI to add it to your cluster. To add the add-on using eksctl, run the following command; it reuses the $account_id variable set earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eksctl create addon --name aws-ebs-csi-driver --cluster demo-cluster --service-account-role-arn arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Update the worker nodes role
&lt;/h2&gt;

&lt;p&gt;Attach the "AmazonEBSCSIDriverPolicy" policy to the IAM roles of the cluster's two worker nodegroups, and also to the cluster's ServiceRole. &lt;/p&gt;
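&lt;p&gt;With the AWS CLI, this can be done with aws iam attach-role-policy. The role names below are hypothetical; look up the actual NodeInstanceRole names created for your nodegroups (for example, in the IAM console or the eksctl CloudFormation stacks):&lt;/p&gt;

```shell
# Hypothetical node role names -- replace with the NodeInstanceRole
# names eksctl created for managed-ng-1 and managed-ng-2.
for role in eksctl-demo-cluster-nodegroup-managed-ng-1-NodeInstanceRole \
            eksctl-demo-cluster-nodegroup-managed-ng-2-NodeInstanceRole; do
  aws iam attach-role-policy \
    --role-name "$role" \
    --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy
done
```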

&lt;h2&gt;
  
  
  6. Deploying postgres database with Helm
&lt;/h2&gt;

&lt;p&gt;Helm is a Kubernetes deployment tool for automating creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters. Kubernetes is a powerful container-orchestration system for application deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  - Define storage class
&lt;/h2&gt;

&lt;p&gt;You must define a storage class for your cluster to use and you should define a default storage class for your persistent volume claims.&lt;br&gt;
To create an AWS storage class for your Amazon EKS cluster, create an AWS storage class manifest file for your storage class. The following storage-class.yaml example defines a storageclass named "aws-pg-sc" that uses the Amazon EBS gp2 volume type.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use kubectl to create the storage class from the manifest file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f storage-class.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following to view the available storage classes in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get storageclass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  - Helm chart for postgresql
&lt;/h2&gt;

&lt;p&gt;In this demo, we will leverage the &lt;a href="https://github.com/bitnami/charts/tree/main/bitnami/postgresql/#installing-the-chart" rel="noopener noreferrer"&gt;Helm chart for PostgreSQL managed by Bitnami&lt;/a&gt;. We will be overwriting some values in values.yaml so that the chart uses the storage class we provisioned earlier. Create a file "values-postgresdb.yaml" and paste the following into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;primary:
   persistence:
      storageClass: "aws-pg-sc"
auth: 
   username: postgres 
   password: demo-password
   database: demo_database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  - Installing the Chart
&lt;/h2&gt;

&lt;p&gt;To install the chart with the release name pgdb:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb --values values-postgresdb.yaml my-repo/postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the database deploys successfully, check the PV, PVC, and pod with the following commands; each should give output similar to what is shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE

pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            Delete           Bound    default/data-pgdb-postgresql-0   aws-pg-sc               87s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE

data-pgdb-postgresql-0   Bound    pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            aws-pg-sc      6h44m

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE

pgdb-postgresql-0   1/1     Running   0          16m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
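&lt;p&gt;With the pod running, you can sanity-check the database by opening a psql session from inside the cluster (a sketch following the Bitnami chart's conventions: pgdb-postgresql is the service created for the release, and psql will prompt for the demo-password value set earlier):&lt;/p&gt;

```shell
# Launch a throwaway client pod and connect to the PostgreSQL service.
kubectl run pgdb-client --rm -it --restart=Never \
  --image bitnami/postgresql -- \
  psql --host pgdb-postgresql -U postgres -d demo_database -p 5432
```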



&lt;p&gt;You can also verify that the persistent storage was provisioned by navigating to the AWS management console &amp;gt;&amp;gt; EC2 &amp;gt;&amp;gt; Elastic Block Store &amp;gt;&amp;gt; Volumes. The screenshot attached is the volume provisioned in my case.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvop8kz9l84eznd17kkps.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvop8kz9l84eznd17kkps.PNG" alt="provisioned-volume" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Cleaning up
&lt;/h2&gt;

&lt;p&gt;To clean up and delete the Kubernetes cluster we created earlier, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; eksctl delete cluster -f demo-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the above command doesn't fully delete the cluster because of resources left behind by the pod, navigate to the CloudFormation console and manually delete each CloudFormation stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2loojookd2rvm960c39.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw2loojookd2rvm960c39.jpg" alt="Undeleted-stack" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that concludes the demo on provisioning persistent EBS-backed storage on Amazon EKS using Helm. Feel free to comment below with your feedback.&lt;br&gt;
You can also watch the video demonstration on &lt;a href="https://youtu.be/3SSdbvH5EVo" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>promptengineering</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Understanding Command Line Interfaces and Tools</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Mon, 15 Aug 2022 00:25:16 +0000</pubDate>
      <link>https://dev.to/techcrux/understanding-command-line-interfaces-and-tools-7m3</link>
      <guid>https://dev.to/techcrux/understanding-command-line-interfaces-and-tools-7m3</guid>
      <description>&lt;p&gt;Have you ever typed commands as lines of text into a computer terminal? Then you've definitely used a command line interface. What most of us are used to is the Graphical User Interface (GUI), a visual, user-friendly interface through which a user interacts with electronic devices such as computers and smartphones using icons, menus, and other visual indicators (graphics). However, some programming and maintenance tasks may not have a graphical user interface and only use a command line. This article aims to help you understand the concept of a command line interface (CLI for short) and the different types of CLI tools from cloud providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Command Line Interface?
&lt;/h2&gt;

&lt;p&gt;In simple terms, a command-line interface (CLI) is a text-based user interface (UI) used to run programs, manage computer files and interact with the computer. It is a program on your computer that allows you to create and delete files, run programs, and navigate through folders and files. On a Mac and Linux, it’s called Terminal, and on Windows, it’s Command Prompt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bpzv6pvytdwua1gt1i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bpzv6pvytdwua1gt1i0.png" alt="Windows Command Prompt" width="800" height="354"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfdejeebed7uanql5jvh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfdejeebed7uanql5jvh.png" alt="Linux Terminal" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
Programmers, experienced computer users, and administrators may utilise a CLI. In specific situations, typing text into the interface provides faster results than using a GUI, and CLIs can offer greater control over an operating system via succinct commands. Operating system (OS) command line interfaces are normally programs that come with the OS, and some software applications offer only a CLI. Such software presents a prompt, the user responds, and the application reacts to each command in turn: the user enters a command, presses “Enter”, and waits for a response. After receiving the command, the CLI processes it and shows the output on the same screen; a command line interpreter handles this processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Command Line Tools
&lt;/h2&gt;

&lt;p&gt;Developers and those with engineering responsibilities are fond of calling terminal their home. Anyone with a Linux system has to frequently interact with the Terminal in one way or the other. And customization has always been a big part of how much the Terminal can be used to improve productivity, create unique experiences, and manage the system to improve the workflow.&lt;br&gt;
Command line tools are scripts, programs, and libraries created for a specific purpose, typically to solve a problem the tool's creator had themselves. In other words, these tools allow programmers to compile and debug programs, convert files, and perform a number of tasks for handling the resources required to build applications and other tools.&lt;br&gt;
Thousands of command line tools have been developed and are being used in different areas such as web development,&lt;br&gt;
productivity, utilities, visuals, entertainment, and so on.&lt;/p&gt;

&lt;h2&gt;
  
  
  CLI Tools from Cloud Providers
&lt;/h2&gt;

&lt;p&gt;Cloud providers such as AWS, Google Cloud, Microsoft Azure, IBM, DigitalOcean, Oracle, and a host of others usually create CLI tools for interacting with their platforms through the terminal. Examples include the AWS CLI, Google Cloud Shell, Azure Cloud Shell, and the IBM Cloud CLI. These tools interact with the various cloud services via an API (Application Programming Interface), a set of programming code that enables data transmission between one software product and another.&lt;/p&gt;

&lt;p&gt;I trust you now have a better understanding of what a CLI is. Feel free to share your thoughts in the comment box below. &lt;/p&gt;

</description>
      <category>cli</category>
      <category>programming</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Configuring An End-to-End CI/CD Pipeline Using CircleCI, Ansible and Cloudformation</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Mon, 27 Jun 2022 01:30:26 +0000</pubDate>
      <link>https://dev.to/techcrux/configuring-an-end-to-end-cicd-pipeline-using-circleci-ansible-and-cloudformation-102a</link>
      <guid>https://dev.to/techcrux/configuring-an-end-to-end-cicd-pipeline-using-circleci-ansible-and-cloudformation-102a</guid>
<description>&lt;p&gt;I recently completed a project in the &lt;a href="https://www.udacity.com/course/cloud-dev-ops-nanodegree--nd9991" rel="noopener noreferrer"&gt;Udacity Cloud DevOps Nanodegree program&lt;/a&gt;, during which I implemented CI/CD for the UdaPeople product, a revolutionary concept in Human Resources that promises to help small businesses care better for their most valuable resource: their people. Over the course of the project, I demonstrated mastery of the following objectives.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Utilizing a CI/CD platform such as CircleCI together with a version control system, e.g. GitHub, to design and build CI/CD pipelines that support Continuous Delivery processes.&lt;/li&gt;
&lt;li&gt;Utilizing a configuration management tool such as Ansible to accomplish deployment to cloud-based servers.&lt;/li&gt;
&lt;li&gt;Surfacing critical server errors for diagnosis using centralized structured logging and monitoring tools such as Prometheus.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Beauty of CI/CD
&lt;/h2&gt;

&lt;p&gt;In order to understand the beauty of CI/CD, let's define each term.&lt;br&gt;
Continuous Integration is a development practice that requires developers to integrate code into a mainline as frequently as possible, at least once a day. An automated build that compiles the code then verifies each check-in and runs the set of automated tests against it so that teams can quickly detect problems.&lt;br&gt;
Continuous Delivery is the process of getting changes of all types, including configuration changes, bug fixes, experiments and new features, into production or into the hands of users in a sustainable way. When a team implements continuous delivery, the mainline is always in a deployable state and anyone can deploy it to production at any time with the click of a button. When the button is clicked, an automated pipeline is triggered. The key element in achieving continuous delivery is automation.&lt;br&gt;
Continuous Deployment is a step up from Continuous Delivery where every change in the source code is deployed to production automatically without requiring explicit approval from a developer. A developer’s role usually ends at checking a pull request from a teammate and merging it to the master branch. Continuous Integration/Continuous Delivery takes it from there by executing all automated tests and deploying the code to production while keeping the team updated about the outcome of every event.&lt;br&gt;
Continuous Integration, Continuous Delivery and Continuous Deployment are like vectors with the same direction but different magnitude. All three terms aim to make the software development and release process more robust and quicker.&lt;/p&gt;

&lt;h2&gt;
  
  
  CircleCI, Ansible and Cloudformation
&lt;/h2&gt;

&lt;p&gt;The UdaPeople product relied on automation tools such as CircleCI, Ansible and CloudFormation in order to reduce the workload on its developers and guarantee its sustainability.&lt;br&gt;
CircleCI is a continuous integration &amp;amp; delivery platform that helps development teams release code rapidly and automate the build, test, and deploy stages. After repositories on GitHub or Bitbucket are authorized and added as a project on circleci.com, every code commit triggers CircleCI to run the jobs defined in the repository's .circleci/config.yml file. &lt;br&gt;
Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intraservice orchestration, and provisioning. Ansible can automate IT environments whether they are hosted on traditional bare metal servers, virtualization platforms, or in the cloud. It can also automate the configuration of a wide range of systems and devices such as databases, storage devices, networks, firewalls, and many others.&lt;br&gt;
AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your cloud environment. It allows you to use a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. Check out this &lt;a href="https://dev.to/techcrux/deploying-a-high-availability-web-app-using-cloudformation-139g"&gt;post&lt;/a&gt; on deploying a high-availability web app using CloudFormation.&lt;/p&gt;
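&lt;p&gt;To make the CircleCI piece concrete, here is a minimal .circleci/config.yml sketch (the job names and executor image are illustrative assumptions, not taken from the UdaPeople project):&lt;/p&gt;

```yaml
version: 2.1
jobs:
  build:
    docker:
      - image: cimg/node:16.10   # illustrative executor image
    steps:
      - checkout
      - run: npm install
      - run: npm run build
  test:
    docker:
      - image: cimg/node:16.10
    steps:
      - checkout
      - run: npm install
      - run: npm test
workflows:
  build-and-test:
    jobs:
      - build
      - test:
          requires: [build]
```

&lt;p&gt;Every commit pushed to the repository triggers this workflow: the build job runs first, and the test job runs only after it succeeds.&lt;/p&gt;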

&lt;h2&gt;
  
  
  Project steps
&lt;/h2&gt;

&lt;p&gt;1. Set up &lt;a href="https://aws.amazon.com/free/?trk=712ee378-d73b-4293-9bad-8ce09671ea7c&amp;amp;sc_channel=ps&amp;amp;sc_campaign=acquisition&amp;amp;sc_medium=ACQ-P|PS-GO|Brand|Desktop|SU|AWS|Core|EEM|EN|Text&amp;amp;s_kwcid=AL!4422!3!444219541826!e!!g!!aws%20free%20tier&amp;amp;ef_id=CjwKCAjwh-CVBhB8EiwAjFEPGVlrY-B3YDUfkt2oFIwM457x_jaLZLun1M1EItcLvU3wVNLKFMIOLhoCOrIQAvD_BwE:G:s&amp;amp;s_kwcid=AL!4422!3!444219541826!e!!g!!aws%20free%20tier&amp;amp;all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;AWS&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create and download a new key pair in AWS EC2. Name this key pair whatever name you wish ( I named mine "udapeople.pem").&lt;/li&gt;
&lt;li&gt;Add a PostgreSQL database in RDS that has public accessibility.  &lt;a href="https://aws.amazon.com/getting-started/hands-on/create-connect-postgresql-db/" rel="noopener noreferrer"&gt;This tutorial&lt;/a&gt; may help. As long as you marked "Public Accessibility" as "yes", you won't need to worry about VPC settings or security groups. Take note of the connection details, such as:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Endpoint (Hostname): database-1.ch4a9dhlinpw.us-east-1.rds.amazonaws.com
Instance identifier: database-1 //This is not the database name
Database name: postgres (default)
Username: postgres
Password: mypassword
Port: 5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Note that the AWS wizard will create a default database named postgres. If you wish to give the initial database another name, you can do so under additional configuration, as shown in the snapshot below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verify the connection to the new database from your local SQL client, using &lt;a href="https://aws.amazon.com/getting-started/hands-on/create-connect-postgresql-db/" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt;.&lt;/p&gt;
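&lt;p&gt;For instance, with the psql client installed locally, connecting with the sample details above would look like this (the hostname and credentials are the placeholder values from the example, not real ones):&lt;/p&gt;

```shell
# Connect to the RDS PostgreSQL instance; psql prompts for the password
psql --host=database-1.ch4a9dhlinpw.us-east-1.rds.amazonaws.com \
     --port=5432 \
     --username=postgres \
     --dbname=postgres
```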

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6uuax6tep3ojaucle7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc6uuax6tep3ojaucle7j.png" alt="creating a database" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. Set up &lt;a href="https://circleci.com/signup/?source-button=free/" rel="noopener noreferrer"&gt;CircleCI&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up project in CircleCI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvobfkruih3tztsvoz32v.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvobfkruih3tztsvoz32v.JPG" alt="setup project" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the SSH key (*.pem) to CircleCI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnixnj35m7g2tbrjk67po.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnixnj35m7g2tbrjk67po.JPG" alt="adding-ssh" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the environment variables to CircleCI by navigating to {project name} &amp;gt; Settings &amp;gt; Environment Variables.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWS_ACCESS_KEY_ID=(from IAM user with programmatic access)
AWS_SECRET_ACCESS_KEY= (from IAM user with programmatic access)
AWS_SESSION_TOKEN= (from IAM user with programmatic access)
AWS_DEFAULT_REGION=(your default region in aws)
TYPEORM_CONNECTION=postgres
TYPEORM_MIGRATIONS_DIR=./src/migrations
TYPEORM_ENTITIES=./src/modules/domain/**/*.entity.ts
TYPEORM_MIGRATIONS=./src/migrations/*.ts
TYPEORM_HOST={your postgres database hostname in RDS}
TYPEORM_PORT=5432 (or the port from RDS if it’s different)
TYPEORM_USERNAME={your postgres database username in RDS}
TYPEORM_PASSWORD={your postgres database password in RDS}
TYPEORM_DATABASE=postgres {or your postgres database name in RDS}
ENVIRONMENT=production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eza5jw7upn5h1ik5647.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3eza5jw7upn5h1ik5647.JPG" alt="Environment variables" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. When a change is pushed to a non-main branch on GitHub, the pipeline tests and scans the backend and frontend of the app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgc0q2q5r0ypjplhs5kar.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgc0q2q5r0ypjplhs5kar.JPG" alt="dev-branch" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. When a change is pushed to the main branch on GitHub, the full pipeline runs and deploys the backend and frontend of the app to AWS.&lt;/p&gt;
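&lt;p&gt;This branch-based behaviour is driven by workflow filters in .circleci/config.yml. A sketch of the idea (the job names here are illustrative):&lt;/p&gt;

```yaml
workflows:
  default:
    jobs:
      - test-backend          # runs on every branch
      - scan-frontend         # runs on every branch
      - deploy-infrastructure:
          requires: [test-backend, scan-frontend]
          filters:
            branches:
              only: [main]    # deployment jobs run only on main
```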

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgartzdq2sftctx7k6as.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgartzdq2sftctx7k6as.JPG" alt="main-branch" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5. The app allows you to add new employees. The frontend URL can be obtained through S3 and CloudFront, and the backend URL through EC2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnejcl22tmsody6qiax6.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnejcl22tmsody6qiax6.JPG" alt="ICF-stack" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiodzbrl9zvcd9tu4lzhk.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiodzbrl9zvcd9tu4lzhk.JPG" alt="frontend" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks103qjbkg2uztcgtfk4.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks103qjbkg2uztcgtfk4.JPG" alt="backend" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Taking the UdaPeople HR product as a case study, the benefits of an automated CI/CD pipeline range from practical considerations like code quality and rapid bug fixes to ensuring you’re building the right thing for your users and improving your entire software development process.&lt;/p&gt;

&lt;p&gt;Despite the name 'DevOps' suggesting a focus on developer and operations teams, building a CI/CD pipeline provides an opportunity for collaboration across a whole range of functions. By streamlining the steps to release your product, you provide your team with more insights into how your product is used and free up individuals’ time so they can focus on innovation.&lt;/p&gt;

&lt;p&gt;Did you enjoy reading the article? Drop a comment.&lt;/p&gt;

</description>
      <category>circleci</category>
      <category>ansible</category>
      <category>cloudformation</category>
      <category>prometheus</category>
    </item>
    <item>
      <title>Why Should You Consider Using Docker as a Developer?</title>
      <dc:creator>Dickson Victor</dc:creator>
      <pubDate>Mon, 20 Jun 2022 10:47:26 +0000</pubDate>
      <link>https://dev.to/techcrux/why-should-you-consider-using-docker-as-a-developer-3mnb</link>
      <guid>https://dev.to/techcrux/why-should-you-consider-using-docker-as-a-developer-3mnb</guid>
<description>&lt;p&gt;Hey awesome reader! In this article, I’ll give a basic introduction to Docker: how it works, why you, as a developer, should use it to package your applications, and its impact on the development ecosystem. Happy reading. &lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker?
&lt;/h2&gt;

&lt;p&gt;Docker is an open source containerization platform that makes it easier, simpler, and safer to build, deploy, and manage containerized applications. Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way. &lt;/p&gt;

&lt;h2&gt;
  
  
  How Does Docker Work?
&lt;/h2&gt;

&lt;p&gt;Docker packages an application and all its dependencies in a virtual container that can run on any Linux server. &lt;br&gt;
Each container runs as an isolated process in user space and takes up less space than a regular VM thanks to its layered architecture, so it will always work the same regardless of its environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjclbknahiy2scq0wa3sg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjclbknahiy2scq0wa3sg.png" alt="Docker container" width="345" height="146"&gt;&lt;/a&gt;&lt;/p&gt;
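&lt;p&gt;A container image is described by a Dockerfile. Here is a minimal illustrative sketch for a Node.js app (the file names and port are assumptions, not from any specific project):&lt;/p&gt;

```dockerfile
# Start from a small official Node.js base image
FROM node:16-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the application source and declare how to run it
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```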

&lt;h2&gt;
  
  
  Why should you use Docker?
&lt;/h2&gt;

&lt;p&gt;Let’s say you create an application and it works fine on your machine, but in production it doesn’t work properly (an experience many developers know well). The cause could be differences in dependencies, libraries and versions, frameworks, OS-level features, or microservices that exist on the developer’s machine but not in the production environment. &lt;br&gt;
We need a standardized way to package the application with its dependencies and deploy it in any environment, and that’s where Docker comes in. &lt;br&gt;
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. &lt;/p&gt;
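&lt;p&gt;In practice, packaging and running an app this way takes only a couple of commands (assuming Docker is installed and the project root contains a Dockerfile; the image name is illustrative):&lt;/p&gt;

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-app:1.0 .

# Run it in the background, mapping container port 3000 to the host
docker run -d -p 3000:3000 my-app:1.0

# The same image runs identically on any machine with Docker installed
docker ps
```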

&lt;h2&gt;
  
  
  Impact on the Development Ecosystem
&lt;/h2&gt;

&lt;p&gt;Traditionally, developing an application and deploying it in a production environment hasn’t been easy. Development and production environments are often configured differently, which makes it difficult to resolve problems quickly when something breaks. &lt;br&gt;
Docker is inexpensive, easy to deploy, and offers increased development mobility and flexibility. Here are some major impacts of using Docker: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As an open-source tool, Docker automates deploying applications in software containers. &lt;/li&gt;
&lt;li&gt;It allows developers to package applications with all required parts like libraries and other dependencies and deploy them as one package. &lt;/li&gt;
&lt;li&gt;Docker helps developers create a container for each application and focus more on building it without having to worry about the operating system. &lt;/li&gt;
&lt;li&gt;Developers can easily move their software from one host to another using containers to deploy software. Containers are portable and self-contained. &lt;/li&gt;
&lt;li&gt;Containers are the standard for delivering microservices in the cloud. Microservices are deployed in separate containers and use virtual networks when they need to communicate with each other. Each microservice container is isolated from the others, making debugging much easier. &lt;/li&gt;
&lt;li&gt;Containers are easy to install and run because they don’t require installing an application or runtime environment. All you need is a Docker container image running on a container platform such as Docker Engine or Kubernetes. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Docker is a popular containerization technology that makes it easy to deploy applications. In this article, we’ve looked at a basic introduction to Docker, how it works, why you should use it to package your applications, and its impact on the development ecosystem. Thanks for reading.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>webdev</category>
      <category>devops</category>
      <category>software</category>
    </item>
  </channel>
</rss>
