<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Angel Pizarro</title>
    <description>The latest articles on DEV Community by Angel Pizarro (@delagoya).</description>
    <link>https://dev.to/delagoya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F986097%2Fe2227b0e-8c85-4b47-9064-80fa1737c0e8.jpeg</url>
      <title>DEV Community: Angel Pizarro</title>
      <link>https://dev.to/delagoya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/delagoya"/>
    <language>en</language>
    <item>
      <title>Planning for deprecation of dependencies</title>
      <dc:creator>Angel Pizarro</dc:creator>
      <pubDate>Wed, 10 Apr 2024 19:55:06 +0000</pubDate>
      <link>https://dev.to/delagoya/planning-for-deprecation-of-dependencies-3a9n</link>
      <guid>https://dev.to/delagoya/planning-for-deprecation-of-dependencies-3a9n</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw9ow1y9fftt9la3j9ex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjw9ow1y9fftt9la3j9ex.png" alt="Image description" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As time marches on, so does technology. Either you adapt to changes as they approach, or you wake up to A Bad Day at work. That's what happened to one of my customers back in 2013, when they found they could not connect over RDP to their fleet of &amp;gt;1000 Windows instances because they had not been updating the images, even though the breaking change had been communicated for 3 years prior. Restarting the instances with a user data PowerShell script to update them fixed the issue, but it took a few days of development and testing before services were back online.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't have a bad day and take a look at your stack for possible deprecation!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are three examples that you may want to pay attention to if you work on AWS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Amazon EC2 Instance Metadata Service will default to IMDSv2 for new instances this summer. You can read more about that in this &lt;a href="https://aws.amazon.com/blogs/aws/amazon-ec2-instance-metadata-service-imdsv2-by-default/"&gt;blog post&lt;/a&gt;. While you can still turn on v1, there are &lt;a href="https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/"&gt;good security reasons to migrate&lt;/a&gt;. Read the docs on how to &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-metadata-transition-to-version-2.html"&gt;transition from v1 to v2&lt;/a&gt; today!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The AWS SDK for Go v1 &lt;a href="https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-aws-sdk-for-go-v1-on-july-31-2025/"&gt;will enter maintenance mode&lt;/a&gt; on July 31, 2024, and reach end of support on July 31, 2025. Time to update all those Terraform scripts! &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While the Amazon Linux 2 end-of-life was extended to 2025, you should really start migrating to AL2023 as soon as possible. There are a lot of fundamental differences that have a high chance of disrupting your applications if you do not test them. Read more about the &lt;a href="https://docs.aws.amazon.com/linux/al2023/ug/compare-with-al2.html"&gt;differences between AL2 and AL2023&lt;/a&gt; in the documentation and make plans to migrate within the next 6 months. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
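
&lt;p&gt;As a starting point for the IMDSv2 item, here is a minimal sketch (the function name and sample data are mine, not from any AWS tooling) that parses an EC2 &lt;code&gt;DescribeInstances&lt;/code&gt; response and flags instances that still accept IMDSv1:&lt;/p&gt;

```python
# Sketch: flag EC2 instances that still accept IMDSv1.
# The input matches the "Reservations" list from boto3's
# ec2.describe_instances() response.

def instances_missing_imdsv2(reservations):
    """Return IDs of instances where MetadataOptions.HttpTokens is not 'required'."""
    flagged = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            opts = instance.get("MetadataOptions", {})
            if opts.get("HttpTokens") != "required":
                flagged.append(instance["InstanceId"])
    return flagged

# Example with a hand-built response fragment (instance IDs are made up):
sample = [
    {"Instances": [
        {"InstanceId": "i-0aaa", "MetadataOptions": {"HttpTokens": "optional"}},
        {"InstanceId": "i-0bbb", "MetadataOptions": {"HttpTokens": "required"}},
    ]}
]
print(instances_missing_imdsv2(sample))  # ['i-0aaa']
```

&lt;p&gt;To enforce IMDSv2 on a flagged instance you would call &lt;code&gt;modify_instance_metadata_options&lt;/code&gt; with &lt;code&gt;HttpTokens="required"&lt;/code&gt;; test your workloads first, per the transition guide linked above.&lt;/p&gt;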

&lt;p&gt;These are just a few of the examples I have had conversations about in the last few weeks; there are more. It's worth repeating: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't have a bad day and take a look at your stack for possible deprecation!&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Why you may want to use AWS Batch on top of ECS or EKS</title>
      <dc:creator>Angel Pizarro</dc:creator>
      <pubDate>Wed, 20 Mar 2024 13:45:16 +0000</pubDate>
      <link>https://dev.to/delagoya/why-you-may-want-to-use-aws-batch-on-top-of-ecs-2c4e</link>
      <guid>https://dev.to/delagoya/why-you-may-want-to-use-aws-batch-on-top-of-ecs-2c4e</guid>
      <description>&lt;p&gt;Since AWS Batch is an overlay on top of other AWS container orchestration services (ECS and EKS), we sometimes get the question about why use it at all. &lt;/p&gt;

&lt;p&gt;Here are some reasons you may want to consider AWS Batch for handling batch-style and asynchronous workloads: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The job queue. Having a place that holds all of your tasks and handles API communication with ECS is a large value add in itself.&lt;/li&gt;
&lt;li&gt;Fair share scheduling - in case you have mixed workloads with different priorities or SLAs, a fair share job queue lets you specify the order of placement of jobs over time. See this &lt;a href="https://aws.amazon.com/blogs/hpc/deep-dive-on-fair-share-scheduling-in-aws-batch/"&gt;blog post&lt;/a&gt; for more information.&lt;/li&gt;
&lt;li&gt;Array jobs - a single API request for up to 10K jobs using the same job definition. Step Functions has a Map state, but underneath it submits a separate Batch job or ECS task for each map index, and you may reach API limits. Batch array jobs are specifically designed to handle the throughput of submitting tasks, with exponential back-off and error handling around ECS RunTask.&lt;/li&gt;
&lt;li&gt;Smart scaling of EC2 resources - Batch creates an EC2 Auto Scaling group for the instances, but it is not as simple as that. Batch managed scaling sends specific instructions to the ASG about which instances to launch based on the jobs in the queue. It also scales down nicely as you burn down your queued jobs, packing more jobs onto fewer instances at the tail end so that resources scale down faster.&lt;/li&gt;
&lt;li&gt;Job retries - you can set different retry conditions based on the exit code of your job. For example, if your job fails due to a runtime error, don't retry, since you know it will fail again. But if a job fails due to a Spot reclamation event, then retry the job.&lt;/li&gt;
&lt;/ol&gt;
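
&lt;p&gt;To illustrate the last point, here is a sketch of a &lt;code&gt;retryStrategy&lt;/code&gt; you could pass to &lt;code&gt;SubmitJob&lt;/code&gt; or a job definition. The matcher strings below are my assumptions, so verify them against the Batch documentation for your workloads:&lt;/p&gt;

```python
# Sketch of an AWS Batch retryStrategy (shape follows the Batch API's
# SubmitJob / RegisterJobDefinition parameters). The pattern strings
# are assumptions for illustration, not values from this post.
retry_strategy = {
    "attempts": 3,
    "evaluateOnExit": [
        # Spot reclamation surfaces as a host-level status reason: retry.
        {"onStatusReason": "Host EC2*", "action": "RETRY"},
        # Any other failure (e.g. a runtime error in the job): give up.
        {"onReason": "*", "action": "EXIT"},
    ],
}

# You would pass this as the retryStrategy parameter of
# batch_client.submit_job(...) in boto3.
print(retry_strategy["attempts"])  # 3
```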

&lt;p&gt;The above is not a complete list, but just some highlights. The following are some things about Batch to be aware of if you are thinking of using it for your workloads:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It is tuned for jobs with a minimum of 3 to 5 minutes of wall-clock runtime. If your individual work items take &amp;lt; 1 minute, you should pack multiple work items into a single job to increase the runtime. Example: "process these 10 files in S3".&lt;/li&gt;
&lt;li&gt;Sometimes a job at the head of the queue will block other jobs from running. There are a few reasons this may happen, such as an instance type being unavailable. Batch recently added CloudWatch Events for blocked job queues so you can detect and react to different blocking conditions. See this &lt;a href="https://aws.amazon.com/blogs/hpc/introducing-new-alerts-to-help-users-detect-and-react-to-blocked-job-queues-in-aws-batch/"&gt;blog post&lt;/a&gt; for more information.&lt;/li&gt;
&lt;li&gt;Batch is not designed for realtime or interactive responses - this is related to the job runtime tuning. Batch is unique among batch systems in that its scheduler and scaling logic work together. Other job schedulers assume either a static compute resource at the time they make a placement decision, or agents at the ready to accept work. The implication is that Batch runs a cycle: assess the job queue, place the jobs that can be placed, then scale resources for what remains in the queue. The challenge is that you don't want to over-scale. Since Batch has no insight into your jobs or how long they will take, it makes a call about what to bring up that will most cost-effectively burn down the queue, then waits to see the result before making another scaling call. That wait period is key for cost optimization, but it is suboptimal for realtime and interactive work. Could you make it work for these use cases? Maybe, but Batch was not designed for this, and there are better AWS services and open source projects you should turn to first for these requirements.&lt;/li&gt;
&lt;/ol&gt;
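
&lt;p&gt;For the first point, a common pattern is to pack short work items into chunks and fan out over the chunks with an array job, selecting a chunk with the &lt;code&gt;AWS_BATCH_JOB_ARRAY_INDEX&lt;/code&gt; environment variable that Batch sets in each array child job. A minimal sketch (the bucket name, file names, and chunk size are made up):&lt;/p&gt;

```python
import os

# Sketch: pack sub-minute work items into chunks so each Batch job runs
# for several minutes, then let an array job fan out over the chunks.
# AWS_BATCH_JOB_ARRAY_INDEX is set by Batch in each array child job;
# it defaults to "0" here so the script also runs standalone.

def chunk(items, size):
    """Split items into consecutive chunks of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical work items, e.g. S3 keys to process.
work_items = [f"s3://my-bucket/input/file-{n}.csv" for n in range(25)]
chunks = chunk(work_items, 10)

index = int(os.environ.get("AWS_BATCH_JOB_ARRAY_INDEX", "0"))
my_items = chunks[index]
print(f"Job index {index} will process {len(my_items)} files")
```

&lt;p&gt;You would then submit an array job with &lt;code&gt;arrayProperties&lt;/code&gt; sized to the number of chunks, and each child job processes only its own slice.&lt;/p&gt;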

&lt;p&gt;Again, not a complete list, but these represent some of the common challenges I've seen for new users. &lt;/p&gt;

&lt;p&gt;Hope this list helps, and if you have any questions, leave a comment below.  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Connecting to the underlying EC2 instance that is running your AWS Batch job</title>
      <dc:creator>Angel Pizarro</dc:creator>
      <pubDate>Thu, 18 Jan 2024 15:47:06 +0000</pubDate>
      <link>https://dev.to/delagoya/connecting-to-the-underlying-ec2-instance-that-is-running-your-aws-batch-job-45on</link>
      <guid>https://dev.to/delagoya/connecting-to-the-underlying-ec2-instance-that-is-running-your-aws-batch-job-45on</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/batch/"&gt;AWS Batch&lt;/a&gt; is a great service for submitting asynchronous and background work to. It's a managed service that adds job queue and compute scaling functionality to AWS container orchestration services - &lt;a href="https://aws.amazon.com/ecs/"&gt;ECS&lt;/a&gt; and &lt;a href="https://aws.amazon.com/eks/"&gt;EKS&lt;/a&gt; for these types of workloads. AWS Batch is optimized for jobs that are at least a few minutes long, and I've run days-long processes on it.&lt;/p&gt;

&lt;p&gt;Sometimes you need to connect to the underlying EC2 instance to debug or inspect the outputs of running containers, but the underlying container instance ID is not directly available through the Batch API. For jobs running on ECS, I wrote a small Python script that queries for the underlying instance ID of the job using &lt;a href="https://docs.aws.amazon.com/pythonsdk/"&gt;&lt;code&gt;boto3&lt;/code&gt;&lt;/a&gt; (and a little &lt;code&gt;regex&lt;/code&gt;) and then prints out a CLI command to connect to the instance using an &lt;a href="https://aws.amazon.com/systems-manager/features/#Session_Manager"&gt;AWS Systems Manager session&lt;/a&gt;. The script takes an AWS Batch job ID as a required parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;#!/usr/bin/env python 
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;regex&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;argparse&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ArgumentParser&lt;/span&gt;

&lt;span class="c1"&gt;# get a job id from the command line
&lt;/span&gt;&lt;span class="n"&gt;parser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ArgumentParser&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;parser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_argument&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;job_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;help&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The AWS Batch job ID to get the EC2 instance ID on which it ran.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse_args&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Get a Boto session 
&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Session&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;# Get a client for AWS Batch 
&lt;/span&gt;&lt;span class="n"&gt;batch_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;batch&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Get a client for AWS ECS
&lt;/span&gt;&lt;span class="n"&gt;ecs_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ecs&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Describe a batch job
&lt;/span&gt;&lt;span class="n"&gt;job_description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;batch_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;describe_jobs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;job_id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;container_instance_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;job_description&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;jobs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;container&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;containerInstanceArn&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# regex for pulling out the ECS cluster ID and container instance ID from a container instance ARN 
&lt;/span&gt;&lt;span class="n"&gt;regex_pattern&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:ecs:(?P&amp;lt;region&amp;gt;.*):(?P&amp;lt;account_id&amp;gt;.*):container-instance/(?P&amp;lt;cluster_id&amp;gt;.*)/(?P&amp;lt;container_instance_id&amp;gt;.*)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;regex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;regex_pattern&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;container_instance_arn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;cluster_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cluster_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;container_instance_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;container_instance_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Describe a container instance and get the instance ID
&lt;/span&gt;&lt;span class="n"&gt;container_instance_description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ecs_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;describe_container_instances&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cluster_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;containerInstances&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;container_instance_id&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;ec2_instance_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;container_instance_description&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;containerInstances&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ec2InstanceId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;To connect to the EC2 instance use the AWS CLI like so:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;aws ec2 connect-to-instance --instance-id &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;ec2_instance_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to find and connect to the underlying EC2 Instance for an AWS Batch job, I hope this script helps. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Updated Jan 22, 2024 to use a &lt;code&gt;boto3.Session&lt;/code&gt;, per the advice in &lt;a href="https://ben11kehoe.medium.com/boto3-sessions-and-why-you-should-use-them-9b094eb5ca8e"&gt;this post&lt;/a&gt;, which I agree with.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
  </channel>
</rss>
