<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Camille Chang</title>
    <description>The latest articles on DEV Community by Camille Chang (@camille_chang).</description>
    <link>https://dev.to/camille_chang</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3470807%2Fccf83fee-3245-4f20-9030-c12774d92501.jpeg</url>
      <title>DEV Community: Camille Chang</title>
      <link>https://dev.to/camille_chang</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/camille_chang"/>
    <language>en</language>
    <item>
      <title>OpenLens Cannot Connect to AWS EKS Cluster: executable aws not found</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Wed, 07 Jan 2026 23:24:31 +0000</pubDate>
      <link>https://dev.to/camille_chang/openlens-cannot-connect-to-aws-eks-cluster-e8i</link>
      <guid>https://dev.to/camille_chang/openlens-cannot-connect-to-aws-eks-cluster-e8i</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;I am attempting to use OpenLens locally to access a remote AWS EKS environment.&lt;br&gt;
AWS SSO has been configured successfully, and access via the local terminal works as expected. However, OpenLens fails to connect to the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment Setup
&lt;/h2&gt;

&lt;p&gt;AWS SSO login and verification:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws sso login --profile profileA&lt;br&gt;
aws sts get-caller-identity --profile profileA&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;EKS kubeconfig update:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws eks update-kubeconfig --name clusterA --profile profileA&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Actual Result
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Access via local terminal works:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;kubectl get nodes&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenLens fails to connect to the cluster and returns the following error:
&lt;u&gt;&lt;em&gt;Error while proxying request: getting credentials: exec: executable aws not found.
It looks like you are trying to use a client-go credential plugin that is not installed.
To learn more about this feature, consult the documentation available at:
&lt;a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins&lt;/a&gt;
Failed to get /version for clusterId=: Internal Server Error&lt;/em&gt;&lt;/u&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Fix
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;1. Verify the AWS CLI installation.
Run this in a terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;2. Add the AWS CLI to the system PATH.
First, check where it is installed:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;which aws
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Typical locations:&lt;br&gt;
&lt;code&gt;macOS (Intel): /usr/local/bin/aws&lt;br&gt;
macOS (Apple Silicon): /opt/homebrew/bin/aws&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then make the binary available where OpenLens can find it. GUI applications launched from the Dock do not inherit your shell's PATH, which is why the terminal works while OpenLens fails.&lt;br&gt;
On macOS (Apple Silicon – the most common case):&lt;br&gt;
&lt;code&gt;sudo ln -s /opt/homebrew/bin/aws /usr/local/bin/aws&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Restart OpenLens completely.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3. Alternatively, hardcode the full path to the AWS CLI in your kubeconfig.
Edit your kubeconfig:
&lt;code&gt;~/.kube/config&lt;/code&gt;
Change:
&lt;strong&gt;command: aws&lt;/strong&gt;
To:
&lt;strong&gt;command: /opt/homebrew/bin/aws&lt;/strong&gt;
This guarantees OpenLens can execute the CLI regardless of its PATH.&lt;/li&gt;
&lt;/ul&gt;
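&lt;p&gt;As a sketch of option 3, the kubeconfig edit can be scripted. This demo works on a scratch copy rather than your real file, and the Homebrew path is only the typical Apple Silicon location; adjust both for your machine:&lt;/p&gt;

```shell
# Demo on a scratch file; point it at your real ~/.kube/config when applying.
# /opt/homebrew/bin/aws is the typical Apple Silicon Homebrew path; adjust as needed.
KUBECONFIG_FILE=/tmp/kubeconfig-demo
printf 'users:\n- name: demo\n  user:\n    exec:\n      command: aws\n' > "$KUBECONFIG_FILE"
sed -i.bak 's|command: aws$|command: /opt/homebrew/bin/aws|' "$KUBECONFIG_FILE"
grep 'command:' "$KUBECONFIG_FILE"
```

&lt;p&gt;The &lt;code&gt;-i.bak&lt;/code&gt; flag keeps a backup of the original file, which is worth having before touching a real kubeconfig.&lt;/p&gt;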

</description>
      <category>aws</category>
      <category>help</category>
      <category>kubernetes</category>
      <category>tooling</category>
    </item>
    <item>
      <title>AWS DOP-C02 - Notes</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Sun, 04 Jan 2026 11:12:59 +0000</pubDate>
      <link>https://dev.to/camille_chang/aws-dop-c02-2pl8</link>
      <guid>https://dev.to/camille_chang/aws-dop-c02-2pl8</guid>
      <description>&lt;h3&gt;
  
  
  ECR
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;ECR pull-through cache rule &lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull-through-cache-creating-rule.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECR/latest/userguide/pull-through-cache-creating-rule.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Fault Injection Service (AWS FIS)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;FIS experiments stress an application by creating disruptive events so that you can observe how your application responds.
&lt;a href="https://docs.aws.amazon.com/fis/latest/userguide/what-is.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/fis/latest/userguide/what-is.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  lightsail vs beanstalk
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Lightsail: suited to simple applications and small-scale deployments.
&lt;a href="https://docs.aws.amazon.com/decision-guides/latest/lightsail-elastic-beanstalk-ec2/lightsail-elastic-beanstalk-ec2.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/decision-guides/latest/lightsail-elastic-beanstalk-ec2/lightsail-elastic-beanstalk-ec2.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Config
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A conformance pack is a collection of AWS Config rules and remediation actions that can be deployed as a single entity in an account and a Region, or across an organization in AWS Organizations.
&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CodeDeploy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;List of lifecycle event hooks for an Amazon ECS deployment

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;BeforeInstall, AfterInstall, AfterAllowTestTraffic, BeforeAllowTraffic, AfterAllowTraffic&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6glx19anv9hckygjojr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6glx19anv9hckygjojr.png" alt=" " width="277" height="846"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  SQS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SQS Dead-Letter Queue, useful for debugging your application because you can isolate unconsumed messages to determine why processing did not succeed. &lt;a href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
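&lt;p&gt;A dead-letter queue is wired up through a redrive policy on the source queue; a minimal sketch (the ARN and count are placeholder values):&lt;/p&gt;

```json
{
  "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:my-dlq",
  "maxReceiveCount": "5"
}
```

&lt;p&gt;Once a message has been received &lt;code&gt;maxReceiveCount&lt;/code&gt; times without being deleted, SQS moves it to the dead-letter queue.&lt;/p&gt;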

&lt;h3&gt;
  
  
  EC2 Image Builder
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;fully managed AWS service that helps you to automate the creation, management, and deployment of customized, secure, and up-to-date server images.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ALB vs NLB
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Both support WebSocket.&lt;/li&gt;
&lt;li&gt;ALB stickiness uses HTTP/HTTPS cookies to bind user sessions to specific targets.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>automation</category>
      <category>aws</category>
      <category>devops</category>
      <category>learning</category>
    </item>
    <item>
      <title>AWS Certified Developer - Associate (DVA-C02) Exam notes</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Sun, 14 Dec 2025 02:01:59 +0000</pubDate>
      <link>https://dev.to/camille_chang/aws-certified-developer-associate-dva-c02-exam-notes-324g</link>
      <guid>https://dev.to/camille_chang/aws-certified-developer-associate-dva-c02-exam-notes-324g</guid>
      <description>&lt;p&gt;Notes preparation before the exam.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS X-Ray
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;X-Ray daemon&lt;/code&gt; is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; End-of-support notice – On February 25th, 2027, AWS X-Ray will discontinue support for AWS X-Ray SDKs and the daemon. &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  S3
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;S3 Object Lambda&lt;/code&gt; provides different views of data to multiple applications: it allows you to add custom code to process and transform data retrieved from S3 buckets during standard GET, HEAD, and LIST API requests. This enables applications to access a tailored view of the data without needing to create or maintain separate derivative copies or manage a proxy layer.
&lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-s3/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/introducing-amazon-s3-object-lambda-use-your-code-to-process-data-as-it-is-being-retrieved-from-s3/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  DynamoDB
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;DynamoDB supports two kinds of primary keys:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Partition key&lt;/code&gt; – A simple primary key, composed of one attribute known as the partition key.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Partition key and sort key&lt;/code&gt; – Referred to as a composite primary key, this type of key is composed of two attributes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; Secondary index

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Global secondary index&lt;/code&gt; – An index with a partition key and sort key that can be different from those on the table. The primary key values in global secondary indexes don't need to be unique.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Local secondary index&lt;/code&gt; – An index that has the same partition key as the table, but a different sort key.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudFront
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CloudFront only supports ACM certificates issued in the US East (N. Virginia) Region (us-east-1).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amplify
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides a Git-based workflow for hosting full-stack serverless web applications with continuous deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  LB
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;ALB, layer 7, HTTP/HTTPS

&lt;ul&gt;
&lt;li&gt;X-Forwarded-For, helps you identify the IP address of a client &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;NLB, layer 4, TCP/UDP &lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Beanstalk
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Elastic Beanstalk provisions &lt;code&gt;Amazon EC2 instances&lt;/code&gt;, configures load balancing, sets up health monitoring, and dynamically scales your environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon DevOps Guru
&lt;/h3&gt;

&lt;p&gt;Amazon DevOps Guru is a machine learning service designed to detect abnormal operating patterns.&lt;/p&gt;

</description>
      <category>developer</category>
      <category>cloudcomputing</category>
      <category>aws</category>
      <category>learning</category>
    </item>
    <item>
      <title>AWS CloudOps Engineer - Associate (SOA-C03)</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Sun, 07 Dec 2025 19:18:31 +0000</pubDate>
      <link>https://dev.to/camille_chang/aws-cloudops-engineer-associate-soa-c03-5beh</link>
      <guid>https://dev.to/camille_chang/aws-cloudops-engineer-associate-soa-c03-5beh</guid>
      <description>&lt;p&gt;Although I've been using AWS for a while now, I still need to review a lot of the knowledge frequently.&lt;/p&gt;

&lt;p&gt;Preparation:&lt;br&gt;
1. Tried the Official Practice Question Set &lt;a href="https://awscertificationpractice.benchprep.com/app/official-practice-question-set-aws-certified-cloudops-engineer-associate-soa-c03#exams/details/315463" rel="noopener noreferrer"&gt;https://awscertificationpractice.benchprep.com/app/official-practice-question-set-aws-certified-cloudops-engineer-associate-soa-c03#exams/details/315463&lt;/a&gt; and passed with 80%.&lt;/p&gt;

&lt;p&gt;2. Knowledge points I had forgotten:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gateway VPC endpoints are available only for &lt;code&gt;Amazon S3&lt;/code&gt; and &lt;code&gt;DynamoDB&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;Gateway endpoints allow EC2 instances in private subnets to access Amazon S3 without using the internet or incurring data processing charges.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Reserved concurrency&lt;/code&gt; specifies the maximum number of concurrent instances of a Lambda function that can run at the same time.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Provisioned concurrency&lt;/code&gt; specifies the number of pre-initialized execution environments that a Lambda function has. Because initialization happens before invocation, the function responds with less cold-start latency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;IAM OIDC IdPs&lt;/code&gt; when you want to connect an &lt;code&gt;external&lt;/code&gt; OIDC-compatible IdP to AWS resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;IAM Identity Center&lt;/code&gt; provides user access from external IdPs to AWS applications by using the SAML protocol. You would typically use IAM Identity Center to provide &lt;strong&gt;single sign-on access&lt;/strong&gt; for external users to AWS services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;CloudWatch Logs data protection&lt;/strong&gt; policies help identify and protect sensitive data within CloudWatch Logs, such as personally identifiable information (PII) or PHI. They automatically detect sensitive data patterns and can invoke alerts or initiate actions when sensitive data is logged.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The SPF record&lt;/strong&gt; specifies which IP addresses are allowed to send email for the domain. To prevent spoofing and to improve email deliverability, you should add an SPF record as a TXT record.&lt;/li&gt;
&lt;li&gt;Rolling update deployment gradually replaces tasks with new versions. Rolling deployments maintain application availability by gradually updating tasks in small batches.&lt;/li&gt;
&lt;li&gt;A canary deployment shifts a small percentage of traffic for validation. A canary deployment with weighted routing requires additional infrastructure to split traffic. A weighted routing configuration leads to higher costs for this scenario.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Route53
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AAAA record-&amp;gt; IPv6&lt;/li&gt;
&lt;li&gt;A record-&amp;gt; IPv4&lt;/li&gt;
&lt;li&gt;Alias -&amp;gt; zone apex or an ALB DNS name&lt;/li&gt;
&lt;li&gt;CNAME -&amp;gt; point a subdomain like &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; to the root domain example.com, or map a domain to a different service provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  RDS
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;RDS proxy, pool and share database connections &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnorpf0smh8m7ferme8mk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnorpf0smh8m7ferme8mk.png" alt=" " width="800" height="322"&gt;&lt;/a&gt;&lt;br&gt;
Refer: &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  EC2
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cluster placement group: packs instances close together inside an Availability Zone, providing the low-latency network performance necessary for the tightly coupled node-to-node communication typical of high-performance computing (HPC) applications.&lt;/li&gt;
&lt;li&gt;Partition placement group: spreads your instances across logical partitions such that groups of instances in one partition do not share underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.&lt;/li&gt;
&lt;li&gt;Spread placement group: strictly places a small group of instances across &lt;code&gt;distinct underlying hardware&lt;/code&gt; to reduce correlated failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;EFS: can create &lt;code&gt;only one mount target&lt;/code&gt; per Availability Zone. &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;An &lt;code&gt;EBS snapshot&lt;/code&gt; is an &lt;code&gt;incremental backup&lt;/code&gt;: only the blocks on the volume that have changed since the most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;EBS fast snapshot restore (FSR)&lt;/code&gt; enables you to create a volume from a snapshot that is fully initialized at creation. This eliminates the latency of I/O operations on a block when it is accessed for the first time. Volumes created using fast snapshot restore instantly deliver all of their provisioned performance.&lt;/li&gt;
&lt;li&gt;RDS Performance Insights: visualizes the database load on your Amazon RDS DB instance and lets you filter the load by waits, SQL statements, hosts, or users. &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Aurora, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PITR is the robust, long-term backup and restore solution that involves creating a new database cluster.&lt;/li&gt;
&lt;li&gt;Backtracking is a quick, in-place "undo" feature for Aurora MySQL to recover from recent, minor errors with low Recovery Time Objective (RTO). &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Storage Gateway &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Gateway-stored volumes:
Store a full copy of the data locally while asynchronously backing it up to AWS.
For the backup application, it behaves like operating a local block storage device.&lt;/li&gt;
&lt;li&gt;Gateway-cached volumes:
Store most of the data in AWS, with only recently accessed data cached locally.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;ElastiCache&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memcached is lightweight and simple, good for read-heavy, non-persistent caching.&lt;/li&gt;
&lt;li&gt;Redis is feature-rich, supports high availability, persistence, and advanced data types, making it suitable for mission-critical caching and real-time applications.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  CloudFormation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;stack set&lt;/code&gt; lets you create stacks across multiple AWS accounts and Regions by using a single CloudFormation template.&lt;/li&gt;
&lt;li&gt;Custom resources: for provisioning requirements that involve complex logic or workflows that can't be expressed with CloudFormation's built-in resource types. &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  EC2
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When you &lt;code&gt;stop&lt;/code&gt; an instance, it shuts down.&lt;/li&gt;
&lt;li&gt;When you &lt;code&gt;start&lt;/code&gt; an instance, it is typically migrated to a new underlying host computer and assigned a new public IPv4 address. &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;reboot&lt;/code&gt;, An instance reboot is equivalent to an operating system reboot. In most cases, it takes only a few minutes to reboot your instance. When you reboot an instance, it keeps the following:

&lt;ul&gt;
&lt;li&gt;Public DNS name (IPv4)&lt;/li&gt;
&lt;li&gt;Private IPv4 address&lt;/li&gt;
&lt;li&gt;Public IPv4 address&lt;/li&gt;
&lt;li&gt;IPv6 address (if applicable)&lt;/li&gt;
&lt;li&gt;Any data on its instance store volumes&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;Terminate&lt;/code&gt;, After you terminate an instance, you can no longer connect to it, and it can't be recovered. All attached Amazon EBS volumes that are configured to be deleted on termination are also permanently deleted and can't be recovered. &lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Network
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Customer gateway,
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rse9yp6qpfh0p2c4j62.png" alt=" " width="661" height="271"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Route 53 Resolver,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inbound Resolver endpoints allow DNS queries from your on-premises network or another VPC -&amp;gt; your VPC.&lt;/li&gt;
&lt;li&gt;Outbound Resolver endpoints forward DNS queries from your VPC -&amp;gt; your on-premises network or another VPC.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Route 53 routing policy&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Geolocation, route traffic based on the location of your users&lt;/li&gt;
&lt;li&gt;Geoproximity, route traffic based on the location of your resources&lt;/li&gt;
&lt;li&gt;Latency-based&lt;/li&gt;
&lt;li&gt;Multivalue answer, returns up to eight healthy records selected at random&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  CloudWatch
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Collect process metrics with the procstat plugin&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CloudWatch Synthetics canary&lt;/code&gt;, create canaries, configurable scripts that run on a schedule, to monitor your endpoints and APIs. &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries.html&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tag Editor
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;user-defined cost allocation tags, you must activate them. &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/activating-tags.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Trusted Advisor inspects your AWS environment, and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.

&lt;ul&gt;
&lt;li&gt;Your &lt;code&gt;support plan&lt;/code&gt; determines the number of available Trusted Advisor checks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Others
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;AWS Service Catalog sharing, When you share a portfolio using account-to-account sharing or Organizations, you are sharing a reference of that portfolio. The products and constraints in the imported portfolio stay in sync with changes that you make to the shared portfolio, the original portfolio that you shared.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The recipient cannot change the products or constraints, but can add IAM access for end users. &lt;a href="https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_sharing_how-to-share.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_portfolios_sharing_how-to-share.html&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AWS Personal Health Dashboard, a personalized view of AWS service events that may affect your AWS resources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;OpsWorks supports Chef and Puppet&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AWS Control Tower, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated landing zone setup: Quickly sets up a well-architected, multi-account environment with features like dedicated log archive and audit accounts.&lt;/li&gt;
&lt;li&gt;Pre-configured controls: Provides a library of pre-packaged governance rules (guardrails) to enforce security, compliance, and operational policies. These can be preventive, detective, or proactive.&lt;/li&gt;
&lt;li&gt;Account Factory: Enables the provisioning of new AWS accounts that automatically comply with the established governance policies.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Attended and passed the exam on 7 Dec. There was one AI-related question.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Security certificate SCS-C03 updates</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Mon, 20 Oct 2025 23:23:22 +0000</pubDate>
      <link>https://dev.to/camille_chang/aws-security-certificate-scs-c03-updates-32e0</link>
      <guid>https://dev.to/camille_chang/aws-security-certificate-scs-c03-updates-32e0</guid>
      <description>&lt;p&gt;I attended the AWS Security exam on 18 October 2025 and received my result a little later than usual. Then I saw the news that the AWS Security exam has been updated to SCS-C03, and registration for the updated exam (SCS-C03) opens on November 18.&lt;/p&gt;

&lt;p&gt;I cannot find the AWS Certified Security - Specialty (SCS-C03) Exam Guide yet, but according to AWS, the exam now covers emerging technologies, with a dedicated emphasis on generative AI and machine learning security. To better support security professionals, the exam domains have been restructured, introducing separate sections for Detection and Incident Response capabilities.&lt;/p&gt;

&lt;p&gt;More details please check &lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/training-and-certification/big-news-aws-expands-ai-certification-portfolio-and-updates-security-certification/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/training-and-certification/big-news-aws-expands-ai-certification-portfolio-and-updates-security-certification/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>news</category>
      <category>security</category>
      <category>aws</category>
      <category>ai</category>
    </item>
    <item>
      <title>Complete Beginner's Guide: Upload Files to AWS S3 with GitHub Actions</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Wed, 08 Oct 2025 03:57:07 +0000</pubDate>
      <link>https://dev.to/camille_chang/complete-beginners-guide-upload-files-to-aws-s3-with-github-actions-357b</link>
      <guid>https://dev.to/camille_chang/complete-beginners-guide-upload-files-to-aws-s3-with-github-actions-357b</guid>
      <description>&lt;p&gt;If you're new to GitHub Actions and AWS, this guide will walk you through automating file uploads to an S3 bucket step by step. I'll share the common mistake I made and how to fix it, so you can avoid the same pitfalls!&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 What We're Building
&lt;/h2&gt;

&lt;p&gt;By the end of this tutorial, you'll have a GitHub Action that automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connects securely to your AWS account&lt;/li&gt;
&lt;li&gt;Uploads files to your S3 bucket whenever you push code&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📋 Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub repository&lt;/li&gt;
&lt;li&gt;An AWS account&lt;/li&gt;
&lt;li&gt;Basic familiarity with GitHub (knowing how to create files and commit changes)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔧 Step 1: Set Up Your S3 Bucket
&lt;/h2&gt;

&lt;p&gt;First, let's create an S3 bucket where your files will be stored:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into the AWS Console&lt;/li&gt;
&lt;li&gt;Navigate to S3&lt;/li&gt;
&lt;li&gt;Click "Create bucket"&lt;/li&gt;
&lt;li&gt;Give it a unique name (like &lt;code&gt;my-project-files-bucket&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Keep the default settings and create the bucket&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  🔐 Step 2: Create an IAM Role (The Tricky Part!)
&lt;/h2&gt;

&lt;p&gt;This is where I initially got stuck, so let's break it down:&lt;/p&gt;

&lt;h3&gt;
  
  
  What's an IAM Role?
&lt;/h3&gt;

&lt;p&gt;Think of an IAM role as a set of permissions that GitHub Actions can "borrow" to access your AWS resources. It's like giving GitHub a temporary key to your AWS account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the Role
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Go to IAM in AWS Console&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Click "Roles" → "Create role"&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Choose "Web identity" as the trusted entity type&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;For Identity provider, select "OpenID Connect"&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add this provider URL:&lt;/strong&gt; &lt;code&gt;token.actions.githubusercontent.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For Audience, enter:&lt;/strong&gt; &lt;code&gt;sts.amazonaws.com&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
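&lt;p&gt;Steps 3–6 end up producing a trust policy on the role roughly like the following (the account ID, org, and repo are placeholders you must replace; restricting the &lt;code&gt;sub&lt;/code&gt; claim keeps other repositories from assuming your role):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:*"
        }
      }
    }
  ]
}
```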

&lt;h3&gt;
  
  
  Adding Permissions
&lt;/h3&gt;

&lt;p&gt;Your role needs permission to upload files to S3. Attach this policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"s3:PutObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"s3:PutObjectAcl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::your-bucket-name/*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Replace &lt;code&gt;your-bucket-name&lt;/code&gt; with your actual S3 bucket name!&lt;/p&gt;
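&lt;p&gt;If you'd rather not hand-edit the JSON, here is a small Python sketch (a hypothetical helper of mine, not an AWS API) that fills in the bucket name and fails loudly if the placeholder is left behind:&lt;/p&gt;

```python
import json

# Fill in the bucket name and check that no placeholder text survives
# before pasting the policy into the IAM console.
POLICY_TEMPLATE = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:GetObject"],
        "Resource": "arn:aws:s3:::your-bucket-name/*",
    }],
}

def render_policy(bucket_name):
    text = json.dumps(POLICY_TEMPLATE).replace("your-bucket-name", bucket_name)
    assert "your-bucket-name" not in text, "placeholder not replaced"
    return json.loads(text)

policy = render_policy("my-project-files-bucket")
print(policy["Statement"][0]["Resource"])
# arn:aws:s3:::my-project-files-bucket/*
```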

&lt;h2&gt;
  
  
  ❌ The Problem I Ran Into
&lt;/h2&gt;

&lt;p&gt;When I first tried this, I got this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run aws-actions/configure-aws-credentials@v4
Configuring proxy handler for STS client
Error: Credentials could not be loaded, please check your action inputs: Could not load credentials from any providers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The issue? I forgot the most important part: the &lt;strong&gt;trust policy&lt;/strong&gt;!&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Step 3: Fix the Trust Policy (The Missing Piece!)
&lt;/h2&gt;

&lt;p&gt;Here's what I was missing. The IAM role needs to "trust" GitHub Actions. Here's the trust policy you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Federated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::YOUR-ACCOUNT-ID:oidc-provider/token.actions.githubusercontent.com"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRoleWithWebIdentity"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="nl"&gt;"token.actions.githubusercontent.com:sub"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"repo:YOUR-GITHUB-USERNAME/YOUR-REPO-NAME:ref:refs/heads/main"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="nl"&gt;"token.actions.githubusercontent.com:aud"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts.amazonaws.com"&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How to Apply This Trust Policy:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In your IAM role, click the "Trust relationships" tab&lt;/li&gt;
&lt;li&gt;Click "Edit trust policy"&lt;/li&gt;
&lt;li&gt;Replace the existing policy with the one above&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't forget to replace:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;YOUR-ACCOUNT-ID&lt;/code&gt; with your 12-digit AWS account ID&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;YOUR-GITHUB-USERNAME&lt;/code&gt; with your GitHub username&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;YOUR-REPO-NAME&lt;/code&gt; with your repository name&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
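&lt;p&gt;To make those replacements hard to forget, you can also generate the trust policy programmatically. This is only an illustrative sketch (the function name is mine, not an AWS API):&lt;/p&gt;

```python
# Build the GitHub Actions OIDC trust policy for a given account and repo,
# so the sub/aud conditions are always filled in consistently.
def github_oidc_trust_policy(account_id, owner, repo, branch="main"):
    provider = "token.actions.githubusercontent.com"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {"StringEquals": {
                f"{provider}:sub": f"repo:{owner}/{repo}:ref:refs/heads/{branch}",
                f"{provider}:aud": "sts.amazonaws.com",
            }},
        }],
    }

policy = github_oidc_trust_policy("123456789012", "octocat", "my-repo")
print(policy["Statement"][0]["Condition"]["StringEquals"][
    "token.actions.githubusercontent.com:sub"])
# repo:octocat/my-repo:ref:refs/heads/main
```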

&lt;h2&gt;
  
  
  🔍 How to Find Your AWS Account ID
&lt;/h2&gt;

&lt;p&gt;Not sure what your AWS account ID is? Here's how to find it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on your username in the top-right corner of AWS Console&lt;/li&gt;
&lt;li&gt;Your account ID is shown in the dropdown menu&lt;/li&gt;
&lt;/ol&gt;
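&lt;p&gt;Alternatively, run &lt;code&gt;aws sts get-caller-identity --query Account --output text&lt;/code&gt; in your terminal to print just the account ID. It is also the fifth colon-separated field of any IAM ARN, as this tiny sketch shows:&lt;/p&gt;

```python
# The account ID is the 5th colon-separated field of an IAM ARN,
# e.g. the Arn returned by `aws sts get-caller-identity`.
def account_id_from_arn(arn):
    return arn.split(":")[4]

print(account_id_from_arn("arn:aws:iam::123456789012:user/camille"))
# 123456789012
```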

&lt;h2&gt;
  
  
  🚀 Step 4: Create Your GitHub Action
&lt;/h2&gt;

&lt;p&gt;Now create a file in your repository at &lt;code&gt;.github/workflows/upload-to-s3.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload to S3&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;main&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;upload&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="c1"&gt;# This is crucial - it allows the action to get temporary credentials&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS credentials&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;role-to-assume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::YOUR-ACCOUNT-ID:role/YOUR-ROLE-NAME&lt;/span&gt;
        &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-east-1&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Upload files to S3&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;aws s3 cp ./your-file.txt s3://your-bucket-name/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Remember to replace:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;YOUR-ACCOUNT-ID&lt;/code&gt; with your AWS account ID&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;YOUR-ROLE-NAME&lt;/code&gt; with the name of your IAM role&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;your-bucket-name&lt;/code&gt; with your S3 bucket name&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;your-file.txt&lt;/code&gt; with the file you want to upload&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🎉 Testing Your Setup
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Commit and push your workflow file to GitHub&lt;/li&gt;
&lt;li&gt;Go to the "Actions" tab in your GitHub repository&lt;/li&gt;
&lt;li&gt;You should see your workflow running&lt;/li&gt;
&lt;li&gt;Check your S3 bucket - your files should appear there!&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  📚 Want to Learn More?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect" rel="noopener noreferrer"&gt;GitHub's official documentation on OIDC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="noopener noreferrer"&gt;AWS IAM roles explained&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Some reflections after passing the AWS Certified Machine Learning Engineer - Associate (MLA-C01) exam</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Sun, 05 Oct 2025 03:14:33 +0000</pubDate>
      <link>https://dev.to/camille_chang/some-reflections-after-passing-the-aws-certified-machine-learning-engineer-associate-exam-kn4</link>
      <guid>https://dev.to/camille_chang/some-reflections-after-passing-the-aws-certified-machine-learning-engineer-associate-exam-kn4</guid>
      <description>&lt;ol&gt;
&lt;li&gt;The exam itself is a bit easier than the &lt;strong&gt;MLS-C01&lt;/strong&gt; exam — some questions can be answered at a glance.
&lt;/li&gt;
&lt;li&gt;Around &lt;strong&gt;60%–80%&lt;/strong&gt; of the questions focus on &lt;strong&gt;SageMaker&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Below are my summarised notes from this attempt — I hope they can help anyone preparing for the exam in the future.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  🧠 SageMaker Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Data Wrangler&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides a user-friendly interface to clean, preprocess, and transform data without needing to write custom code.&lt;/li&gt;
&lt;li&gt;Includes built-in transformations to balance data, such as &lt;strong&gt;Random Oversampler/Undersampler&lt;/strong&gt; and &lt;strong&gt;SMOTE (Synthetic Minority Over-sampling Technique)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
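&lt;p&gt;To build intuition for what the Random Oversampler transform does, here is the idea in plain Python (Data Wrangler handles this for you; this is only a conceptual sketch):&lt;/p&gt;

```python
import random

# Random oversampling, conceptually: duplicate minority-class rows at
# random until every class is as frequent as the majority class.
def random_oversample(rows, label_of, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for row in rows:
        by_class.setdefault(label_of(row), []).append(row)
    target = max(len(group) for group in by_class.values())
    balanced = []
    for group in by_class.values():
        balanced.extend(group)
        balanced.extend(rng.choices(group, k=target - len(group)))
    return balanced

data = [("a", 0)] * 8 + [("b", 1)] * 2
balanced = random_oversample(data, label_of=lambda row: row[1])
print(sum(1 for r in balanced if r[1] == 1))  # 8
```

SMOTE goes one step further: instead of duplicating rows, it interpolates new synthetic minority samples between existing ones.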

&lt;h3&gt;
  
  
  &lt;strong&gt;Autopilot&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Automates the process of building and deploying machine learning models.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fknxiuqplsi23r4xlxpmd.png" alt="Autopilot Process" width="800" height="200"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Clarify&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Identifies potential bias during data preparation and explains predictions without needing custom code.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Debugger&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides tools to register hooks and callbacks to extract model output tensors.
&lt;/li&gt;
&lt;li&gt;Offers built-in rules to detect model convergence issues such as &lt;strong&gt;overfitting&lt;/strong&gt;, &lt;strong&gt;underutilized GPU&lt;/strong&gt;, and &lt;strong&gt;vanishing/exploding gradients&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Feature Attribution Drift&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use the &lt;strong&gt;ModelExplainabilityMonitor&lt;/strong&gt; class to generate a feature attribution baseline and deploy a monitoring mechanism that evaluates whether feature attribution drift has occurred.
&lt;/li&gt;
&lt;li&gt;Then deploy the baseline to &lt;strong&gt;SageMaker Model Monitor&lt;/strong&gt;.
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html" rel="noopener noreferrer"&gt;Learn more →&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Shadow Testing&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Enables testing of new ML models against production models using live data &lt;strong&gt;without impacting live inference traffic&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Helps identify potential configuration errors, performance issues, and other problems before full deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Neo&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Enables machine learning models to train once and run anywhere — both in the cloud and at the edge.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;JumpStart&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A machine learning hub with prebuilt models and solutions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Ground Truth&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Provides labeling workflows for creating high-quality training datasets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;FSx for Lustre&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Designed for large-scale ML training and HPC workloads.&lt;/li&gt;
&lt;li&gt;Can be linked directly to an &lt;strong&gt;S3 bucket&lt;/strong&gt;, caching data as needed.&lt;/li&gt;
&lt;li&gt;Requires minimal setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;ML Lineage Tracking&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Creates and stores metadata about ML workflow steps from data preparation to model deployment.&lt;/li&gt;
&lt;li&gt;Enables reproducibility, model governance, and audit tracking.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Canvas&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Allows users to import, prepare, transform, visualize, and analyze data using a visual interface.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📊 Model Monitoring in SageMaker
&lt;/h2&gt;

&lt;p&gt;SageMaker &lt;strong&gt;Model Monitor&lt;/strong&gt; provides the following types of monitoring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Quality&lt;/strong&gt; – Monitor drift in data quality.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Quality&lt;/strong&gt; – Monitor drift in model metrics such as accuracy.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bias Drift&lt;/strong&gt; – Monitor bias in model predictions.
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Attribution Drift&lt;/strong&gt; – Monitor changes in feature attribution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;SageMaker &lt;strong&gt;Endpoints&lt;/strong&gt; can enable data capture and reuse that data for retraining.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-data-capture-endpoint.html" rel="noopener noreferrer"&gt;Data capture docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bring Your Own Containers (BYOC)&lt;/strong&gt; — e.g., deploy ML models built with &lt;strong&gt;R&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor-byoc-containers.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/r-guide.html" rel="noopener noreferrer"&gt;R guide →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Network Isolation&lt;/strong&gt; — blocks internet and external network access.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/mkt-algo-model-internet-free.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asynchronous Inference&lt;/strong&gt; — suitable for large payloads (up to 1 GB) and long processing times (up to 1 hour).&lt;br&gt;&lt;br&gt;
Auto-scales to zero when idle, reducing costs.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/async-inference.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Batch Transform&lt;/strong&gt; — perform inference without persistent endpoints.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Inference&lt;/strong&gt; — supports payloads up to &lt;strong&gt;6 MB&lt;/strong&gt; for synchronous requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ⚖️ Model Explainability &amp;amp; Bias Detection
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Difference in Proportions of Labels (DPL)&lt;/strong&gt; — detects pre-training bias to prevent discriminatory models.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-data-bias-metric-true-label-imbalance.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Partial Dependence Plots (PDPs)&lt;/strong&gt; — illustrate how predictions change with one input feature.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-processing-job-analysis-results.html#clarify-processing-job-analysis-results-pdp" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shapley Values&lt;/strong&gt; — determine the contribution of each feature to model predictions.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-shapley-values.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧩 Other SageMaker Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TensorBoard Integration&lt;/strong&gt; — visualize the training process and debug model performance.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/tensorboard-on-sagemaker.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Store&lt;/strong&gt; — create feature groups, ingest records, and build datasets for training.&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/feature-store.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Managed Warm Pools&lt;/strong&gt; — retain and reuse infrastructure after training jobs to reduce latency for iterative workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inference Recommender&lt;/strong&gt; — automates load testing and helps select the best instance configuration for ML workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔍 Other AWS Services &amp;amp; Concepts
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;OpenSearch&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Can be used as a &lt;strong&gt;Vector Database&lt;/strong&gt;.
&lt;a href="https://aws.amazon.com/opensearch-service/serverless-vector-database/" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Data Augmentation&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Generates synthetic data to improve model training and reduce overfitting.
&lt;a href="https://aws.amazon.com/what-is/data-augmentation/" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Benefits:

&lt;ul&gt;
&lt;li&gt;Enhanced model performance
&lt;/li&gt;
&lt;li&gt;Reduced data dependency
&lt;/li&gt;
&lt;li&gt;Mitigates overfitting
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;AppFlow&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fully managed integration service for secure data transfer between SaaS apps (e.g., Salesforce, SAP, Google Analytics) and AWS (e.g., S3, Redshift).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Forecast&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Handles missing values in time-series forecasting.
&lt;a href="https://docs.aws.amazon.com/forecast/latest/dg/howitworks-missing-values.html" rel="noopener noreferrer"&gt;Docs →&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Glue&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;ETL service for preparing and transforming data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;DataBrew&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Visual data preparation tool with data quality rules, cleaning, and feature engineering.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🗣️ AI/ML Application Services
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Lex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Chatbot and call center solutions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Polly&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Text-to-speech service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Transcribe&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Speech-to-text&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Forecast&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Time-series forecasting&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Rekognition&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Image and video analysis (object detection, facial recognition)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Comprehend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;NLP for sentiment analysis, topic modeling, and PII redaction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Kendra&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Intelligent enterprise search with GenAI Index for RAG and digital assistants&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bedrock&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Managed API access to LLMs like Jurassic-2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Managed Service for Apache Flink&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fully managed real-time stream processing service (supports anomaly detection with &lt;code&gt;RANDOM_CUT_FOREST&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🧩 General ML Concepts
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings&lt;/strong&gt; — dense numeric vectors that capture semantic meaning, so similar items map to nearby points.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG (Retrieval-Augmented Generation)&lt;/strong&gt; — enriches responses with external knowledge sources.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temperature&lt;/strong&gt; — controls randomness of generative model output (low = focused, high = creative).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top_k&lt;/strong&gt; — limits token choices to top &lt;em&gt;k&lt;/em&gt; probabilities; higher values increase diversity.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recall&lt;/strong&gt; — focuses on minimizing false negatives.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Precision&lt;/strong&gt; — focuses on minimizing false positives.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concept Drift&lt;/strong&gt; — when data patterns change over time, degrading model accuracy.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MAE (Mean Absolute Error)&lt;/strong&gt; — measures the average magnitude of prediction errors.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning Rate&lt;/strong&gt; — controls training step size; too high overshoots, too low slows convergence.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trainium Chips&lt;/strong&gt; — AWS-built chips purpose-built for cost-efficient model training (AWS Inferentia is the inference-focused counterpart).&lt;/li&gt;
&lt;/ul&gt;
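&lt;p&gt;A quick way to remember the recall/precision distinction is to compute both from raw confusion-matrix counts:&lt;/p&gt;

```python
# Recall penalises false negatives; precision penalises false positives.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

p, r = precision_recall(tp=80, fp=20, fn=40)
print(p)  # 0.8
print(r)  # 0.6666666666666666
```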




&lt;h2&gt;
  
  
  📈 Performance Metrics
&lt;/h2&gt;

&lt;p&gt;Common evaluation metrics for ML models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Precision&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Recall&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Accuracy&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;F1 Score&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ROC&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AUC&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RMSE&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MAPE&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
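&lt;p&gt;For the regression metrics, it helps to compute RMSE and MAPE once by hand from their definitions (plain Python sketch):&lt;/p&gt;

```python
# RMSE: root of the mean squared error. MAPE: mean absolute
# percentage error, expressed as a percentage of the actual values.
def rmse(actual, predicted):
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

def mape(actual, predicted):
    n = len(actual)
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n

print(rmse([100, 200], [110, 190]))  # 10.0
print(mape([100, 200], [50, 100]))   # 50.0
```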

</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>career</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How to Pass the AWS Certified Data Engineer – Associate (DEA-C01) Exam 2025</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Wed, 24 Sep 2025 10:08:55 +0000</pubDate>
      <link>https://dev.to/camille_chang/how-to-pass-the-aws-certified-data-engineer-associate-exam-e01</link>
      <guid>https://dev.to/camille_chang/how-to-pass-the-aws-certified-data-engineer-associate-exam-e01</guid>
      <description>&lt;p&gt;This year, I finally have the time to settle down, focus on studying and working, and set a few small goals for myself. One of them is to obtain all 12 AWS certifications before the end of 2026. Last week, I passed this exam, so while it’s still fresh in my mind, I want to record my thoughts and experiences.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to Pass the AWS Certified Data Engineer – Associate Exam
&lt;/h1&gt;

&lt;p&gt;If you’re preparing for the &lt;strong&gt;AWS Certified Data Engineer – Associate&lt;/strong&gt; exam, here’s a simple and effective 4-step study guide to help you stay focused and increase your chances of passing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Check the Official Exam Guide
&lt;/h2&gt;

&lt;p&gt;Start with the official exam guide. It clearly lists the AWS services and topics covered in the exam:&lt;br&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/certification/certified-data-engineer-associate/" rel="noopener noreferrer"&gt;Official Exam Guide — AWS Certified Data Engineer – Associate&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;By reviewing the guide carefully, you’ll understand the exam scope and know which services to prioritise in your study plan.&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Watch Training Videos (Optional)
&lt;/h2&gt;

&lt;p&gt;There are many training videos available, but some are too long or overly detailed. Instead of relying only on them, I recommend:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating an AWS account to get &lt;strong&gt;hands-on practice&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Watching shorter tutorials to see what the &lt;strong&gt;service UIs&lt;/strong&gt; look like.
&lt;/li&gt;
&lt;li&gt;Paying attention to &lt;strong&gt;common service combinations&lt;/strong&gt;, since these often appear in exam questions. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recommend finishing the free courses provided by AWS:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flinybtvf12tfp8d0q9g6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flinybtvf12tfp8d0q9g6.png" alt=" " width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Practice Questions
&lt;/h2&gt;

&lt;p&gt;Practice is the key to success. &lt;br&gt;
I recommend finishing the free question sets provided by AWS:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnce33s0nkm1z365pb7od.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnce33s0nkm1z365pb7od.png" alt=" " width="417" height="242"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://skillbuilder.aws/learn/2JS5H1Z9KP/official-practice-question-set-aws-certified-data-engineer--associate-deac01--english/VX268Y5VBA" rel="noopener noreferrer"&gt;https://skillbuilder.aws/learn/2JS5H1Z9KP/official-practice-question-set-aws-certified-data-engineer--associate-deac01--english/VX268Y5VBA&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My suggestions:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use practice questions from &lt;strong&gt;at least two different providers&lt;/strong&gt;.
&lt;/li&gt;
&lt;li&gt;Don’t just memorize answers — read the &lt;strong&gt;explanations&lt;/strong&gt; carefully to understand the reasoning behind them.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 4: Identify Your Weak Points
&lt;/h2&gt;

&lt;p&gt;As you practice, you’ll quickly discover which areas are your weakest. Focus your study time on these services until you’re comfortable and confident with them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Core Analytics Services You Should Know
&lt;/h2&gt;

&lt;p&gt;Based on my preparation, these are the &lt;strong&gt;major services&lt;/strong&gt; you should focus on. Creating a personal cheat sheet for them is highly recommended:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Airflow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Orchestrates complex data pipelines with support for dependency management and scheduling.&lt;/li&gt;
&lt;li&gt;Cost is relatively high (an Airflow cluster runs continuously and incurs fixed expenses), so it may not be the most cost-effective choice for simple ETL scenarios.&lt;/li&gt;
&lt;li&gt;Open-source, cross-platform orchestration framework that can be deployed both on-premises and in the cloud.&lt;/li&gt;
&lt;li&gt;With MWAA (Managed Workflows for Apache Airflow), you can host Airflow in the cloud while maintaining compatibility with on-premises Airflow DAGs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon Athena&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Partition Projection&lt;/strong&gt;: Normally, Athena needs to enumerate all partitions from the AWS Glue Data Catalog before running a query. If you have thousands or millions of partitions, query planning time becomes a bottleneck. Partition Projection solves this by not storing partitions in Glue. Instead, Athena calculates partitions dynamically at query time based on rules you define.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MSCK REPAIR TABLE&lt;/strong&gt;: This command scans the underlying paths in Amazon S3, detects new folders that represent partitions, and automatically adds them to the AWS Glue Data Catalog. This ensures Athena can query the most up-to-date data without manually adding partitions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Athena vs. S3 Select&lt;/strong&gt;: S3 Select runs a simple SQL expression against a single S3 object, while Athena runs standard SQL across many objects and supports joins and aggregations.&lt;/li&gt;
&lt;/ul&gt;
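&lt;p&gt;The Partition Projection rules above are just table properties. A minimal sketch of the properties involved; the bucket name and the &lt;code&gt;dt&lt;/code&gt; partition column are hypothetical:&lt;/p&gt;

```python
# Sketch: table parameters that switch on Athena Partition Projection for a
# date-partitioned table. With these set on the Glue table, Athena computes
# partitions at query time instead of enumerating them from the Data Catalog.
projection_params = {
    "projection.enabled": "true",
    "projection.dt.type": "date",
    "projection.dt.range": "2023-01-01,NOW",
    "projection.dt.format": "yyyy-MM-dd",
    # Template Athena uses to map a computed partition to its S3 prefix
    "storage.location.template": "s3://example-bucket/events/dt=${dt}/",
}

def projected_partition_location(template: str, dt: str) -> str:
    """Resolve the S3 prefix Athena would read for one projected partition."""
    return template.replace("${dt}", dt)

print(projected_partition_location(
    projection_params["storage.location.template"], "2024-06-01"))
# s3://example-bucket/events/dt=2024-06-01/
```

&lt;p&gt;In practice these parameters are set on the Glue table, for example via &lt;code&gt;TBLPROPERTIES&lt;/code&gt; in the &lt;code&gt;CREATE TABLE&lt;/code&gt; DDL.&lt;/p&gt;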


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon EMR&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For terabyte-scale archived data, high-throughput, distributed processing (such as Spark or Hadoop) is required.&lt;/li&gt;
&lt;li&gt;Amazon EMR is a mature and scalable choice that can process large volumes of data in parallel.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;AWS Glue&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Glue workflows&lt;/strong&gt; orchestrate a series of crawlers, jobs, and triggers into a complete ETL pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Step Functions&lt;/strong&gt; is a fully managed service for building and orchestrating workflows across multiple AWS services. It integrates directly with AWS Glue and Amazon EMR, and supports retries, error handling, wait states, and parallel execution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glue DynamicFrames&lt;/strong&gt; provide a higher-level abstraction for semi-structured data (JSON, Parquet) with built-in schema inference. DynamicFrames include file-grouping options that:

&lt;ul&gt;
&lt;li&gt;Automatically merge many small files into larger partitions&lt;/li&gt;
&lt;li&gt;Reduce job startup and I/O overhead, significantly improving throughput when converting to Parquet and loading into Redshift&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glue DataBrew&lt;/strong&gt;: a no-code data preparation tool aimed at analysts, with 250+ built-in data cleansing and transformation operations (deduplication, missing-value filling, format conversion). Provides live visual previews of sample data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glue Transforms&lt;/strong&gt;: built-in transformations for common data cleansing and ML preprocessing (e.g., FindMatches for fuzzy deduplication).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job Bookmarking&lt;/strong&gt;: automatically tracks the last processed offset to avoid reprocessing data (incremental-load scenarios).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glue Studio&lt;/strong&gt;: a visual interface for designing ETL jobs. Drag-and-drop data flows automatically generate PySpark/Spark SQL code.&lt;/li&gt;
&lt;/ul&gt;
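&lt;p&gt;The small-file grouping above is controlled by connection options on the DynamicFrame reader. A minimal sketch; the S3 path is hypothetical, and since &lt;code&gt;awsglue&lt;/code&gt; only exists inside the Glue runtime, the actual read call is shown as a comment:&lt;/p&gt;

```python
# Sketch: file-grouping connection options for a Glue DynamicFrame read.
# Values are strings (groupSize is in bytes); the S3 path is hypothetical.
grouping_options = {
    "paths": ["s3://example-bucket/raw/"],
    "groupFiles": "inPartition",           # merge small files within each partition
    "groupSize": str(128 * 1024 * 1024),   # target roughly 128 MB per group
}

# Inside a Glue job this dict would be passed as connection_options, roughly:
#   dyf = glueContext.create_dynamic_frame.from_options(
#       connection_type="s3", format="json",
#       connection_options=grouping_options)

print(grouping_options["groupSize"])
```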


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;AWS Lake Formation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralizes and secures S3-based data lakes, simplifies permissions, cataloging, and governance&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Firehose&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully managed service for loading streaming data into S3, Redshift, or OpenSearch&lt;/li&gt;
&lt;li&gt;Can convert data formats, e.g., JSON → Parquet&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Streams&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time streaming data ingestion and processing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon Managed Service for Apache Flink&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed service for running Apache Flink applications for real-time stream processing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon MSK (Managed Streaming for Apache Kafka)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully managed Apache Kafka service for building streaming applications&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon OpenSearch Service&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed search and analytics service (formerly Amazon Elasticsearch Service)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon QuickSight&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SPICE (Super-fast, Parallel, In-memory Calculation Engine)&lt;/strong&gt; is the in-memory data engine used by Amazon QuickSight.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High performance&lt;/strong&gt;: Stores data in-memory to provide fast query and visualization responses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallel processing&lt;/strong&gt;: Can handle large datasets efficiently by distributing computations across multiple nodes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable&lt;/strong&gt;: Automatically scales to support increasing data volume and concurrent users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use case&lt;/strong&gt;: Ideal for interactive dashboards and analytics without waiting for live queries to complete.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Amazon Redshift&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Federated Query&lt;/strong&gt;: allows direct querying of external Aurora PostgreSQL (or RDS for PostgreSQL) databases. This lets you combine real-time transactional data with warehouse data during analysis without replicating it all into Redshift, retaining real-time insights and reducing replication costs.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Query Editor v2&lt;/strong&gt;: a web-based interface that supports query scheduling.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Materialized views&lt;/strong&gt;: a materialized view is a precomputed, stored version of a query result.

&lt;ul&gt;
&lt;li&gt;It improves query performance because the database can read from the stored result instead of recomputing it every time.&lt;/li&gt;
&lt;li&gt;Useful for complex or frequently used aggregations.&lt;/li&gt;
&lt;li&gt;Can be refreshed periodically to keep the data up-to-date.&lt;/li&gt;
&lt;/ul&gt;
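&lt;p&gt;The materialized-view lifecycle above can be sketched as plain SQL; the table and view names are hypothetical:&lt;/p&gt;

```python
# Sketch: the lifecycle of a Redshift materialized view, expressed as the
# SQL statements a client would submit. Names are hypothetical.
create_mv = """
CREATE MATERIALIZED VIEW daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;
"""

# Re-run periodically (e.g. on a Query Editor v2 schedule) to keep it fresh:
refresh_mv = "REFRESH MATERIALIZED VIEW daily_sales;"

# Readers then query the precomputed result instead of re-aggregating:
read_mv = "SELECT * FROM daily_sales WHERE sale_date = '2024-06-01';"

for stmt in (create_mv, refresh_mv, read_mv):
    print(stmt.strip())
```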


&lt;/li&gt;

&lt;li&gt; &lt;strong&gt;Redshift Spectrum&lt;/strong&gt; allows Amazon Redshift to query data directly in &lt;strong&gt;S3&lt;/strong&gt; without loading it into Redshift tables. Works with data stored in formats like Parquet, ORC, JSON, or CSV.&lt;/li&gt;

&lt;li&gt; &lt;strong&gt;Federated Query&lt;/strong&gt; enables Redshift to query &lt;strong&gt;external databases&lt;/strong&gt; (like RDS, Aurora, or even MySQL/PostgreSQL) directly. No need to move or copy data; queries run across Redshift and external sources.&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;COPY Command&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Efficiently loads data from S3, DynamoDB, EMR, and Kinesis into Redshift&lt;/li&gt;
&lt;li&gt;Supports parallel loading of multiple files (using manifest files)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;UNLOAD Command&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Exports Redshift tables to S3&lt;/li&gt;
&lt;li&gt;Can be used as part of a data lake or as an intermediate step in ETL&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;AWS Data Integration&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Glue ETL jobs can write directly into Redshift&lt;/li&gt;
&lt;li&gt;DMS (Database Migration Service) can synchronize data from RDS or on-premises databases into Redshift&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;TRUNCATE&lt;/strong&gt; in Redshift

&lt;ul&gt;
&lt;li&gt;A DDL operation that is more efficient than DELETE&lt;/li&gt;
&lt;li&gt;Immediately reclaims storage space&lt;/li&gt;
&lt;li&gt;Recommended for clearing tables or materialized views while recovering storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;/ul&gt;
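&lt;p&gt;A minimal sketch of the COPY and UNLOAD statements described above; the bucket, table, and IAM role ARN are hypothetical placeholders:&lt;/p&gt;

```python
# Sketch: building Redshift COPY and UNLOAD statements. The bucket, table,
# and IAM role ARN below are hypothetical placeholders.
ROLE_ARN = "arn:aws:iam::123456789012:role/example-redshift-role"

def copy_from_manifest(table: str, manifest_uri: str) -> str:
    # MANIFEST makes COPY load exactly the listed files, in parallel
    return (
        f"COPY {table} FROM '{manifest_uri}' "
        f"IAM_ROLE '{ROLE_ARN}' FORMAT AS PARQUET MANIFEST;"
    )

def unload_to_s3(query: str, s3_prefix: str) -> str:
    # UNLOAD writes the query result to S3, e.g. as a data-lake export
    return (
        f"UNLOAD ('{query}') TO '{s3_prefix}' "
        f"IAM_ROLE '{ROLE_ARN}' FORMAT AS PARQUET;"
    )

print(copy_from_manifest("sales", "s3://example-bucket/manifests/sales.manifest"))
print(unload_to_s3("SELECT * FROM sales", "s3://example-bucket/exports/sales_"))
```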

&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon DataZone&lt;/strong&gt; provides a fully managed data catalog and governance service with built-in support for business glossaries, custom metadata forms, and a user-friendly data portal. You can onboard your existing data assets, define glossaries, and capture business metadata without building or maintaining custom tables, APIs, or applications, minimizing operational overhead.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Passing this exam isn’t about memorizing every single detail — it’s about understanding how AWS data services work together to build real-world solutions.  &lt;/p&gt;

&lt;p&gt;Good luck 🚀  &lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>dataengineering</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How I passed AWS Certified Machine Learning — Specialty 2025</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Sun, 14 Sep 2025 10:11:57 +0000</pubDate>
      <link>https://dev.to/camille_chang/how-i-passed-aws-certified-machine-learning-specialty-2025-20f8</link>
      <guid>https://dev.to/camille_chang/how-i-passed-aws-certified-machine-learning-specialty-2025-20f8</guid>
      <description>&lt;p&gt;I passed the exam in April this year, and I’d like to share my personal experience and some tips for beginners. My background is not in data science, and I had only used some basic AWS services before starting my preparation. If you are new to AWS or don’t have a technical background, don’t worry — with the right approach, you can still pass the exam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Give Yourself Enough Preparation Time&lt;/strong&gt;&lt;br&gt;
I recommend setting aside at least one month to prepare. Start by visiting the official AWS exam guide &lt;a href="https://aws.amazon.com/certification/certified-machine-learning-specialty" rel="noopener noreferrer"&gt;https://aws.amazon.com/certification/certified-machine-learning-specialty&lt;/a&gt; to understand which services and topics are covered. This helps you focus your study on the areas that are most important for the exam.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Take an Online Course&lt;/strong&gt;&lt;br&gt;
Choose one structured online course from any learning platform (Udemy, Coursera, A Cloud Guru, etc.). Most of the courses are quite dry, so don’t expect them to be very exciting, but they will give you the necessary foundation.&lt;/p&gt;

&lt;p&gt;While following the course, take notes. If possible, create your own AWS account and explore the services. Familiarising yourself with the AWS Management Console UI helps you remember what each service looks like.&lt;/p&gt;

&lt;p&gt;For anything that confuses you, search YouTube; there are plenty of hands-on demos available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Practice with Questions&lt;/strong&gt;&lt;br&gt;
After finishing the course, start doing practice exams. These will show you your weak points. Whenever you find a service or concept you don’t understand, search for a short, clear video on YouTube. Visual explanations can often make things much easier to grasp than just reading documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Study Routine&lt;/strong&gt;&lt;br&gt;
My personal schedule was about one to two hours of study per day. This consistent pace worked well without overwhelming me. If you stay disciplined, one month of focused study is enough for many beginners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Additional Tips&lt;/strong&gt;&lt;br&gt;
Don’t try to memorize everything — focus on understanding the use cases of each AWS service.&lt;br&gt;
Learn how different services integrate (e.g., S3 with Lambda, Glue with Athena, etc.).&lt;br&gt;
Use free-tier resources in your AWS account to test things hands-on. Doing is always better than just reading.&lt;br&gt;
Right before the exam, review the services that come up most often in practice questions.&lt;br&gt;
&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
You don’t need to be a data scientist or an AWS expert to pass this exam. With a clear study plan, the right resources, and steady practice, you can achieve it too.&lt;br&gt;
My biggest advice is: stay consistent, use practice questions, and don’t hesitate to look for simpler explanations when something feels confusing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>From Certification to Real-World AWS: Troubleshooting</title>
      <dc:creator>Camille Chang</dc:creator>
      <pubDate>Thu, 11 Sep 2025 01:45:06 +0000</pubDate>
      <link>https://dev.to/camille_chang/from-certification-to-real-world-aws-troubleshooting-1e09</link>
      <guid>https://dev.to/camille_chang/from-certification-to-real-world-aws-troubleshooting-1e09</guid>
      <description>&lt;p&gt;When I started working in IT, I thought AWS certifications would give me the confidence to handle real-world challenges. I passed several exams, but in practice, I realised certifications only provide a broad understanding of AWS services. The real lessons come when you’re stuck debugging production issues — that’s when the learning truly sticks.&lt;/p&gt;

&lt;p&gt;Recently, I encountered a particularly challenging problem at work. In repo A, I wrote a feature in AWS SAM (YAML) using Lambda, SNS, EventBridge, and S3. Unexpectedly, my colleague said that our new feature needed to be in repo B instead. Repo B was written in Terraform, but it didn’t yet have reusable modules for Lambda — and I had zero Terraform experience. What I thought would take a single day to deploy and test ended up taking me several days, even with the help of AI. Finally, my PR was approved, the code was merged and deployed… and then IAM role issues appeared: insufficient permissions.&lt;/p&gt;

&lt;p&gt;The role’s definition was in another repo, C, built with CloudFormation and CodeBuild. Many AWS policies couldn’t be reused because they used overly broad Resource * permissions. We needed fine-grained policies specifying the exact actions and resources. At first, I tried to add many actions at once into the role, but CodeBuild kept failing. I switched to adding them incrementally — compile, deploy repo B, test permissions, repeat. This wasted an entire day.&lt;/p&gt;

&lt;p&gt;In the evening, a colleague joined me and pointed out the real problem: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Cannot exceed quota for policy size: 6144. ServiceLimiteExceeded.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At that moment, everything clicked. I created a new IAM policy, attached it to the role, and the issue was resolved.&lt;/p&gt;

&lt;p&gt;Looking back, I realized I was only seeing part of the problem. Each CodeBuild run would generate logs in S3, and I only looked at those logs, which simply said “role failed to update.” I hadn’t gone deeper into the corresponding CloudFormation stack to investigate the exact reason. If I had checked that right away, I could have avoided wasting so much time adding permissions piece by piece.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check root causes early – Logs in S3 only said “role failed to update.” The detailed error was in CloudFormation. I should have traced it sooner.&lt;/li&gt;
&lt;li&gt;Don’t overload IAM policies – AWS IAM has a strict 6,144-character policy size limit. Split large policies into smaller ones.&lt;/li&gt;
&lt;li&gt;Hands-on beats theory – Certifications gave me a foundation, but real troubleshooting taught me far more.&lt;/li&gt;
&lt;li&gt;Ask for help – Sometimes a colleague’s perspective saves hours (or days).&lt;/li&gt;
&lt;/ul&gt;
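&lt;p&gt;The 6,144-character quota only surfaced after deployment. A small offline check, shown here as a sketch with a hypothetical sample policy, could have caught it earlier:&lt;/p&gt;

```python
import json

# The quota the error message referred to: an IAM managed policy document
# may be at most 6,144 characters, and IAM does not count whitespace.
MANAGED_POLICY_MAX_CHARS = 6144

def policy_chars(policy: dict) -> int:
    # Serialize without whitespace, matching how IAM measures the document
    return len(json.dumps(policy, separators=(",", ":")))

def fits_quota(policy: dict) -> bool:
    return MANAGED_POLICY_MAX_CHARS - policy_chars(policy) >= 0

# Hypothetical sample policy for illustration
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::example-bucket/*"],
    }],
}
print(policy_chars(policy), fits_quota(policy))
```

&lt;p&gt;Running a check like this in CI before CodeBuild deploys the role would turn a day of trial-and-error into a failed build with a clear message.&lt;/p&gt;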

</description>
      <category>aws</category>
      <category>iam</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
