Brent G Saucedo

10 Tough AWS SAA-C03 Free Practice Questions (Scenario-Based)

Preparing for the AWS Certified Solutions Architect – Associate (SAA-C03)? Many practice exams focus on simple definitions, but the real exam is heavy on scenario-based questions that test your ability to integrate multiple services.

Here are 10 difficult questions focusing on complex case studies, hybrid architectures, and cost optimization.


Question 1: Hybrid Connectivity & High Availability

Scenario:
A company has a hybrid architecture with a Direct Connect connection (1 Gbps) between their on-premises data center and their VPC in us-east-1. Critical financial applications require highly available connectivity with consistent network performance. The company wants to ensure that if the Direct Connect connection fails, traffic automatically fails over to a backup connection without compromising the bandwidth requirement or traversing the public internet.

Which solution meets these requirements MOST cost-effectively?

  • (A) Provision a secondary Direct Connect connection of 1 Gbps at the same Direct Connect location. Use BGP to handle failover.
  • (B) Configure a Site-to-Site VPN as a backup to the Direct Connect connection. Use ECMP to aggregate bandwidth.
  • (C) Provision a secondary Direct Connect connection of 1 Gbps at a different Direct Connect location. Use BGP to handle failover.
  • (D) Use the AWS Transit Gateway to aggregate multiple VPN connections to match the 1 Gbps bandwidth and serve as a backup.

Answer: (C)

Explanation:
To achieve High Availability (HA) for Direct Connect, AWS recommends using redundant connections at different Direct Connect locations to protect against location-specific failures (e.g., a fire or power outage at the colocation facility).

  • Option C is correct because it provides a physically redundant connection at a different location with the same dedicated bandwidth (consistent performance) and avoids the public internet. A provisioning sketch follows this list.
  • Option A is less resilient because a failure at that specific Direct Connect location would sever both links.
  • Options B and D rely on VPNs, which traverse the public internet. While they are valid backup options for some use cases, they do not guarantee consistent network performance (jitter/latency) the way a dedicated line does.
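
To make Option C concrete, here is a minimal boto3 sketch of ordering the redundant 1 Gbps connection at a second Direct Connect location. The location code and connection name are placeholder assumptions, and BGP failover itself is configured on the virtual interfaces and your router, not through this API call.

```python
import boto3

dx = boto3.client("directconnect")

# Order a second dedicated 1 Gbps connection at a DIFFERENT Direct Connect
# location than the primary, so a facility-level failure can't sever both links.
# "CSVA1" and the connection name are hypothetical placeholders.
dx.create_connection(
    location="CSVA1",                  # different location than the primary connection
    bandwidth="1Gbps",                 # same dedicated bandwidth keeps performance consistent
    connectionName="dx-backup-1gbps",
)
# Automatic failover between the two connections is then handled by BGP on the
# private virtual interfaces attached to each connection.
```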

Question 2: Cost Optimization for Data Archival

Scenario:
A hospital manages a medical imaging archive containing approximately 500 TB of data. Images are accessed frequently during the first 30 days for diagnosis. After 30 days, regulations require the data to be retained for 10 years. Data older than 30 days is rarely accessed but must be retrievable within 12 hours if an audit occurs. The solution must be as cost-effective as possible while automating the lifecycle.

Which S3 Lifecycle configuration should the Solutions Architect recommend?

  • (A) Transition objects to S3 Standard-IA after 30 days. Transition to S3 Glacier Deep Archive after 90 days.
  • (B) Transition objects to S3 One Zone-IA after 30 days. Transition to S3 Glacier Flexible Retrieval after 90 days.
  • (C) Transition objects to S3 Glacier Instant Retrieval after 30 days. Transition to S3 Glacier Deep Archive after 365 days.
  • (D) Transition objects to S3 Standard-IA after 30 days. Transition to S3 Glacier Deep Archive after 30 days (concurrently).

Answer: (A)

Explanation:

  • Option A is the most cost-effective valid strategy. Moving objects to S3 Standard-IA after 30 days matches the shift to "rarely accessed" while keeping the data instantly retrievable if it is needed shortly after the diagnosis window closes. Moving to S3 Glacier Deep Archive (the cheapest storage class) after 90 days is appropriate for long-term retention where a 12-hour retrieval time is acceptable. A lifecycle-policy sketch follows this list.
  • Option B uses S3 One Zone-IA, which risks data loss (medical data usually requires multi-AZ resilience), and S3 Glacier Flexible Retrieval is more expensive than Deep Archive.
  • Option C uses Glacier Instant Retrieval, which is more expensive per GB than S3 Standard-IA for storage.
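
For reference, Option A maps to a single S3 Lifecycle rule. Below is a minimal boto3 sketch; the bucket name, rule ID, and the expiration value (retention for 10 years after the 30-day window) are placeholder assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Lifecycle rule implementing Option A; bucket name and rule ID are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="medical-imaging-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-diagnosis-window",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
                # Assumed: delete once the 10-year retention after the
                # 30-day diagnosis window has elapsed (30 + 3650 days).
                "Expiration": {"Days": 3680},
            }
        ]
    },
)
```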

Question 3: Serverless Microservices & Authentication

Scenario:
A startup is building a mobile app using a serverless architecture. The backend consists of Amazon API Gateway triggering AWS Lambda functions. The user data is stored in Amazon DynamoDB. The startup wants to implement user sign-up and sign-in functionality and needs to ensure that the backend resources are protected so that only authenticated users can access specific API routes. They want to minimize development effort regarding security protocols.

What is the MOST operationally efficient solution?

  • (A) Create a custom Lambda Authorizer that verifies JWT tokens generated by a custom login service hosted on EC2.
  • (B) Use Amazon Cognito User Pools. Configure an Amazon Cognito Authorizer in API Gateway to validate the tokens.
  • (C) Use AWS IAM Identity Center (AWS SSO) to manage external users and assign IAM roles to mobile devices.
  • (D) Store user credentials in DynamoDB. Modify the Lambda functions to query DynamoDB for credentials on every request.

Answer: (B)

Explanation:

  • Option B is the correct answer. Amazon Cognito is a managed service specifically designed for user authentication (Sign-up/Sign-in) for mobile and web apps. By using the built-in Cognito Authorizer in API Gateway, you offload the authentication logic entirely from your Lambda code, minimizing development effort and security overhead. A configuration sketch follows this list.
  • Option A requires building and maintaining custom authentication logic (undifferentiated heavy lifting).
  • Option C is generally for workforce/internal authentication, not for external public-facing mobile app users.
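
Here is roughly what wiring a Cognito User Pool into an API Gateway REST API looks like with boto3. The REST API ID and user pool ARN are placeholder assumptions.

```python
import boto3

apigw = boto3.client("apigateway")

# Attach a Cognito User Pool authorizer to an existing REST API.
# The REST API ID and user pool ARN are hypothetical placeholders.
apigw.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="MobileAppUserPoolAuthorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"
    ],
    # API Gateway validates the Cognito JWT sent in the Authorization header,
    # so the Lambda functions never handle authentication themselves.
    identitySource="method.request.header.Authorization",
)
```

Each protected route is then configured to use this authorizer (authorizationType COGNITO_USER_POOLS), so unauthenticated requests are rejected before they ever reach Lambda.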

Question 4: Decoupling and Scaling Architectures

Scenario:
An e-commerce platform experiences traffic spikes during flash sales. Currently, the order processing system (running on EC2 instances) fails when the database gets overwhelmed by write requests. The business needs to decouple the ingestion of orders from the processing to ensure no orders are lost, even if the processing system slows down. The solution must process orders in the exact order they were received.

Which architecture should be implemented?

  • (A) Use Amazon SQS Standard Queue to buffer orders. Configure EC2 instances to poll the queue.
  • (B) Use Amazon SNS to publish orders to an HTTP endpoint on the processing instances.
  • (C) Use Amazon SQS FIFO (First-In-First-Out) Queue. Configure EC2 instances to process messages.
  • (D) Use Amazon Kinesis Data Streams to ingest orders. Use Lambda to process the stream.

Answer: (C)

Explanation:

  • Option C is correct because SQS FIFO queues provide decoupling (buffering) while guaranteeing ordering (First-In-First-Out) and exactly-once processing, ensuring orders are processed in the order they were received. A queue-setup sketch follows this list.
  • Option A (SQS Standard) provides "best-effort" ordering and at-least-once delivery, which could result in out-of-order processing or duplicate orders.
  • Option B (SNS) is a push mechanism, not a buffer/queue, and doesn't solve the issue of the downstream system being overwhelmed.
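
A minimal boto3 sketch of the FIFO buffering pattern is below; the queue name and order payload are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

# Create the order buffer. FIFO queue names must end in ".fifo".
queue = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",  # drop accidental duplicate submissions
    },
)
queue_url = queue["QueueUrl"]

# Producers enqueue orders; a shared MessageGroupId preserves strict ordering.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"orderId": "1001", "total": 59.99}',
    MessageGroupId="flash-sale-orders",
)

# Consumers on EC2 poll at their own pace, so traffic spikes never overwhelm
# the database and unprocessed orders simply wait in the queue.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
```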

Question 5: File Storage for HPC

Scenario:
A research lab is migrating a High Performance Computing (HPC) workload to AWS. The application runs on hundreds of Linux EC2 instances and requires a shared file system that provides sub-millisecond latencies and high throughput (hundreds of GB/s). The data must be accessible concurrently by all instances.

Which storage service is the BEST fit?

  • (A) Amazon EFS with Max I/O performance mode.
  • (B) Amazon FSx for Lustre.
  • (C) Amazon S3 connected via Storage Gateway File Gateway.
  • (D) Amazon EBS Provisioned IOPS volumes attached to the instances.

Answer: (B)

Explanation:

  • Option B is correct. Amazon FSx for Lustre is specifically designed for High Performance Computing (HPC) workloads requiring sub-millisecond latencies and massive throughput for parallel processing. It can be linked to S3 for long-term storage while acting as a high-speed cache. A provisioning sketch follows this list.
  • Option A (EFS) is a general-purpose NFS file system. While scalable, it typically doesn't match the sheer throughput of Lustre for HPC-specific tasks.
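
Provisioning such a file system is a single API call; here is a minimal boto3 sketch. The subnet ID, S3 bucket, and capacity are placeholder assumptions (a real HPC deployment would size capacity and throughput much higher).

```python
import boto3

fsx = boto3.client("fsx")

# Create a scratch FSx for Lustre file system lazy-loaded from an S3 dataset.
# The subnet ID and bucket name are hypothetical placeholders.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB; SCRATCH_2 accepts 1200 or multiples of 2400
    SubnetIds=["subnet-0abc1234"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://hpc-research-dataset",  # link to the lab's S3 data
    },
)
# Each EC2 instance then mounts the file system with the Lustre client,
# giving all nodes concurrent access to the same namespace.
```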

Question 6: Secure S3 Access from VPC

Scenario:
You have an application running on EC2 instances within a private subnet. The application needs to download software patches stored in an S3 bucket in the same region. Security policies prohibit any traffic from traversing the public internet. The architecture currently has no NAT Gateway or Internet Gateway.

What should you configure to allow access?

  • (A) Configure a NAT Gateway in a public subnet and update the private subnet route table.
  • (B) Create a Gateway VPC Endpoint for S3 and update the route table of the private subnet.
  • (C) Create an Interface VPC Endpoint (PrivateLink) for S3.
  • (D) Establish a VPC Peering connection between the VPC and the S3 service VPC.

Answer: (B)

Explanation:

  • Option B is the standard, cost-effective solution. A Gateway VPC Endpoint allows instances in a private subnet to access S3 (and DynamoDB) privately without using public IPs, NAT Gateways, or the internet. It requires only a route table entry and carries no additional charge. A sketch follows this list.
  • Option A would work but routes traffic through a NAT Gateway, which implies internet egress capability and adds cost.
  • Option C (an Interface Endpoint for S3) also keeps traffic private, but it incurs hourly and data-processing charges, making the Gateway Endpoint the better fit here.
  • Option D is incorrect; you cannot peer a VPC with the underlying "S3 service VPC".
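
The Gateway Endpoint from Option B is a one-call setup; below is a minimal boto3 sketch with placeholder VPC and route table IDs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Gateway VPC Endpoint for S3 and associate the private subnet's
# route table; the VPC and route table IDs are hypothetical placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0def5678"],  # adds a route to the S3 prefix list
)
```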

Question 7: Database Migration & Schema Conversion

Scenario:
A company wants to migrate an on-premises Oracle database to Amazon Aurora PostgreSQL. The database contains complex stored procedures and views. The company needs a tool to assess the complexity of the migration and convert the database schema before migrating the data.

Which combination of tools should be used?

  • (A) AWS DataSync and AWS Database Migration Service (DMS).
  • (B) AWS Schema Conversion Tool (SCT) and AWS Database Migration Service (DMS).
  • (C) AWS Migration Hub and AWS Application Discovery Service.
  • (D) Native Oracle RMAN and Amazon RDS Read Replicas.

Answer: (B)

Explanation:

  • Option B is the correct workflow for heterogeneous migrations (Oracle to PostgreSQL).
    • AWS SCT (Schema Conversion Tool) is used to convert the schema (tables, views, stored procedures) from the source engine to the target engine.
    • AWS DMS is then used to migrate the actual data (see the sketch after this list).
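
After SCT has converted the schema on the Aurora PostgreSQL target, the data migration itself is a DMS replication task. A minimal boto3 sketch, with all ARNs as placeholders:

```python
import boto3
import json

dms = boto3.client("dms")

# Migrate the data once the schema has been converted with SCT.
# All ARNs below are hypothetical placeholders.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # full load plus ongoing change data capture
    TableMappings=json.dumps({
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }
        ]
    }),
)
```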

Question 8: Transit Gateway & Cross-Account Access

Scenario:
A large enterprise has 50 VPCs spread across different AWS accounts in the same Region. They want to establish full mesh connectivity between all VPCs to allow applications to communicate. The solution must be centrally managed and scalable.

What is the MOST efficient solution?

  • (A) Set up VPC Peering between all 50 VPCs.
  • (B) Create a Transit Gateway in a central Network account. Share it with other accounts using AWS RAM. Attach all VPCs to the Transit Gateway.
  • (C) Create a Shared VPC and deploy all subnets into that single VPC across accounts.
  • (D) Use a VPN CloudHub configuration with a Virtual Private Gateway in every VPC.

Answer: (B)

Explanation:

  • Option B is correct. AWS Transit Gateway is designed to connect thousands of VPCs. By using AWS Resource Access Manager (RAM), you can share the Transit Gateway across accounts, creating a "hub-and-spoke" topology that behaves like a full mesh. A setup sketch follows this list.
  • Option A requires n(n-1)/2 peering connections. For 50 VPCs, that is 1,225 connections, which is unmanageable.
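
In boto3 terms, the Option B setup looks roughly like this. The spoke account ID, VPC ID, and subnet ID are placeholders, and in practice the VPC attachment is made from each spoke account after it accepts the RAM share.

```python
import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# 1) In the central Network account: create the hub Transit Gateway.
tgw = ec2.create_transit_gateway(Description="central-hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]
tgw_arn = tgw["TransitGateway"]["TransitGatewayArn"]

# 2) Share the Transit Gateway with the other accounts via AWS RAM.
ram.create_resource_share(
    name="tgw-share",
    resourceArns=[tgw_arn],
    principals=["222233334444"],  # repeat or list each spoke account
)

# 3) Each spoke account then attaches its VPC to the shared Transit Gateway.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0spoke1234",
    SubnetIds=["subnet-0spoke5678"],
)
```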

Question 9: RDS High Availability vs Read Scaling

Scenario:
An application uses an Amazon RDS for MySQL database. The database is experiencing high CPU usage due to a significant increase in read-heavy analytics queries. The application also requires automatic failover in case the primary database instance crashes.

Which steps should the Solutions Architect take to resolve the performance issue and meet the availability requirement?

  • (A) Enable Multi-AZ deployment. Direct the analytics queries to the standby instance.
  • (B) Create an RDS Read Replica. Configure the application to direct read traffic to the Read Replica. Enable Multi-AZ on the primary instance.
  • (C) Upgrade the instance type to a Memory Optimized instance. Enable Multi-AZ.
  • (D) Use Amazon ElastiCache to cache the analytics queries. Enable Multi-AZ.

Answer: (B)

Explanation:

  • Option B addresses both requirements separately; a sketch follows this list.
    • Read Replicas are used to scale read traffic. Offloading the analytics queries to the replica reduces CPU load on the primary.
    • Multi-AZ is used strictly for High Availability / Disaster Recovery. The standby instance in a standard Multi-AZ deployment cannot accept traffic.
  • Option A is incorrect because you cannot read from the standby instance in a standard RDS Multi-AZ setup.
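
A minimal boto3 sketch of Option B, with placeholder DB instance identifiers:

```python
import boto3

rds = boto3.client("rds")

# 1) Scale reads: create a Read Replica and point analytics queries at its endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-analytics-replica",
    SourceDBInstanceIdentifier="orders-db",
)

# 2) High availability: enable Multi-AZ on the primary for automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    MultiAZ=True,
    ApplyImmediately=True,
)
```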

Question 10: Security - Instance Profiles vs Roles

Scenario:
An application running on an EC2 instance needs to put objects into an S3 bucket. The security team mandates that no long-term credentials (access keys/secret keys) should be stored on the instance.

How should the Solutions Architect configure access?

  • (A) Run aws configure on the EC2 instance and input an IAM User's Access Keys.
  • (B) Create an IAM Role with permissions to write to the S3 bucket. Attach the role to the EC2 instance using an Instance Profile.
  • (C) Store the IAM Access Keys in AWS Systems Manager Parameter Store and retrieve them at runtime.
  • (D) Use Amazon Cognito Identity Pools to exchange the EC2 instance metadata for temporary credentials.

Answer: (B)

Explanation:

  • Option B is the standard best practice. An IAM Role attached to an EC2 instance (via an Instance Profile) lets the instance obtain temporary credentials automatically through the Instance Metadata Service (IMDS), so no long-term keys are stored on disk. A setup sketch follows this list.
  • Option A stores long-term credentials on the file system (~/.aws/credentials), which violates the security mandate.
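
Here is a minimal boto3 sketch of Option B; the role name, profile name, bucket, and instance ID are placeholders.

```python
import boto3
import json

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Let EC2 assume the role on the instance's behalf.
assume_role_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
})
iam.create_role(RoleName="S3WriterRole", AssumeRolePolicyDocument=assume_role_policy)

# Grant write access to the target bucket only (least privilege).
iam.put_role_policy(
    RoleName="S3WriterRole",
    PolicyName="put-objects",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::patch-uploads/*",
        }],
    }),
)

# Wrap the role in an instance profile and attach it to the running instance.
iam.create_instance_profile(InstanceProfileName="S3WriterProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="S3WriterProfile", RoleName="S3WriterRole"
)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "S3WriterProfile"},
    InstanceId="i-0abc123456789def0",
)
```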
