Amazon EMR (Elastic MapReduce) Cheat Sheet

A quick-reference guide to Amazon EMR for the AWS Certified Data Engineer - Associate (DEA-C01) exam.

Core Concepts and Building Blocks

Amazon EMR (Elastic MapReduce) is a cloud-based big data platform for processing and analyzing vast amounts of data using open-source tools such as Apache Hadoop, Apache Spark, HBase, Presto, and others.

Key Components:

  • Clusters: Collection of EC2 instances running open-source applications
  • Nodes: Individual EC2 instances in a cluster (Master, Core, Task)
  • Steps: Work units submitted to a cluster
  • EMRFS: EMR File System for accessing S3 data directly
  • Instance Groups/Fleets: Ways to manage cluster capacity
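
A minimal boto3 sketch of how these pieces fit together when launching a cluster on EC2; the release label, subnet ID, bucket, and role names below are placeholders, not prescribed values:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a small cluster: one master node and two core nodes,
# with logs persisted to S3 (EMRFS). All names/IDs are placeholders.
response = emr.run_job_flow(
    Name="example-cluster",
    ReleaseLabel="emr-6.9.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    LogUri="s3://my-bucket/emr-logs/",
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "Ec2SubnetId": "subnet-0123456789abcdef0",
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # EC2 instance profile
    ServiceRole="EMR_DefaultRole",      # EMR service role
)
print(response["JobFlowId"])  # cluster ID, e.g. j-XXXXXXXXXXXXX
```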

EMR Architecture Mind Map

```
                                    ┌─────────────┐
                                    │ EMR Cluster │
                                    └──────┬──────┘
                ┌──────────────────┬───────┴───────┬───────────────────┐
        ┌───────▼───────┐  ┌───────▼───────┐  ┌────▼────┐       ┌──────▼──────┐
        │  Master Node  │  │   Core Node   │  │Task Node│       │ EMR Services│
        └───────┬───────┘  └───────┬───────┘  └────┬────┘       └──────┬──────┘
                │                  │               │                   │
    ┌───────────▼───────────┐      │               │          ┌────────▼────────┐
    │Resource Management    │      │               │          │EMR Notebooks    │
    │Application Management │      │               │          │EMR Studio       │
    │Cluster Management     │      │               │          │EMR Serverless   │
    └───────────────────────┘      │               │          │EMR on EKS       │
                                   │               │          └─────────────────┘
                           ┌───────▼───────┐       │
                           │ HDFS Storage  │       │
                           │Data Processing│       │
                           └───────────────┘       │
                                                   │
                                           ┌───────▼───────┐
                                           │ Additional    │
                                           │ Processing    │
                                           │ Capacity      │
                                           └───────────────┘
```

EMR Features and Details

| Feature | Description |
| --- | --- |
| Deployment Options | EMR on EC2, EMR Serverless, EMR on EKS, EMR on Outposts |
| Release Types | Amazon EMR releases (e.g., 6.9.0) bundle specific versions of open-source applications |
| Storage Options | HDFS, EMRFS (S3), local file system, EBS volumes |
| Security | IAM roles, security groups, encryption (at-rest, in-transit), Kerberos |
| Scaling | Manual scaling, automatic scaling, managed scaling |
| Purchasing Options | On-Demand, Reserved, Spot instances |
| Networking | Public subnet, private subnet with NAT gateway |
| Monitoring | CloudWatch, Ganglia, Hadoop web interfaces |
| Integration | AWS Glue Data Catalog, Lake Formation, Step Functions |
| Cost Management | Instance fleets, Spot instances, auto-termination |

Node Types Comparison

| Node Type | Purpose | Minimum Required | Scaling | Data Loss Risk |
| --- | --- | --- | --- | --- |
| Master Node | Manages cluster, runs primary components | 1 (or 3 with multi-master HA) | Cannot scale | High (single point of failure unless multi-master) |
| Core Node | Runs tasks and stores data on HDFS | 1+ recommended | Can scale up/down (carefully) | High (HDFS data loss) |
| Task Node | Runs tasks only, no HDFS | 0+ (optional) | Can scale freely | None (no data storage) |

Instance Group vs Instance Fleet

| Feature | Instance Groups | Instance Fleets |
| --- | --- | --- |
| Instance Types | Same type per group | Multiple types per fleet |
| Purchasing Options | On-Demand or Spot per group | Mix of On-Demand and Spot |
| Capacity Strategy | Manual allocation | Target capacity in units |
| Availability Zones | Single AZ per group | Multiple AZs supported |
| Spot Behavior | Limited flexibility | Advanced allocation strategies |
| Complexity | Simpler to configure | More complex but flexible |
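
As a sketch of what that flexibility looks like in practice, an instance fleet definition (passed under `Instances.InstanceFleets` in the EMR API) might resemble the following; the instance types, capacities, and timeout values are illustrative:

```python
# Hypothetical core fleet: capacity is measured in units (WeightedCapacity),
# mixing two instance types and On-Demand plus Spot purchasing.
core_fleet = {
    "Name": "core-fleet",
    "InstanceFleetType": "CORE",
    "TargetOnDemandCapacity": 4,
    "TargetSpotCapacity": 8,
    "InstanceTypeConfigs": [
        {"InstanceType": "m5.xlarge", "WeightedCapacity": 4},
        {"InstanceType": "r5.xlarge", "WeightedCapacity": 4},
    ],
    "LaunchSpecifications": {
        "SpotSpecification": {
            "TimeoutDurationMinutes": 10,  # fall back if Spot is unavailable
            "TimeoutAction": "SWITCH_TO_ON_DEMAND",
            "AllocationStrategy": "capacity-optimized",
        }
    },
}
```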

Important EMR Pointers

  1. EMR clusters can be launched in minutes and scaled to process petabytes of data.
  2. Master node runs the primary components like ResourceManager, NameNode, and application clients.
  3. Core nodes both store data (HDFS) and run tasks; removing core nodes risks data loss.
  4. Task nodes only run tasks and don't store HDFS data; they can be added/removed freely.
  5. EMR supports multiple open-source frameworks including Hadoop, Spark, HBase, Presto, and Hive.
  6. EMR releases are versioned (e.g., EMR 6.9.0) and include specific versions of open-source applications.
  7. EMRFS allows EMR clusters to directly use S3 as a data layer without loading into HDFS first.
  8. S3DistCp utility helps efficiently move large amounts of data between HDFS and S3.
  9. EMR clusters can be long-running or transient (auto-terminating after tasks complete).
  10. Steps are work units that contain instructions for data processing (e.g., a Spark job); a submission sketch follows this list.
  11. EMR Studio provides managed Jupyter notebooks for interactive data analysis.
  12. EMR Serverless eliminates the need to configure clusters for Spark and Hive applications.
  13. EMR on EKS allows running big data frameworks on Amazon EKS clusters.
  14. Instance fleets provide more flexibility than instance groups for mixed instance types.
  15. Spot instances can reduce costs by up to 90% but may be terminated with short notice.
  16. Managed scaling automatically adjusts cluster capacity based on workload.
  17. Maximum limit of 500 instances per cluster (can be increased via support ticket).
  18. Bootstrap actions run custom scripts during cluster setup before applications start.
  19. Security configurations provide encryption options for data at rest and in transit.
  20. EMR supports VPC private subnets with NAT gateway for enhanced security.
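
To make pointer 10 concrete, here is a boto3 sketch that submits a Spark step to a running cluster; the cluster ID and script location are placeholders:

```python
import boto3

emr = boto3.client("emr")

# Submit a Spark step to a running cluster. ActionOnFailure can also be
# TERMINATE_CLUSTER or CANCEL_AND_WAIT depending on desired behavior.
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",
    Steps=[{
        "Name": "nightly-aggregation",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://my-bucket/jobs/aggregate.py"],
        },
    }],
)
```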

EMR Storage Options

  1. HDFS provides distributed storage across cluster nodes with data replication.
  2. EMRFS extends Hadoop to directly access S3 data with consistency checking.
  3. Local file system on EC2 instances provides temporary storage that's lost when instances terminate.
  4. EBS volumes attached to EC2 instances provide additional storage capacity.
  5. S3 is recommended for persistent storage as it survives cluster termination.
  6. HDFS replication factor defaults to 3 but can be configured based on needs.
  7. S3 data access from EMR can use EMRFS (s3://) or s3a:// protocol.
  8. EMR File System (EMRFS) previously offered a consistent view to track S3 object consistency; this is unnecessary on current releases because S3 is now strongly consistent.
  9. Local disk encryption can be enabled for HDFS and local file system.
  10. EMR supports SSE-S3, SSE-KMS, and CSE-KMS for S3 data encryption.
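
A short PySpark sketch of reading and writing S3 through EMRFS (points 2 and 5); the bucket and paths are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("emrfs-example").getOrCreate()

# On EMR, s3:// paths resolve through EMRFS, so there is no HDFS load step.
df = spark.read.parquet("s3://my-bucket/raw/events/")

# Write results back to S3 so they survive cluster termination.
df.filter("event_type = 'purchase'") \
  .write.mode("overwrite") \
  .parquet("s3://my-bucket/curated/purchases/")
```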

Performance Optimization

  1. Use instance types optimized for your workload (compute, memory, or I/O intensive).
  2. Prefer r5d.2xlarge or similar instances with local SSDs for better performance.
  3. Configure appropriate EBS volumes for I/O-intensive workloads.
  4. Use EMRFS S3-optimized committer for better Spark write performance to S3.
  5. Avoid EMRFS consistent view on older releases unless genuinely needed; it adds overhead and is unnecessary now that S3 is strongly consistent.
  6. Use appropriate compression codecs (Snappy for processing, Gzip for storage).
  7. Partition data properly to enable partition pruning during queries.
  8. Use columnar formats like Parquet or ORC for analytical workloads.
  9. Configure Spark executor memory and cores based on instance size.
  10. Use dynamic allocation in Spark to efficiently utilize resources.
  11. Tune the number of partitions in Spark based on cluster size and data volume.
  12. Broadcast small tables in Spark joins to reduce shuffle operations.
  13. Use EMR instance fleets to optimize for both cost and availability.
  14. Separate storage from compute by using S3 instead of HDFS when possible.
  15. Use appropriate serialization formats (e.g., Kryo for Spark).
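
A PySpark sketch wiring up several of the knobs above (Kryo serialization, dynamic allocation, partition count, broadcast joins); the values shown are illustrative and should be tuned to your instance sizes and data volume:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

# Illustrative tuning values only.
spark = (
    SparkSession.builder.appName("tuned-job")
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.sql.shuffle.partitions", "400")
    .config("spark.executor.memory", "10g")
    .config("spark.executor.cores", "4")
    .getOrCreate()
)

# Broadcast a small dimension table to avoid shuffling the large fact table.
facts = spark.read.parquet("s3://my-bucket/facts/")  # placeholder paths
dims = spark.read.parquet("s3://my-bucket/dims/")
joined = facts.join(broadcast(dims), "dim_id")
```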

Cost Optimization

  1. Use Spot instances for task nodes to reduce costs by up to 90%.
  2. Set auto-termination policies for clusters that don't need to run continuously (see the sketch after this list).
  3. Use instance fleets with multiple instance types to optimize for cost.
  4. Leverage Reserved Instances for master and core nodes in long-running clusters.
  5. Scale clusters based on workload using automatic scaling.
  6. Use EMR Managed Scaling to automatically adjust cluster size based on metrics.
  7. Choose appropriate instance sizes to avoid over-provisioning.
  8. Use S3 lifecycle policies to transition or expire EMR output data.
  9. Compress and partition data to reduce storage costs and improve query performance.
  10. Use EMR Notebooks instead of running a separate cluster for development.
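
A small boto3 sketch of item 2, attaching an auto-termination policy to a running cluster; the cluster ID is a placeholder and `IdleTimeout` is in seconds:

```python
import boto3

emr = boto3.client("emr")

# Terminate the cluster automatically after one hour of idleness.
emr.put_auto_termination_policy(
    ClusterId="j-XXXXXXXXXXXXX",
    AutoTerminationPolicy={"IdleTimeout": 3600},
)
```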

Security Best Practices

  1. Use IAM roles to control access to EMR clusters and other AWS resources.
  2. Configure security groups to restrict network access to clusters.
  3. Enable encryption in transit using TLS for all communications.
  4. Enable encryption at rest for HDFS, EBS volumes, and S3 data.
  5. Use Kerberos or LDAP for user authentication on long-running clusters.
  6. Implement Apache Ranger for fine-grained access control.
  7. Launch clusters in private subnets with NAT gateway for enhanced security.
  8. Use block public access settings for S3 buckets used with EMR.
  9. Enable logging and auditing for EMR operations.
  10. Rotate credentials and keys regularly for long-running clusters.
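
A minimal sketch of a security configuration covering at-rest encryption (items 3 and 4; in-transit TLS would additionally require a certificate provider); the KMS key ARN is a placeholder:

```python
import json
import boto3

emr = boto3.client("emr")

# At-rest encryption for S3 (SSE-KMS) and local disks. The KMS key ARN
# is a placeholder for your own key.
config = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": False,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            "S3EncryptionConfiguration": {
                "EncryptionMode": "SSE-KMS",
                "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
            },
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "AwsKms",
                "AwsKmsKey": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",
            },
        },
    }
}
emr.create_security_configuration(
    Name="emr-at-rest-encryption",
    SecurityConfiguration=json.dumps(config),
)
```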

Monitoring and Troubleshooting

  1. Use CloudWatch metrics to monitor cluster and application performance.
  2. Enable S3 access logging to track data access patterns.
  3. Configure EMR to send logs to S3 for persistent storage.
  4. Use Ganglia for real-time cluster monitoring.
  5. Access Hadoop web interfaces via SSH tunneling for detailed monitoring.
  6. Check for HDFS space issues when jobs fail with "No space left on device" errors.
  7. Monitor garbage collection in Spark applications for memory-related issues.
  8. Use EMR step failure action to determine cluster behavior on step failure.
  9. Check bootstrap action logs in S3 for cluster startup issues.
  10. Monitor instance state changes for Spot instance terminations.

Important CloudWatch Metrics for EMR

| Metric | Description | Threshold Recommendation |
| --- | --- | --- |
| IsIdle | Indicates whether the cluster is idle | Idle >30 minutes may indicate underutilization |
| HDFSUtilization | Percentage of HDFS storage used | Alert at >80% |
| MRTotalNodes | Total nodes in the cluster | Monitor for unexpected changes |
| MRActiveNodes | Number of active nodes | Should match expected cluster size |
| MRLostNodes | Number of lost nodes | Alert if >0 |
| MemoryAvailableMB | Available memory | Alert if consistently low |
| YARNMemoryAvailablePercentage | Available YARN memory percentage | Alert if <15% |
| ContainerAllocated | Number of containers allocated | Monitor for resource utilization |
| AppsCompleted | Number of completed applications | Track job completion rates |
| AppsFailed | Number of failed applications | Alert if >0 |
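
A boto3 sketch of alarming on one of these metrics; the cluster ID and SNS topic ARN are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when available YARN memory stays below 15% (see table above).
cloudwatch.put_metric_alarm(
    AlarmName="emr-yarn-memory-low",
    Namespace="AWS/ElasticMapReduce",
    MetricName="YARNMemoryAvailablePercentage",
    Dimensions=[{"Name": "JobFlowId", "Value": "j-XXXXXXXXXXXXX"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=15,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:emr-alerts"],
)
```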

Open Source Components in EMR

  1. EMR uses Apache Hadoop ecosystem components with AWS optimizations for better performance.
  2. EMR's Hadoop distribution includes performance improvements for S3 integration.
  3. EMR Spark includes optimizations for S3 access and improved shuffle performance.
  4. EMR Hive includes predicate pushdown optimizations for S3 Select.
  5. EMR HBase can use S3 as a storage layer through EMRFS.
  6. EMR Presto includes AWS Glue Data Catalog integration for metastore.
  7. EMR supports Jupyter notebooks through EMR Notebooks/Studio with SparkMagic.
  8. EMR includes Hadoop ecosystem tools like Pig, Tez, Phoenix, and Zeppelin.
  9. EMR Flink provides stream processing capabilities with Kinesis integration.
  10. EMR supports custom AMIs for specialized requirements.

Data Ingestion and Processing

  1. Use Spark Streaming or Flink on EMR for real-time data processing from Kinesis.
  2. EMR can read directly from DynamoDB using EMR DynamoDB connector.
  3. Use AWS Glue Data Catalog as a metastore for Hive, Spark, and Presto.
  4. Implement throttling in Spark Streaming by adjusting batch intervals and parallelism.
  5. Kinesis shard limits apply when EMR reads from or writes to a stream: 1MB/sec write and 2MB/sec read per shard.
  6. S3 supports at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix.
  7. Implement exponential backoff for rate limit handling in EMR applications.
  8. Use Kafka on EMR for buffering high-volume data streams before processing.
  9. EMR can process data from Amazon MSK (Managed Streaming for Apache Kafka), as sketched after this list.
  10. Use AWS Glue ETL jobs for lightweight transformations and EMR for complex processing.
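
A PySpark Structured Streaming sketch of reading from Kafka/MSK and landing results in S3 (items 1, 8, and 9); the bootstrap servers, topic, and paths are placeholders, and the Spark Kafka connector must be available on the cluster:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("msk-stream").getOrCreate()

# Read a stream from a Kafka/MSK topic (placeholder broker and topic).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers",
            "b-1.example.kafka.us-east-1.amazonaws.com:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Land the raw messages in S3 as Parquet, with checkpointing for recovery.
query = (
    events.selectExpr("CAST(value AS STRING) AS json")
    .writeStream.format("parquet")
    .option("path", "s3://my-bucket/streaming-output/")
    .option("checkpointLocation", "s3://my-bucket/checkpoints/clickstream/")
    .start()
)
```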

Data Pipeline Replayability

  1. Store raw data in S3 to enable reprocessing when needed.
  2. Use checkpointing in Spark Streaming to resume processing after failures.
  3. Implement idempotent operations to safely reprocess data without duplicates.
  4. Use S3 versioning to maintain historical versions of processed data.
  5. Store processing state in DynamoDB to track progress and enable resumption.
  6. Implement dead-letter queues for messages that fail processing.
  7. Use Kinesis enhanced fan-out for high-throughput consumers from EMR.
  8. Configure appropriate Kinesis data retention (up to 365 days) for replay capability.
  9. Use EMR step concurrency to process multiple workloads simultaneously.
  10. Implement data quality checks before and after processing.
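
A sketch combining items 2 and 3: checkpointing plus an idempotent `foreachBatch` writer, so replayed batches overwrite their own output rather than appending duplicates. The built-in rate source stands in for a real Kinesis/Kafka stream, and the S3 paths are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("replayable-stream").getOrCreate()

def write_batch(batch_df, batch_id):
    # Deterministic output path per micro-batch: replaying batch N after a
    # failure overwrites its own partition instead of duplicating data.
    batch_df.write.mode("overwrite").parquet(
        f"s3://my-bucket/output/batch_id={batch_id}/"
    )

# The built-in rate source generates test rows at a fixed rate.
stream = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

query = (
    stream.writeStream.foreachBatch(write_batch)
    .option("checkpointLocation", "s3://my-bucket/checkpoints/replayable/")
    .start()
)
```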

EMR Serverless

  1. EMR Serverless automatically provisions and manages compute capacity.
  2. Supports Apache Spark and Apache Hive applications without managing clusters.
  3. Pre-initialized capacity reduces job startup time for frequent workloads.
  4. Maximum concurrency limit of 100 applications per AWS account.
  5. Default worker timeout is 15 minutes of inactivity.
  6. Maximum worker capacity is 400 vCPUs per application (can be increased).
  7. Supports job runs with up to 30 days maximum duration.
  8. Provides automatic scaling based on workload requirements.
  9. Integrates with AWS Glue Data Catalog for table metadata.
  10. Supports S3 and JDBC data sources but not HDFS.
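
A boto3 sketch of submitting a Spark job run to an existing EMR Serverless application; the application ID, role ARN, and entry point are placeholders:

```python
import boto3

serverless = boto3.client("emr-serverless")

# Start a Spark job run on an existing EMR Serverless application.
run = serverless.start_job_run(
    applicationId="00abc123def456",
    executionRoleArn="arn:aws:iam::111122223333:role/EMRServerlessJobRole",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/jobs/aggregate.py",
            "sparkSubmitParameters": "--conf spark.executor.memory=4g",
        }
    },
)
print(run["jobRunId"])
```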

EMR on EKS

  1. Allows running Spark jobs on EKS clusters without provisioning EMR clusters.
  2. Uses Kubernetes Operators to manage Spark application lifecycle.
  3. Supports virtual clusters that map to Kubernetes namespaces.
  4. Maximum of 100 virtual clusters per AWS account.
  5. Integrates with AWS Glue Data Catalog for table metadata.
  6. Supports both job runs and interactive endpoints for development.
  7. Uses pod templates for customizing Spark executor environments.
  8. Enables sharing EKS clusters between EMR and other applications.
  9. Supports auto-scaling of Kubernetes nodes based on demand.
  10. Provides monitoring through CloudWatch Container Insights.
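
A boto3 sketch of submitting a Spark job to a virtual cluster via the `emr-containers` API; the IDs, role ARN, and entry point are placeholders:

```python
import boto3

containers = boto3.client("emr-containers")

# Submit a Spark job to a virtual cluster (a Kubernetes namespace mapping).
run = containers.start_job_run(
    name="spark-on-eks-example",
    virtualClusterId="abcdef1234567890",
    executionRoleArn="arn:aws:iam::111122223333:role/EMRContainersJobRole",
    releaseLabel="emr-6.9.0-latest",
    jobDriver={
        "sparkSubmitJobDriver": {
            "entryPoint": "s3://my-bucket/jobs/aggregate.py",
        }
    },
)
print(run["id"])  # job run ID
```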

EMR Studio

  1. Provides web-based IDE for notebook development with EMR clusters.
  2. Supports Jupyter, JupyterLab, and Apache Zeppelin notebooks.
  3. Enables collaboration through notebook sharing and version control.
  4. Integrates with Git repositories for source control.
  5. Supports SQL Explorer for interactive SQL queries.
  6. Provides workspace isolation for different teams or projects.
  7. Enables fine-grained access control through IAM and Lake Formation.
  8. Supports connecting to multiple EMR clusters from a single workspace.
  9. Provides pre-installed Python libraries and kernels.
  10. Enables SQL, PySpark, Scala, and SparkR development.

Throughput and Latency Characteristics

  1. S3 provides virtually unlimited throughput by partitioning data across prefixes.
  2. Kinesis Data Streams provides 1MB/sec write and 2MB/sec read per shard.
  3. EMR Spark Streaming typical latency is seconds to minutes depending on batch interval.
  4. EMR Flink can achieve sub-second latency for stream processing.
  5. DynamoDB throughput is limited by provisioned capacity or on-demand limits.
  6. Kafka on EMR can handle millions of records per second with proper configuration.
  7. S3 Select can reduce query latency by up to 80% by pushing filtering to S3.
  8. EMR on EC2 provides lower latency than EMR Serverless for interactive workloads.
  9. Network throughput varies by EC2 instance type (up to 100 Gbps for some types).
  10. EMRFS read/write performance depends on S3 throughput and request rates.

Advanced EMR Features

  1. EMR WAL (Write Ahead Logging) enables HBase recovery after cluster termination.
  2. EMR supports custom AMIs for specialized software requirements.
  3. EMR supports LDAP integration for user authentication.
  4. EMR supports direct connection to on-premises data sources through Direct Connect.
  5. EMR supports running Docker containers through YARN.
  6. EMR supports GPU instances for machine learning workloads.
  7. EMR supports Spot instance fleets with allocation strategy for cost optimization.
  8. EMR supports running multiple master nodes for high availability.
  9. EMR supports custom security configurations for encryption and authentication.
  10. EMR supports running on AWS Outposts for on-premises deployments.

EMR Integration with AWS Services

  1. AWS Glue Data Catalog provides a central metadata repository for EMR applications.
  2. AWS Lake Formation provides fine-grained access control for data lakes.
  3. Amazon Redshift Spectrum can query data in S3 processed by EMR.
  4. AWS Step Functions can orchestrate EMR workflows.
  5. Amazon EventBridge can trigger EMR workflows based on events.
  6. AWS Lambda can submit EMR steps or create clusters.
  7. Amazon QuickSight can visualize data processed by EMR.
  8. AWS CloudFormation can automate EMR cluster provisioning.
  9. AWS Service Catalog can provide self-service EMR cluster templates.
  10. Amazon SageMaker can use data processed by EMR for machine learning.

Example Calculations

  1. EMR Cluster Size Calculation: For processing 1TB of data with a compression ratio of 3:1 and 3x HDFS replication: 1TB ÷ 3 × 3 = 1TB of HDFS capacity needed.
  2. Spark Memory Calculation: For a node with 32GB RAM: 32GB × 0.9 (overhead) × 0.6 (Spark default) = ~17GB available for Spark executors.
  3. Kinesis Shard Calculation: For ingesting 50MB/sec: 50MB/sec ÷ 1MB/sec per shard = 50 shards needed.
  4. Cost Calculation: 10-node EMR cluster with m5.2xlarge (8 vCPU, 32GB RAM) at $0.384/hour = $0.384 × 10 × 24 = $92.16/day.
  5. Spot Savings Calculation: With Spot instances at 70% discount: $92.16 × 0.3 = $27.65/day (saving $64.51/day).
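
The same arithmetic as a quick Python check:

```python
# Reproducing the worked examples above.
hdfs_tb = 1 / 3 * 3              # 1 TB raw, 3:1 compression, 3x replication
spark_gb = 32 * 0.9 * 0.6        # ~17.3 GB usable Spark executor memory
shards = 50 / 1                  # 50 MB/s ingest at 1 MB/s write per shard
on_demand_day = 0.384 * 10 * 24  # $92.16/day for 10 x m5.2xlarge
spot_day = on_demand_day * 0.3   # $27.65/day at a 70% Spot discount
print(hdfs_tb, spark_gb, shards, on_demand_day, spot_day)
```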

Data Processing Patterns

  1. Use EMR for ETL processing of large datasets before loading into data warehouses.
  2. Implement lambda architecture with batch processing (EMR) and stream processing (Kinesis).
  3. Use EMR for data lake processing with S3 as the storage layer.
  4. Implement medallion architecture (bronze, silver, gold) for progressive data refinement.
  5. Use EMR for machine learning feature engineering at scale.
  6. Implement data quality validation using EMR before downstream consumption.
  7. Use EMR for regular data aggregation and reporting generation.
  8. Implement slowly changing dimension processing using EMR and Hive/Spark.
  9. Use EMR for sessionization and user behavior analysis of web logs.
  10. Implement data archiving and compliance workflows using EMR.

Limits and Quotas

  1. Maximum of 500 instances per EMR cluster (can be increased via support).
  2. Maximum of 100 steps per cluster (can be increased via support).
  3. Maximum of 256MB total size for all bootstrap action scripts.
  4. Maximum of 10,000 concurrent step executions per AWS region.
  5. Maximum of 500 active clusters per AWS region.
  6. Maximum of 100 instance groups per cluster.
  7. Maximum of 30 task instance groups per cluster.
  8. Maximum of 50 tags per EMR cluster.
  9. Maximum of 100 EMR Serverless applications per AWS account.
  10. Maximum of 100 EMR Studio workspaces per AWS account.

Best Practices for EMR Exam

  1. Understand the differences between EMR deployment options (EC2, Serverless, EKS).
  2. Know the node types (master, core, task) and their specific purposes.
  3. Understand storage options (HDFS, EMRFS, local) and when to use each.
  4. Be familiar with instance purchasing options (On-Demand, Reserved, Spot).
  5. Know how to optimize costs using instance fleets and Spot instances.
  6. Understand security configurations including encryption and authentication.
  7. Know how EMR integrates with other AWS services like Glue and Lake Formation.
  8. Understand monitoring options including CloudWatch metrics and logs.
  9. Know common troubleshooting approaches for EMR clusters and applications.
  10. Understand data processing patterns and best practices for big data workloads.
