Supratip Banerjee for AWS Community Builders

Event Streaming and AWS Kinesis

What is an Event?

• In virtually every market today, business activity is driven by events
• Oftentimes, they're tied to actions that affect how a business operates or how a user navigates through a process
• For example, when a business transaction takes place, such as a customer placing an order or a deposit being added to a bank account, that event drives the next step. With customers expecting responsive experiences when they interact with companies, being able to make real-time decisions based on an event becomes critical.

What Is Event Streaming? Event-Driven Architecture

[Image: event-driven architecture]

• One of the inherent challenges with microservices is the coupling that can occur between the services. In a conventional "ask, don't tell" architecture, data is gathered on demand. Service A asks Services B, C, and D, "What's your current state?" This assumes B, C, and D are always available to respond. However, if they have moved or are offline, they can't respond to Service A's query.
• To compensate, microservice architectures tend to include workarounds (such as retries) to deal with any ill effects caused by changes in deployment topology or network outages. This adds an extra layer of complexity and cost.
• Event streaming attempts to solve this problem by inverting the communication process among services. An event-driven architecture uses a "tell, don't ask" approach, in which Services B, C, and D publish continuous streams of data as events. Service A subscribes to these event streams, processing the facts, collating the results, and caching them locally. Service A only needs to act, or perform its function, when it is delivered a specific type of event (see the sketch after this list).
• Event stream processing is often viewed as complementary to batch processing. Batch processing is about taking action on a large set of static data (“data at rest”), while event stream processing is about taking action on a constant flow of data (“data in motion”).
• Event stream processing is necessary for situations where action needs to be taken as soon as possible. This is why event stream processing environments are often described as “real-time processing.”
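
As a loose illustration of the "tell, don't ask" idea, here is a minimal in-memory sketch in plain Python. The event name, payload, and `EventBus` class are invented for illustration; a real system would publish through a broker such as Kafka or Kinesis rather than an in-process bus.

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for an event broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # "Tell, don't ask": producers push facts to whoever listens.
        for handler in self.subscribers[event_type]:
            handler(payload)

bus = EventBus()

# Service A caches state locally, updated only when events arrive,
# instead of querying Service B on demand.
local_cache = {}
bus.subscribe("inventory_updated", lambda e: local_cache.update({e["sku"]: e["qty"]}))

# Service B announces its state change; it is never asked for it.
bus.publish("inventory_updated", {"sku": "ABC-123", "qty": 42})
print(local_cache)  # {'ABC-123': 42}
```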

How Does Event Stream Processing Work?

• Event stream processing works by handling a data set one data point at a time. Rather than viewing the data as a whole set, event stream processing deals with a flow of continuously created data. This requires a specialized set of technologies.
• In an event stream processing environment, there are two main classes of technologies: 1) the system that stores the events, and 2) the technology that helps developers write applications that take action on the events.
• The former component pertains to data storage and stores data based on a timestamp. For example, you might capture outside temperature every minute of the day and treat that as an event stream. Each "event" is the temperature measurement accompanied by the exact time of the measurement. This is often handled by technology such as Apache Kafka or AWS Kinesis.
• The latter (known as "stream processors" or "stream processing engines") is the true "event stream processing" component and lets you take action on the incoming data (see the sketch after this list).
• Use cases such as payment processing, fraud detection, anomaly detection, predictive maintenance, and IoT analytics all rely on immediate action on data.
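
Continuing the temperature example above, here is a minimal sketch of a stream processor acting on each event the moment it arrives. The readings, threshold, and generator source are made up for illustration; in practice the events would come from a store such as Kafka or Kinesis.

```python
from datetime import datetime, timedelta

def temperature_stream():
    # Stand-in for an event store: yields (timestamp, reading) events.
    base = datetime(2021, 1, 1)
    for minute, reading in enumerate([21.5, 22.0, 22.4, 35.1, 22.2]):
        yield base + timedelta(minutes=minute), reading

THRESHOLD = 30.0  # hypothetical alerting threshold

for ts, reading in temperature_stream():
    # Each event is handled as it flows past ("data in motion"),
    # not collected into a batch first ("data at rest").
    if reading > THRESHOLD:
        print(f"{ts.isoformat()}: anomaly, temperature {reading}")
```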

AWS Kinesis

• Kinesis is a managed alternative to Apache Kafka
• Suited for real-time big data
• Compatible with streaming frameworks such as Spark, NiFi, etc.
• Data is automatically replicated across 3 Availability Zones
• There are 3 Kinesis products:
• Kinesis Streams: low-latency streaming ingest at scale (see the producer sketch after this list)
• Kinesis Analytics: perform real-time analytics on streams using SQL
• Kinesis Firehose: load streams into other parts of AWS, such as S3, Redshift, and Elasticsearch
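
To give a feel for the producer side, here is a hedged boto3 sketch that writes one event to a Kinesis data stream. The stream name, region, and payload are hypothetical; it assumes boto3 is installed, AWS credentials are configured, and the stream already exists.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

response = kinesis.put_record(
    StreamName="orders-stream",  # hypothetical, pre-created stream
    Data=json.dumps({"order_id": "1001", "amount": 49.99}).encode("utf-8"),
    # The partition key determines which shard the record lands on.
    PartitionKey="customer-42",
)
print(response["ShardId"], response["SequenceNumber"])
```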

Kinesis services

[Image: Kinesis services overview]

Kinesis Stream

• Streams are divided into ordered shards/partitions
• Think of each shard as one ordered queue
• Producers produce events to the Kinesis stream, which has 3 shards in our example. Data may go to any of the shards, and a consumer may receive from any shard as well
• So to scale up we have to increase the number of shards: more shards, more throughput
• Data retention is 1 to 7 days (default is 1 day)
• Data can be reprocessed/replayed even after it has been consumed. This is a big difference from SQS: in SQS, once the data is consumed it is gone, but with Kinesis the data is still there (see the consumer sketch after this list)
• Multiple applications can consume the same stream. This is more of an SNS mindset: we can have one stream and many applications that subscribe to it
• This enables real-time processing at real scale of throughput, since we can add shards
• Once data is inserted into Kinesis it cannot be deleted (immutability). The data stays for the 1-7 day retention period, during which we can re-process it, before it expires
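
To illustrate the replay behaviour, here is a hedged boto3 sketch of a consumer reading a shard from the oldest retained record (`TRIM_HORIZON`), which re-delivers data even if another application has already consumed it. The stream name and region are the same hypothetical ones as in the producer sketch; a production consumer would more likely use the Kinesis Client Library.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
stream = "orders-stream"  # hypothetical stream from the producer sketch

# Read only the first shard, for brevity.
shard_id = kinesis.describe_stream(StreamName=stream)[
    "StreamDescription"]["Shards"][0]["ShardId"]

iterator = kinesis.get_shard_iterator(
    StreamName=stream,
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",  # start at the oldest retained record
)["ShardIterator"]

for record in kinesis.get_records(ShardIterator=iterator, Limit=100)["Records"]:
    print(record["SequenceNumber"], record["Data"])
```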

Shard

• One stream is made of many shards
• One shard supports 1 MB/sec or 1,000 messages/sec of writes (the speed or capacity at which producers can write per shard)
• Reads: 2 MB/sec per shard
• So for 5 MB/sec of throughput, 3 shards cover the read side but the write side needs 5 shards (see the worked calculation after this list)
• Billing is per shard, and you can have as many shards as you want
• Ability to batch messages
• Records are ordered within a shard (SQS standard queues have no ordering, only FIFO queues do; SNS has no FIFO, so no ordering is guaranteed)
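
A worked version of that sizing rule, using the per-shard limits above; the 5 MB/sec target is just the example figure from the list.

```python
import math

WRITE_MB_PER_SHARD = 1  # write capacity per shard
READ_MB_PER_SHARD = 2   # read capacity per shard

def shards_needed(write_mb_per_sec, read_mb_per_sec):
    # The stream must satisfy both sides, so take the larger requirement.
    return max(
        math.ceil(write_mb_per_sec / WRITE_MB_PER_SHARD),
        math.ceil(read_mb_per_sec / READ_MB_PER_SHARD),
    )

# 5 MB/sec in and out: reads alone would need 3 shards,
# but writes need 5, so the write side dictates the count.
print(shards_needed(5, 5))  # 5
```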

Kinesis Ordering Concept

[Image: Kinesis ordering by partition key across shards]
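
The ordering guarantee comes down to partition keys mapping deterministically to shards: records that share a partition key always land on the same shard, and each shard preserves insertion order. The sketch below mimics the routing idea; Kinesis does MD5-hash the partition key into a 128-bit space divided among the shards, but the shard count and perfectly even hash ranges here are simplifying assumptions.

```python
import hashlib

NUM_SHARDS = 3
HASH_SPACE = 2 ** 128  # MD5 digests are 128-bit integers

def shard_for(partition_key: str) -> int:
    # Hash the key and map it proportionally into the shard ranges.
    h = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    return h * NUM_SHARDS // HASH_SPACE

# The same key always routes to the same shard, preserving its order.
for key in ["customer-42", "customer-42", "customer-7"]:
    print(key, "-> shard", shard_for(key))
```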

Kinesis Firehose

• Fully managed, serverless
• Loads data into Redshift/S3/Elasticsearch/Splunk
• Near real time (minimum 60 seconds latency)
• Loads a minimum of 32 MB of data at a time
• Supports many data formats, conversions, transformations, and compression
• Pay for the amount of data going through Firehose (see the sketch below)

[Image: Kinesis Firehose delivery flow]
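
As a hedged boto3 sketch of pushing a record into Firehose: the delivery stream name and payload are hypothetical, and the delivery stream is assumed to already exist with (say) an S3 destination. Firehose buffers the record and delivers it in near real time.

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",  # hypothetical delivery stream
    Record={
        # Trailing newline keeps records line-delimited in the S3 objects.
        "Data": (json.dumps({"page": "/home", "user": "42"}) + "\n").encode("utf-8")
    },
)
```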

Comparison

Now let's compare the three:

SQS:

  1. Pull data
  2. Data is deleted after consumption
  3. Many consumers
  4. No ordering (except FIFO queues)
  5. No need to provision throughput

SNS:

  1. Pushes to many subscribers
  2. Up to 10,000,000 subscribers
  3. Data is not persisted
  4. Pub/sub model
  5. No need to provision throughput

Kinesis:

  1. Pull data
  2. As many consumers as needed
  3. Data replay feature
  4. More for ETL and big data
  5. Ordering at the shard level
  6. Must provision throughput

Based on your business requirements and architecture pattern, choose the right message broker and pattern.

Thank you
