DEV Community

Armando Flores

Splunk: AWS CloudWatch Log Ingestion - Part 1 - Introduction & Setup


The advent of the cloud has transformed centralized log management into an essential component of an organization's security program. While it is true that cloud service providers often offer native logging mechanisms with their solutions, said features may not be robust enough to satisfy the needs of certain organizations – particularly those with both on-premises and cloud environments. Similarly, entities with mature security programs may already possess a fine-tuned centralized logging platform such as Splunk or Elastic Stack.

The aim of this series is to provide meaningful insights into feeding AWS CloudWatch logs to Splunk. These articles will cover the following ingest mechanisms: the Splunk Add-On for AWS, AWS Lambda functions using the “splunk-cloudwatch-logs-processor” blueprint, and Kinesis Data Firehose. I will attempt to be as clear and detailed as practically possible. However, please note that this is neither a comprehensive nor an exhaustive guide to Splunk or AWS. A degree of familiarity with each of these platforms is assumed, and links to relevant resources will be included for further reading.

Initial Considerations & Testing Environment

First and foremost, I strongly advise against using a production environment for testing purposes. The entirety of my research was conducted using a VM for my single-instance Splunk deployment and a “non-prod” AWS VPC. The daily indexing capacity (1 GB) provided by the Splunk Enterprise trial license is more than sufficient for the scope of this exercise.
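Once the VM is provisioned, a single-instance deployment can be brought up in a couple of commands. This is a minimal sketch assuming a standard Linux tarball install extracted to /opt/splunk (the default path); adjust for your own install method:

```shell
# Start Splunk for the first time, accepting the license non-interactively.
# On first start you will be prompted to set an admin username and password.
sudo /opt/splunk/bin/splunk start --accept-license --answer-yes

# Confirm that splunkd is running before moving on.
sudo /opt/splunk/bin/splunk status
```

If the status command reports that splunkd is running, the web interface should be reachable on port 8000 of the VM.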


  • Respect all applicable EULAs.
  • Exercise caution when creating and/or updating firewall policies. Keep in mind that certain firewalls and ACLs may not be stateful and use the defense in depth principle whenever possible.
  • Be mindful when creating new users, roles, and/or access policies. Use the least privilege principle whenever possible.

Testing Environment

  • An AWS account dedicated to testing our configurations. Please note that you will need root or near-root privileges to the AWS account in order to create: new IAM users, roles, and policies; EC2 instances; Lambda functions; Kinesis Firehose delivery streams; CloudWatch log groups; CloudWatch log group subscriptions; VPCs; and any other necessary components.
  • A VM for our single-instance Splunk deployment using a Splunk Enterprise free trial license. Two CPU cores, 4 GB of memory, and 30 GB of storage should provide an adequate performance baseline. You may want to consider allocating more system resources to your Splunk VM if you intend to use it in future projects. In a nutshell, additional CPU cores allow for greater search concurrency, more system memory improves the performance of large and complex searches, and extra storage provides for longer retention periods.
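As a quick sanity check before provisioning anything, the AWS CLI can confirm which identity you are operating as and dry-run a few of the permissions listed above. This sketch assumes AWS CLI v2 is installed and configured with credentials for the test account; the account ID and user name in the ARN are placeholders:

```shell
# Show the identity the AWS CLI is currently authenticated as.
aws sts get-caller-identity

# Simulate whether that identity holds a sample of the permissions
# this series relies on. Substitute your own user or role ARN.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/splunk-lab-admin \
  --action-names iam:CreateRole lambda:CreateFunction \
                 firehose:CreateDeliveryStream logs:PutSubscriptionFilter
```

Each action in the simulation output should report an `EvalDecision` of `allowed`; anything else means your account privileges need to be raised before continuing.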


Part 2 of this guide should be coming out soon. In the meantime, please ensure that you have completed the setup tasks above so that you will be able to follow along.
