Matt Lewis for AWS Heroes


Centralising audit, compliance and incident detection

Introduction

This is the third post in a series looking at some of the core services, building blocks and approaches that will help you build out a best-practice multi-account landing zone.

In this post we will focus on some of the services that will provide security, compliance and incident detection, starting with CloudTrail.

The source code used is available in this GitHub repository.

CloudTrail

AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. For this demo, we are going to create an organization trail: a trail that logs all events for all AWS accounts in the organization and is automatically applied to every member account.

In order to set up an organization trail, you will first need to enable CloudTrail as a trusted service in your organization, otherwise you will get an error message like the following:

ERROR: Resource OrgTrail failed because Resource handler returned 
message: "Invalid request provided: The request could not be processed 
because your organization hasn't enabled CloudTrail service access. 
Enable service access for CloudTrail, and then try again.

To enable trusted access, you can run the following command using a profile in the management account:

aws organizations enable-aws-service-access --service-principal cloudtrail.amazonaws.com

Alternatively, you can set it up directly from within AWS Organizations in the management account by clicking on Services, selecting CloudTrail, and then enabling trusted access.

Enable CloudTrail
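
If you want to confirm that trusted access is in place before deploying, you can list the enabled service principals from the management account (an optional check):

aws organizations list-aws-service-access-for-organization --query 'EnabledServicePrincipals[].ServicePrincipal'

You should see cloudtrail.amazonaws.com in the output.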

Enabling trusted access automatically creates a service-linked role called AWSServiceRoleForCloudTrail in each account. This is required for CloudTrail to successfully log events for an organization.
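
If you want to verify this, the role should be visible in IAM; for example, from the management account:

aws iam get-role --role-name AWSServiceRoleForCloudTrail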

An organization trail can only be set up in the management account, and so we use the following OrganizationBinding, which applies only to this account:

  OrganizationBindings:
    OrgTrailBinding:
      IncludeMasterAccount: true

We also pass in a number of parameters to the template, including names for the S3 bucket, the organization trail and the CloudWatch log group. In addition, if we have not already done so, we store the organizationId of our AWS Organization in the organization-parameters.yml file and pass this value in along with the resource prefix.

  Parameters:
    orgBucketName: !Sub '${resourcePrefix}-orgtrail-${CurrentAccount.AccountId}'
    resourcePrefix: !Ref resourcePrefix
    organizationId: !Ref organizationId
    trailName: central-orgtrail
    logGroupName: OrgTrail/org-audit-log
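
If you have not already captured the organizationId, you can look it up from the management account and copy the value into organization-parameters.yml:

aws organizations describe-organization --query 'Organization.Id' --output text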

Now we define the CloudFormation resources in the main template itself.

Create an S3 bucket in the management account

First we need to create an S3 bucket that will receive the log files for the organization trail. We use the PublicAccessBlockConfiguration settings to block public access to the bucket, and the LifecycleConfiguration settings to store the objects only for a specific period of time. We also set up server-side encryption with S3-managed keys, and enable versioning.

  OrgTrailBucket:
    OrganizationBinding: !Ref OrgTrailBinding
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      BucketName: !Ref orgBucketName
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      LifecycleConfiguration:
        Rules:
          - ExpirationInDays: !Ref logDeletionDays
            Id: 'orgtrail-bucket-lifecycle-configuration'
            Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      VersioningConfiguration:
        Status: Enabled

This bucket needs a bucket policy that allows CloudTrail to put the log files in the bucket for the organization. The resource with organizationId allows logging for the organization trail. It also allows logging for the specific account itself in the event the trail is changed from an organization trail to a trail for that account only. The aws:SourceArn condition helps ensure that CloudTrail can write to the S3 bucket only for the specific trail.

  - Sid: AWSCloudTrailAclCheck
    Effect: Allow
    Principal:
      Service: cloudtrail.amazonaws.com
    Action: s3:GetBucketAcl
    Resource: !GetAtt OrgTrailBucket.Arn
  - Sid: AWSCloudTrailWrite
    Effect: Allow
    Principal:
      Service: cloudtrail.amazonaws.com
    Action: s3:PutObject
    Resource: 
      - !Sub '${OrgTrailBucket.Arn}/AWSLogs/${AWS::AccountId}/*'
      - !Sub '${OrgTrailBucket.Arn}/AWSLogs/${organizationId}/*'
    Condition:
      StringEquals:
        s3:x-amz-acl: bucket-owner-full-control
        aws:SourceArn: !Sub 'arn:aws:cloudtrail:eu-west-2:${AWS::AccountId}:trail/${trailName}'

We also want our organization trail to send events to a CloudWatch log group. Before we set up the trail, we need to create the CloudWatch log group and the IAM role that CloudTrail assumes to write to CloudWatch.

We start off by creating the CloudWatch log group:

  OrgTrailLogGroup:
    OrganizationBinding: !Ref OrgTrailBinding
    Type: 'AWS::Logs::LogGroup'
    Properties:
      RetentionInDays: 7
      LogGroupName: !Ref logGroupName

We then create the IAM role that will be assumed by CloudTrail. This allows CloudTrail to create a log stream in the log group specified above, and to deliver events to that log stream, both for trails in this specific AWS account and for organization trails created in this account (the management account) that are applied to the organization with the specified organizationId.

  OrgTrailLogGroupRole:
    OrganizationBinding: !Ref OrgTrailBinding
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: orgtrail-publish-to-cloudwatch-log-group
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Sid: AssumeRole1
          Effect: Allow
          Principal:
            Service: 'cloudtrail.amazonaws.com'
          Action: 'sts:AssumeRole'
      Policies:
      - PolicyName: 'cloudtrail-policy'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Sid: AWSOrgTrailCreateLogStream
            Effect: Allow
            Action:
              - 'logs:CreateLogStream'
              - 'logs:PutLogEvents'
            Resource:
              - !Sub 'arn:aws:logs:eu-west-2:${AWS::AccountId}:log-group:${logGroupName}:log-stream:${AWS::AccountId}_CloudTrail_eu-west-2*'
              - !Sub 'arn:aws:logs:eu-west-2:${AWS::AccountId}:log-group:${logGroupName}:log-stream:${organizationId}_*'

Finally, we create the trail, providing all the relevant information:

  OrgTrail:
    OrganizationBinding: !Ref OrgTrailBinding
    Type: AWS::CloudTrail::Trail
    DependsOn:
      - OrgTrailBucketPolicy
      - OrgTrailLogGroup
      - OrgTrailLogGroupRole
    Properties:
      CloudWatchLogsLogGroupArn: !GetAtt 'OrgTrailLogGroup.Arn'
      CloudWatchLogsRoleArn: !GetAtt 'OrgTrailLogGroupRole.Arn'
      EnableLogFileValidation: true
      IncludeGlobalServiceEvents: true
      IsLogging: true
      IsMultiRegionTrail: true
      IsOrganizationTrail: true
      S3BucketName: !Ref OrgTrailBucket
      TrailName: !Ref trailName

By default, trails created without specific event selectors are configured to log all read and write management events, and no data events. Data events provide insight into the resource (“data plane”) operations performed on or within a resource. They are often high-volume activities and include operations such as Amazon S3 object-level APIs and the Lambda Invoke API.

Adding the following EventSelector to the properties section would log data events for all objects in all S3 buckets in your account, with the trail logging both read and write events as well as management events.

...
Properties:
  EventSelectors:
    - DataResources:
        - Type: AWS::S3::Object
          Values:
            - 'arn:aws:s3:::'
      IncludeManagementEvents: true
      ReadWriteType: All
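
After deployment, you can check which event selectors are active on the trail from the management account; for example, using the trail name from the parameters above:

aws cloudtrail get-event-selectors --trail-name central-orgtrail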

When the pipeline runs and the organization trail is created, a trail with the given name will be created in every AWS account that belongs to the organization. Users with the relevant permissions will be able to see this trail in their member accounts, and will be able to view the event history directly in CloudTrail for their account. However, they will not be able to remove or modify the trail in any way. Any attempt to do so will show an error message like the one shown in the console below:

CloudTrail Member Access Deny
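
From a member account, you can still confirm that the organization trail is present (even though it cannot be modified there); for example:

aws cloudtrail describe-trails --query 'trailList[?IsOrganizationTrail]'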

By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that is directly manageable, you can instead use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files, but that is currently outside the scope of this blog post.

CloudWatch Alarms

In our second post, we showed how to set up a cross-account role, so that a user in the IncidentResponse group in the Security account could jump across into a production account to investigate in the case of an incident. To add an additional level of security and audit, we will set up a CloudWatch alarm to alert us whenever the elevated role has been assumed.

To start with, we define a CloudWatch alarm that will exist in the management account alongside the centralised CloudWatch logs from CloudTrail. We specify the name of the metric associated with the alarm. Statistics are metric data aggregations over specified periods of time. For this alarm, we simply sum the values of all the data points over a period of 10 seconds, and we evaluate over just 1 period. This means that if one incident occurs in a 10-second period, the alarm will be triggered.

  RoleAlarm:
    Type: AWS::CloudWatch::Alarm
    OrganizationBinding: !Ref OrgTrailBinding
    Properties:
      AlarmName: 'Security switched to Elevated Role in Prod'
      AlarmDescription: 'Alarm on usage of elevated role in the Prod account'
      MetricName: !Sub '${resourcePrefix}-switch-elevated-count'
      Namespace: OrgTrailMetrics
      Statistic: Sum
      Period: 10
      EvaluationPeriods: 1
      Threshold: 1
      TreatMissingData: notBreaching
      AlarmActions:
      - !Ref AlarmNotificationTopic
      ComparisonOperator: GreaterThanOrEqualToThreshold

Then we define the MetricFilter. This filter searches the CloudWatch log group for any events where the event name is 'SwitchRole' and the assumed role is the 'elevated-security-role'.

  ProductionSupportRoleLoginsFilter:
    Type: AWS::Logs::MetricFilter
    OrganizationBinding: !Ref OrgTrailBinding
    Properties:
      LogGroupName: !Ref logGroupName
      FilterPattern: '{($.eventName = "SwitchRole") && ($.userIdentity.arn = "arn:aws:sts::*:assumed-role/elevated-security-role/*") }'
      MetricTransformations:
      - MetricValue: '1'
        MetricNamespace: OrgTrailMetrics
        MetricName: !Sub '${resourcePrefix}-switch-elevated-count'

Finally, we define the SNS topic to which the notification will be sent if the alarm is triggered. This is set up to send an email to our root email address.

  AlarmNotificationTopic:
    Type: AWS::SNS::Topic
    OrganizationBinding: !Ref OrgTrailBinding
    Properties:
      DisplayName: !Sub 'Notifies when alarm on usage of elevated role goes off'
      TopicName: !Sub '${resourcePrefix}-switch-elevatedrole-alarm-notification'
      Subscription:
        - Endpoint: !GetAtt MasterAccount.RootEmail
          Protocol: email

We can now log into the Security account as a user in the IncidentResponse group, and switch role in the console to the elevated security role in the Production account. This will trigger our alarm, which we can see in the CloudWatch Alarms console:

CloudWatch Alarm

And when we see that this is in alarm, we should also receive an email notifying us of the use of the elevated security role in production.

CloudTrail Alarm Email
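
Because the raw CloudTrail events are also held in the centralised CloudWatch log group, you can pull out the matching SwitchRole events from the management account; a quick check, assuming the log group name used earlier:

aws logs filter-log-events \
  --log-group-name OrgTrail/org-audit-log \
  --filter-pattern '{ ($.eventName = "SwitchRole") }'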

At this point we could investigate further, viewing all the actions carried out by the user in CloudTrail, or take some other action. This gives you an idea of the capability that CloudTrail provides. Now we will move on to AWS Config.

AWS Config

AWS Config is a service that enables you to continually assess, audit, and evaluate the configurations of your AWS resources. This includes how the resources are related to one another, and how they were configured in the past and changed over time.

To start off with, we create an S3 bucket in the Log Archive account, and attach a bucket policy to it that allows the Config service to put objects into the bucket as shown here. To enable AWS Config, we must create a configuration recorder and a delivery channel. AWS Config uses the delivery channel to deliver the configuration changes to your Amazon S3 bucket.

The configuration recorder describes the AWS resource types we want to record configuration changes for. In the recording group, we specify that AWS Config will record configuration changes for every supported type of regional resource. It will also include all supported types of global resources, such as IAM.

  ConfigurationRecorder:
    Type: 'AWS::Config::ConfigurationRecorder'
    Properties:
      RecordingGroup:
        AllSupported: true
        IncludeGlobalResourceTypes: true
      RoleARN: !GetAtt ConfigurationRecorderRole.Arn

The delivery channel is used to deliver configuration information to our S3 bucket (it also supports SNS). We set it up to deliver configuration snapshots to the S3 bucket every hour.

  DeliveryChannel:
    Type: 'AWS::Config::DeliveryChannel'
    Properties:
      ConfigSnapshotDeliveryProperties:
        DeliveryFrequency: One_Hour
      S3BucketName: !Ref ConfigAuditBucket

We also configure the IAM role that Config will assume. This includes the AWSConfigRole AWS managed policy, which ensures that Config has the right permissions to get configuration details whenever a new AWS resource type is supported. The inline policy also allows Config to write the details to the S3 bucket.

  ConfigurationRecorderRole:
    Type: 'AWS::IAM::Role'
    Properties:
      ManagedPolicyArns:
      - 'arn:aws:iam::aws:policy/service-role/AWSConfigRole'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Sid: AssumeRole1
          Effect: Allow
          Principal:
            Service: 'config.amazonaws.com'
          Action: 'sts:AssumeRole'
      Policies:
      - PolicyName: 's3-policy'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action: 's3:PutObject'
            Resource: !Sub '${ConfigAuditBucket.Arn}/*'
            Condition:
              StringLike:
                's3:x-amz-acl': 'bucket-owner-full-control'
          - Effect: Allow
            Action: 's3:GetBucketAcl'
            Resource: !GetAtt ConfigAuditBucket.Arn

We can push the changes to deploy the pipeline, and AWS Config will now be enabled and rolled out across all accounts in our organization. You can go into any account and view the resources through a dashboard. However, there are currently no compliance checks as we have not defined any rules. So let's go and do that.
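
Before adding rules, you can optionally confirm from the CLI that the recorder and delivery channel are active in any given account:

aws configservice describe-configuration-recorder-status
aws configservice describe-delivery-channels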

AWS Config Managed Rules

AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices. After you activate a rule, AWS Config compares your resources to the conditions of the rule. After this initial evaluation, AWS Config continues to run evaluations each time one is triggered.

We will set up an AWS managed rule to check whether incoming SSH traffic to a security group is restricted. You can find the list of AWS Config Managed Rules here. We will use the restricted-ssh rule, which has the identifier INCOMING_SSH_DISABLED. This is set up using CloudFormation in the template below, and is rolled out to all accounts.

  SSHOrganizationConfigRule:
    Type: "AWS::Config::OrganizationConfigRule"
    Properties:
      OrganizationConfigRuleName: "OrganizationRestrictedSSH"
      OrganizationManagedRuleMetadata:
        RuleIdentifier: "INCOMING_SSH_DISABLED"
        Description: "restricted-ssh"
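
Once the rule has been deployed, you can check its rollout and compliance status per account; for example, from the management account:

aws configservice get-organization-config-rule-detailed-status \
  --organization-config-rule-name OrganizationRestrictedSSH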

Having to set up lots of individual managed rules can be tedious and error prone, so AWS also provide conformance packs, which we will take a look at now.

AWS Config Conformance Pack

A conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a Region or across an organization in AWS Organizations. Conformance packs are created by authoring a YAML template that contains the list of AWS Config managed or custom rules and remediation actions.

AWS provide a set of sample templates for conformance packs. We will use the 'Operational Best Practices for Security, Identity and Compliance Services' pack. The template is available in this GitHub repo.

The steps to enable a conformance pack are straightforward. First, we invoke a template that creates an S3 bucket in the management account.

  CompliancePackBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      BucketName: !Ref bucketName
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256

We then copy the conformance pack from GitHub to a local yml file. Next we run a task to copy this file to the S3 bucket.

CopyToS3:
  Type: copy-to-s3
  DependsOn: ConfigCompliancePackBucket
  LocalPath: ./Operational-Best-Practices-for-Security-Services.yml
  RemotePath: !Sub 's3://${resourcePrefix}-conformance-pack/security-services.yml'
  OrganizationBinding:
    IncludeMasterAccount: true
    Region: eu-west-2

Finally, we invoke a template that uses an OrganizationConformancePack resource to deploy the template in the S3 bucket to the organization.

  OrganizationConformancePack:
    Type: AWS::Config::OrganizationConformancePack
    Properties:
        OrganizationConformancePackName: SecurityServices
        TemplateS3Uri: !Ref templateURI

Once deployed, we have much richer information available to us on the compliance status of our resources.
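
You can also check the rollout status of the pack across the organization from the management account; a minimal check, assuming the pack name SecurityServices defined above:

aws configservice get-organization-conformance-pack-detailed-status \
  --organization-conformance-pack-name SecurityServices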

AWS Config Aggregator

An aggregator is an AWS Config resource type that can collect AWS Config configuration and compliance data from an organization in AWS Organizations. This will allow us to centralise all Config findings in our Security account.

To set up an aggregator, we first have to go into AWS Organizations in the management account, click on Services, select Config, and then enable trusted access.
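
The CLI equivalent, run with a profile in the management account, would be:

aws organizations enable-aws-service-access --service-principal config.amazonaws.com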

Next we define the Security Account as the delegated administrator for Config using the following command in a terminal window:

aws organizations register-delegated-administrator --account-id <security-account-id> --service-principal config.amazonaws.com
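
You can confirm the delegation has taken effect with:

aws organizations list-delegated-administrators --service-principal config.amazonaws.com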

We then define an IAM role to be assumed by the Config service. This role must have the AWSConfigRoleForOrganizations policy attached. This role is targeted at the Security account:

  ConfigAggregatorRole:
    Type: AWS::IAM::Role
    OrganizationBinding: !Ref SecurityBinding
    Properties:
      RoleName: !Sub '${resourcePrefix}-configaggregator-role'
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: config.amazonaws.com
            Action:
              - sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSConfigRoleForOrganizations'

Next we define the aggregator itself against the Security account. We use the OrganizationAggregationSource property to define that this is for an organization. In our case, we are just going to collate data for a single region.

  ConfigAggregator:
    Type: AWS::Config::ConfigurationAggregator
    OrganizationBinding: !Ref SecurityBinding
    DependsOn: ConfigAggregatorRole
    Properties:
      OrganizationAggregationSource:
        RoleArn: !GetAtt ConfigAggregatorRole.Arn
        AwsRegions:
          - !Ref aggregatorRegion
        AllAwsRegions: false
      ConfigurationAggregatorName: OrgConfigAggregator

Once deployed, the aggregator will appear in the Security account and start collecting the config data from other accounts. Now starts the hard work of ensuring your resources are compliant.

AWS Config Aggregator
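
From the Security account, you can also query the aggregated compliance data directly; for example:

aws configservice describe-aggregate-compliance-by-config-rules \
  --configuration-aggregator-name OrgConfigAggregator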

One of the rules that is marked as non-compliant is GuardDuty not being enabled, so that is what we will look at now.

Amazon GuardDuty

Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation.

We start off by configuring a new detector (an object that represents the GuardDuty service), which is required for GuardDuty to become operational. This is deployed to all accounts in the organization. The detector is set up to be enabled on creation, and will export updated findings every 15 minutes.

  Detector:
    Type: AWS::GuardDuty::Detector
    OrganizationBinding: !Ref GuardDutyAllBinding
    Properties:
      Enable: true
      FindingPublishingFrequency: FIFTEEN_MINUTES

Then we set up the AWS::GuardDuty::Master resource in each GuardDuty member account to accept an invitation from the GuardDuty administrator account, which is designated as the Security account.

  Master:
    DependsOnAccount: !Ref SecurityAccount
    Type: AWS::GuardDuty::Master
    OrganizationBinding: !Ref GuardDutyMemberBinding
    Properties:
      DetectorId: !Ref Detector
      MasterId: !Ref SecurityAccount

Finally, we set up the AWS::GuardDuty::Member resource to add an AWS account as a GuardDuty member account to the GuardDuty administrator account. This is only deployed to the Security account, but loops through all other AWS accounts and passes in their account IDs to be added as members.

  Member:
    Type: AWS::GuardDuty::Member
    OrganizationBinding: !Ref GuardDutyMasterBinding
    ForeachAccount: !Ref GuardDutyMemberBinding
    Properties:
      DetectorId: !Ref Detector
      Email: !GetAtt CurrentAccount.RootEmail
      MemberId: !Ref CurrentAccount
      Status: Invited
      DisableEmailNotification: true
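
After the pipeline has run, you can also confirm the member relationships from the CLI in the Security account; a minimal check, assuming a single detector in the region (the detector ID placeholder comes from the first command):

# find the detector ID in the Security account
aws guardduty list-detectors
# list the member accounts associated with that detector
aws guardduty list-members --detector-id <detector-id>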

Once deployed, we can look in the Accounts section of the GuardDuty console in the Security account and see all the other member accounts listed. GuardDuty findings are automatically sent to CloudWatch Events (Amazon EventBridge). Now we will look at how to send a simple notification when an incident takes place. We start off by specifying an event rule on the default Amazon EventBridge event bus.

  FindingRule:
    Type: AWS::Events::Rule
    DependsOn: FindingsTopicPolicy
    OrganizationBinding: !Ref GuardDutyMasterBinding
    Properties:
      Name: !Sub '${resourcePrefix}-guardduty-findings-rule'
      EventPattern:
        source:
          - aws.guardduty
      Targets:
        - Id: FindingsTopic
          Arn: !Ref FindingsTopic
      State: ENABLED

The rule above looks for any event published by the GuardDuty service. When a match is found, it pushes the event onto an SNS topic. You can have fun exploring different rules with event patterns in EventBridge, including using sample GuardDuty findings. You could use the pattern below to trigger a notification for a specific finding type:

  EventPattern:
    source:
      - aws.guardduty
    detail:
      type: 
        - UnauthorizedAccess:EC2/MaliciousIPCaller.Custom

You could use a pattern like the one below to trigger a notification if the severity of the finding is above a certain threshold.

  EventPattern:
    source:
      - aws.guardduty
    detail:
      severity:
        - numeric:
          - ">"
          - 6

Finally, we define an SNS topic in the CloudFormation template, which will be used to trigger an email notification when a finding is received.

  FindingsTopic:
    Type: AWS::SNS::Topic
    OrganizationBinding: !Ref GuardDutyMasterBinding
    Properties:
      DisplayName: GuardDuty Findings
      TopicName: !Sub '${resourcePrefix}-guardduty-findings-notification'
      Subscription:
        - Protocol: email
          Endpoint: !GetAtt SecurityAccount.RootEmail

We can test this out by logging in to one of the AWS accounts as the root user. This is something that should normally be avoided, and doing so will trigger a GuardDuty finding of type Policy:IAMUser/RootCredentialUsage.

GuardDuty Findings
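
If you would rather not sign in as root just for a test, GuardDuty can also generate sample findings against the detector, which will flow through the same EventBridge rule and SNS topic; for example (using the detector ID from list-detectors):

aws guardduty create-sample-findings \
  --detector-id <detector-id> \
  --finding-types "Policy:IAMUser/RootCredentialUsage"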

This post has touched on a number of AWS services that help with audit and compliance, as well as incident detection and response. It is a very broad topic, with powerful features available. In the next post, we will start to look at budgets and the world of FinOps and sustainability using the Cost and Usage Reports.
