
Debugging Hidden AWS Costs: From “Others” in Cost Explorer to a Real Root Cause

I had an AWS account that was supposed to be almost empty.

Most of the resources had already been deleted. Some cleanup had been done manually, and some of it had been done through tools like Terraform. So the expectation was simple:

No active resources = no meaningful cost

But Cost Explorer was still showing small charges.

The amount was tiny, just a few fractions of a cent, but the problem was not the amount itself. The problem was that the charge was unexplained.

In the AWS Cost Explorer UI, part of the cost appeared under vague categories like:

Others

That was not enough to understand what was happening.

The real question was:

Why is an account that looks empty still generating cost?

The actual problem

At first, the charge looked confusing because the UI did not clearly explain the source.

A small AWS bill can still hide important information:

- Is there an orphaned resource?
- Is AWS Config still recording resources?
- Is Control Tower recreating or managing something?
- Is CloudTrail charging money?
- Is a service making API calls in the background?
- Is there still a queue, log group, KMS key, bucket, or automation running?

So the goal was not to save $0.003.

The goal was to prove whether the cost was:

- Expected baseline activity
- Orphaned infrastructure
- Governance-related activity
- Service-generated API usage
- Unexpected user or automation activity

That is the real reason this investigation matters.

The mistake to avoid

It is not exactly a mistake, but the first instinct is often to start with CloudTrail.

That feels logical because CloudTrail shows activity and API calls. But CloudTrail does not answer the first billing question.

CloudTrail can tell you:

- Who did something?
- When did it happen?
- Which API was called?
- Which service was involved?

But CloudTrail does not directly tell you:

Which AWS billing line charged money?

So the better approach is not to start with CloudTrail but with the Cost Explorer API, because it tells you what was actually billed, while CloudTrail only tells you what activity happened.

Why the Cost Explorer API and not the Cost Explorer console? Because the API lets you query the exact billing dimensions directly, while the console is better for a visual overview but can hide small charges inside grouped categories.
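As a concrete illustration of that difference: get-cost-and-usage accepts an Expression filter that pins a query to a single service, something the grouped UI categories cannot do. A minimal sketch; the service name value here is just an example:

```shell
# Build a Cost Explorer Expression that restricts results to one service.
FILTER='{"Dimensions":{"Key":"SERVICE","Values":["AWS Config"]}}'

# Sanity-check that the JSON is well-formed before handing it to the CLI:
echo "$FILTER" | jq -e '.Dimensions.Key' >/dev/null && echo "filter ok"

# It would then be passed as:
# aws ce get-cost-and-usage ... --filter "$FILTER"
```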

What I did first

I isolated one exact billing day.

For example:

START_DATE="2026-05-06"
END_DATE="2026-05-07"

This matters because AWS Cost Explorer uses an exclusive end date.

So this range means only May 6, 2026. This avoids mixing multiple days together and makes the investigation cleaner.
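The one-day window can also be derived instead of hand-typed, which avoids off-by-one mistakes with the exclusive end date. A small sketch assuming GNU date on Linux:

```shell
# Pick the single day to investigate.
START_DATE="2026-05-06"

# Cost Explorer treats End as exclusive, so the end date is the next day.
# (GNU date syntax; macOS/BSD date would need `date -v+1d` instead.)
END_DATE=$(date -u -d "$START_DATE + 1 day" +%F)

echo "$START_DATE -> $END_DATE"
```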

Then I asked Cost Explorer the right question

Instead of relying on the UI, I queried Cost Explorer by:

- Service
- Usage Type
- Cost

The key command was:

aws ce get-cost-and-usage \
  --time-period Start=$START_DATE,End=$END_DATE \
  --granularity DAILY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE Type=DIMENSION,Key=USAGE_TYPE \
  --output json | jq -r '
    .ResultsByTime[0].Groups[]
    | select((.Metrics.UnblendedCost.Amount | tonumber) > 0)
    | [
        .Keys[0],
        .Keys[1],
        .Metrics.UnblendedCost.Amount,
        .Metrics.UnblendedCost.Unit
      ]
    | @tsv
  ' | column -t -s "$(printf '\t')"

This changed the investigation.

Instead of seeing only vague UI categories, the API showed the real billing lines.

In this case, the important result was:

AWS Config  ConfigurationItemRecorded       0.003  USD
AWS Config  EUW2-ConfigurationItemRecorded  0.003  USD

That immediately narrowed the problem. The charge was not random.

It was not just “Others.” It was AWS Config.

What the usage type told me

The usage type was: ConfigurationItemRecorded

That means AWS Config recorded a configuration item.

In simple terms:

AWS Config saw a resource or configuration state and recorded it.
That recording generated a billable configuration item.

The second usage type included a region prefix: EUW2-ConfigurationItemRecorded

That pointed to: eu-west-2

So now the investigation had a real chain:

- Service = AWS Config
- Usage type = ConfigurationItemRecorded
- Region = eu-west-2
- Cost = about 0.006 USD total

This was already much better than the original Cost Explorer UI view.

Then I checked the region

To confirm where the cost happened, I grouped Cost Explorer by region and service:

aws ce get-cost-and-usage \
  --time-period Start=$START_DATE,End=$END_DATE \
  --granularity DAILY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=REGION Type=DIMENSION,Key=SERVICE \
  --output json | jq -r '
    .ResultsByTime[0].Groups[]
    | select((.Metrics.UnblendedCost.Amount | tonumber) > 0)
    | [
        (.Keys[0] // "NoRegion"),
        .Keys[1],
        .Metrics.UnblendedCost.Amount,
        .Metrics.UnblendedCost.Unit
      ]
    | @tsv
  ' | column -t -s "$(printf '\t')"

Then I grouped by region and usage type:

aws ce get-cost-and-usage \
  --time-period Start=$START_DATE,End=$END_DATE \
  --granularity DAILY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=REGION Type=DIMENSION,Key=USAGE_TYPE \
  --output json | jq -r '
    .ResultsByTime[0].Groups[]
    | select((.Metrics.UnblendedCost.Amount | tonumber) > 0)
    | [
        (.Keys[0] // "NoRegion"),
        .Keys[1],
        .Metrics.UnblendedCost.Amount,
        .Metrics.UnblendedCost.Unit
      ]
    | @tsv
  ' | column -t -s "$(printf '\t')"

This step is important because some AWS usage types include region prefixes, and others do not.
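When comparing the two groupings, it helps to normalize the prefix so both lines collapse into one usage type. A small sketch; the prefix pattern (like EUW2-) is an observation from this bill, not an official format guarantee:

```shell
# Strip a leading region prefix such as "EUW2-" or "USE1-" from a usage type.
normalize_usage_type() {
  # Pattern: 2-4 uppercase letters, one digit, then a dash.
  echo "$1" | sed -E 's/^[A-Z]{2,4}[0-9]-//'
}

normalize_usage_type "EUW2-ConfigurationItemRecorded"   # ConfigurationItemRecorded
normalize_usage_type "ConfigurationItemRecorded"        # unchanged
```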

At this point, the direction was clear: Investigate AWS Config in eu-west-2

Then I checked AWS Config itself

Since Cost Explorer showed AWS Config, the next step was to inspect AWS Config directly.

I checked whether a configuration recorder existed:

REGION="eu-west-2"

aws configservice describe-configuration-recorders \
  --region $REGION \
  --query 'ConfigurationRecorders[*].{Name:name,RoleARN:roleARN,AllSupported:recordingGroup.allSupported,IncludeGlobal:recordingGroup.includeGlobalResourceTypes}' \
  --output table

Then I checked whether it was recording:

aws configservice describe-configuration-recorder-status \
  --region $REGION \
  --query 'ConfigurationRecordersStatus[*].{Name:name,Recording:recording,LastStatus:lastStatus,LastStatusChange:lastStatusChangeTime}' \
  --output table

Then I checked discovered resource counts:

aws configservice get-discovered-resource-counts \
  --region $REGION \
  --output table

And then listed discovered resources:

{
  printf "RESOURCE_TYPE\tRESOURCE_ID\tRESOURCE_NAME\n"

  for type in $(aws configservice get-discovered-resource-counts \
    --region $REGION \
    --query 'resourceCounts[*].resourceType' \
    --output text); do

    aws configservice list-discovered-resources \
      --region $REGION \
      --resource-type "$type" \
      --output json | jq -r '
        .resourceIdentifiers[]
        | [
            .resourceType,
            .resourceId,
            (.resourceName // "-")
          ]
        | @tsv
      '

  done
} | column -t -s "$(printf '\t')"
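For a quick total without the table view, the per-type counts can be summed with jq. A sketch over a canned sample with the same shape as the get-discovered-resource-counts response (in practice, pipe the AWS CLI output instead):

```shell
# Sum per-type resource counts from a get-discovered-resource-counts response.
# SAMPLE stands in for: aws configservice get-discovered-resource-counts --output json
SAMPLE='{"totalDiscoveredResources":3,"resourceCounts":[
  {"resourceType":"AWS::IAM::Role","count":2},
  {"resourceType":"AWS::S3::Bucket","count":1}]}'

echo "$SAMPLE" | jq '[.resourceCounts[].count] | add'
# 3
```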

This step answers:

- Is AWS Config enabled?
- Is it recording?
- What resources does it still know about?
- Is this coming from a managed baseline?

Then I used CloudTrail, but only after Cost Explorer

After identifying AWS Config as the billing service, CloudTrail became useful.

The purpose of CloudTrail was not to find the cost. The purpose was to correlate activity.

I checked events in the same region and same day:

aws cloudtrail lookup-events \
  --region $REGION \
  --start-time "${START_DATE}T00:00:00Z" \
  --end-time "${END_DATE}T00:00:00Z" \
  --query 'sort_by(Events,&EventTime)[*].[EventTime,Username,EventName,EventSource]' \
  --output table

Then I filtered for AWS Config activity:

aws cloudtrail lookup-events \
  --region $REGION \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=config.amazonaws.com \
  --start-time "${START_DATE}T00:00:00Z" \
  --end-time "${END_DATE}T00:00:00Z" \
  --query 'sort_by(Events,&EventTime)[*].[EventTime,Username,EventName,EventSource]' \
  --output table

Then I checked the actor details:

aws cloudtrail lookup-events \
  --region $REGION \
  --start-time "${START_DATE}T00:00:00Z" \
  --end-time "${END_DATE}T00:00:00Z" \
  --output json | jq -r '
    .Events[]
    | .CloudTrailEvent
    | fromjson
    | [
        .eventTime,
        .eventSource,
        .eventName,
        .userIdentity.type,
        (.userIdentity.arn // "-"),
        (.userAgent // "-"),
        (.sourceIPAddress // "-")
      ]
    | @tsv
  ' | column -t -s "$(printf '\t')"

This helped separate possible sources:

- Human user
- AWSReservedSSO role
- AWS service
- Control Tower execution role
- CloudFormation StackSet
- Scheduled automation
- Application activity

What I discovered

The investigation showed that the paid service was AWS Config.

CloudTrail itself was not the source of the charge. That was an important correction.

The billing data showed that AWS Config caused the charge, while CloudTrail's own line was FreeEventsRecorded at 0 USD. So CloudTrail helped investigate the activity, but it was not the paid service.

The root cause chain became:

Cost Explorer UI showed unclear “Others”
→ Cost Explorer API exposed AWS Config charges
→ Usage type showed ConfigurationItemRecorded
→ Region prefix EUW2 pointed to eu-west-2
→ AWS Config showed discovered resources / recorder activity
→ CloudTrail helped correlate activity
→ Conclusion: AWS Config recording caused the cost

The final conclusion

The small charge was not random.

It came from: AWS Config recording configuration items

The likely reason was: Control Tower-managed baseline governance

So the account was not necessarily “dirty” with normal application resources. Instead, it still had governance/baseline services active.

That is a different type of problem.

The solution is not always to delete a bucket, queue, or instance. The solution may be to understand whether the account is still governed by Control Tower and whether AWS Config is intentionally managed as part of that baseline.

Why this investigation was worth doing

For a one-time charge of $0.003, spending a lot of time would not make sense.

But a repeated tiny charge in an account that should be clean is worth investigating, not because of the money, but because it answers:

- Is something still running?
- Is the account still governed?
- Is cleanup incomplete?
- Is a service being recreated?
- Is an automation still active?
- Is the billing source expected or unexpected?

In this case, the investigation was useful because it corrected a wrong assumption.

→ The charge was not CloudTrail.

→ The charge was AWS Config.

The reusable debugging model

The final model is:

- Cost Explorer API = what was billed
- Usage Type = why it was billed
- Region grouping = where it was billed
- Service API = what exists
- CloudTrail = who or what acted
- RCA = final explanation and mitigation

This makes the process reusable for other services too.

For example:

KMS-Requests
→ Check KMS keys and CloudTrail KMS events

TimedStorage-ByteHrs
→ Check CloudWatch log groups and retention

Requests-Tier8
→ Check the related service API and CloudTrail events

DataTransfer-Out-Bytes
→ Check VPC, NAT Gateway, S3, CloudFront, ELB, or other traffic sources
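The mappings above can even be encoded as a tiny lookup, so the next investigation step is suggested automatically from a billing line. A sketch using a hypothetical helper; the suggestions simply mirror the examples in this section:

```shell
# Map a (normalized) usage type to the next investigation step.
next_check() {
  case "$1" in
    KMS-Requests)            echo "Check KMS keys and CloudTrail KMS events" ;;
    TimedStorage-ByteHrs)    echo "Check CloudWatch log groups and retention" ;;
    DataTransfer-Out-Bytes)  echo "Check VPC, NAT Gateway, S3, CloudFront, ELB" ;;
    ConfigurationItemRecorded) echo "Check AWS Config recorder and discovered resources" ;;
    *)                       echo "Check the related service API and CloudTrail events" ;;
  esac
}

next_check "ConfigurationItemRecorded"
# Check AWS Config recorder and discovered resources
```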

The root cause format

A clean RCA should look like this:

Root cause:
AWS Config generated cost because ConfigurationItemRecorded occurred in eu-west-2 on May 6, 2026.
Evidence:

1. Cost Explorer showed:
   AWS Config / ConfigurationItemRecorded / 0.003 USD
   AWS Config / EUW2-ConfigurationItemRecorded / 0.003 USD
2. Region grouping showed: eu-west-2
3. AWS Config API showed: configuration recorder and discovered resource state
4. CloudTrail showed: related activity and actor context

Impact:
About 0.006 USD on May 6, 2026.

Conclusion:
The charge was expected if the account is still managed by Control Tower baseline governance.

Mitigation:
Leave it if Control Tower governance is required. If the account is being retired, remove it properly from Control Tower governance. Do not manually delete Control Tower-managed resources unless the governance impact is understood.

Final takeaway

The issue was not the tiny cost.

The issue was lack of attribution.

The plan solved that by turning an unclear billing symptom into a technical explanation:

- AWS was not charging randomly.
- The Cost Explorer UI was simply unclear.
- The API exposed AWS Config as the paid service.
- The usage type explained the billing action.
- The region pointed to eu-west-2.
- CloudTrail helped confirm activity, but was not the cost source.

That is why this debugging chain is useful.

Conclusion

This investigation showed that the real problem was not the small amount of money, but the lack of clear attribution. Cost Explorer Console gave a vague view, while the Cost Explorer API exposed the exact billing lines: the charge came from AWS Config recording configuration items, not from CloudTrail. CloudTrail was useful only after the billing source was identified, because it helped correlate activity and actors. The correct debugging chain is therefore: start with Cost Explorer API to find what was billed, use usage type and region to understand why and where it was billed, then use service APIs and CloudTrail to explain the technical root cause.
