Prajwal P

Posted on • Originally published at prajwal-blog.hashnode.dev

How I Debugged an "Undeletable" AWS Elastic IP and Traced It Back to Redshift Serverless


If you work with AWS long enough, you'll eventually hit a resource that looks simple but turns into a deep troubleshooting rabbit hole.

Recently, while cleaning up unused AWS resources to reduce costs, I discovered an idle Elastic IP in my account.

At first glance, everything looked normal:

  • No EC2 instance attached
  • No NAT Gateway
  • No Load Balancer
  • No obvious parent resource

So naturally, I thought: "This should be easy. I'll just release the Elastic IP."

AWS had other plans.
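
Releasing an Elastic IP is normally a single call (the allocation ID below is a placeholder), but in my case it refused to go:

```bash
# Attempt to release the idle Elastic IP (allocation ID is a placeholder).
# This can't succeed while the address is still associated with an in-use ENI.
aws ec2 release-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --region ap-south-1
```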


The Problem

When I tried deleting the VPC endpoint that the Elastic IP traced back to, AWS returned this error:

```
Operation is not allowed for requester-managed VPC endpoints
```

That error started one of the most interesting AWS troubleshooting sessions I've worked on — involving ENIs, VPC endpoints, security groups, VPC Flow Logs, CloudWatch Logs Insights, the AWS CLI, and the hidden networking dependencies of Redshift Serverless.


First Clue — ENI With No EC2 Instance

Inside EC2 → Network Interfaces, I found an ENI with these details:

  • Interface Type: vpc_endpoint
  • Status: in-use
  • Requester-managed: true
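
You can surface every interface like this straight from the CLI; requester-managed is a real filter on DescribeNetworkInterfaces:

```bash
# Find every requester-managed ENI in the region and who requested it.
aws ec2 describe-network-interfaces \
  --filters Name=requester-managed,Values=true \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,Type:InterfaceType,Requester:RequesterId,Desc:Description}' \
  --output table
```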

Immediately, several questions came up:

  • Why does a VPC endpoint have an Elastic IP?
  • Which AWS service owns this?
  • Why can't I delete it?

Understanding "Requester-Managed"

This was the major breakthrough.

When AWS marks a resource as RequesterManaged = true, it means the resource exists inside your AWS account, but another AWS-managed service owns and controls it.

That is why I could see the ENI, the VPC endpoint, and the Elastic IP — but could not directly delete them.

This behavior commonly happens with managed AWS services such as Redshift Serverless, EKS, Lambda VPC networking, OpenSearch, SageMaker, ECS/Fargate, and PrivateLink integrations.


Investigating the VPC Endpoint

I inspected the endpoint using AWS CLI:

```bash
aws ec2 describe-vpc-endpoints \
  --vpc-endpoint-ids vpce-009f75251024f3fb2 \
  --region ap-south-1
```

The service name returned was a generated ID — AWS hides backend managed services behind these. Not useful on its own.
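
Trimming the output to a few fields makes the next move obvious: the attached security groups and ENIs are the threads worth pulling (a sketch reusing the same endpoint ID):

```bash
# Show only the ownership hints: service name, type, security groups, ENIs.
aws ec2 describe-vpc-endpoints \
  --vpc-endpoint-ids vpce-009f75251024f3fb2 \
  --region ap-south-1 \
  --query 'VpcEndpoints[0].{Service:ServiceName,Type:VpcEndpointType,Groups:Groups,NICs:NetworkInterfaceIds}'
```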


Security Group Investigation — The Turning Point

The endpoint was attached to a security group called data-engineering-SG.

So I listed all ENIs using that security group:

```bash
aws ec2 describe-network-interfaces \
  --filters Name=group-id,Values=sg-013897824370ebff3
```
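
ENI descriptions are where managed services leave fingerprints, so a --query that keeps only those fields is worth the extra typing (a sketch):

```bash
# Same call, trimmed to the fields that matter for attribution.
aws ec2 describe-network-interfaces \
  --filters Name=group-id,Values=sg-013897824370ebff3 \
  --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,Desc:Description,Requester:RequesterId}' \
  --output table
```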

One ENI description contained this string:

```
711651397674-REDSHIFT
```

That single word changed the entire investigation.


Confirming Redshift Serverless

I ran:

```bash
aws redshift-serverless list-workgroups
```

The output showed:

"vpcEndpointId": "vpce-009f75251024f3fb2"
Enter fullscreen mode Exit fullscreen mode

Root cause confirmed.
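
If you run several workgroups, a query that maps each one to its endpoints makes the match immediate (a sketch based on the list-workgroups output shape):

```bash
# Map every Redshift Serverless workgroup to the VPC endpoints it owns.
aws redshift-serverless list-workgroups \
  --query 'workgroups[].{Name:workgroupName,Endpoints:endpoint.vpcEndpoints[].vpcEndpointId}'
```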

The actual dependency chain was:

```
Redshift Serverless
        ↓
Requester-managed VPC Endpoint
        ↓
AWS-managed ENIs
        ↓
Elastic IP
```

The Elastic IP itself was never the real issue. It was simply a networking dependency created by Redshift Serverless.


Bonus Discovery — Internet Bots Were Hitting the Endpoint

Before confirming Redshift, I enabled VPC Flow Logs to verify whether anything inside my VPC was actively using the endpoint.
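
Flow logs can be enabled on just the ENI in question (a sketch; the interface ID, log group, and IAM role ARN are placeholders you'd replace):

```bash
# Ship flow logs for a single ENI to CloudWatch Logs.
aws ec2 create-flow-logs \
  --resource-type NetworkInterface \
  --resource-ids eni-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /vpc/endpoint-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/flow-logs-role
```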

The logs showed traffic from public internet scanners continuously probing the endpoint because the attached security group allowed all traffic from 0.0.0.0/0.

That traffic made the endpoint appear active even though no legitimate workloads were using it.
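
A CloudWatch Logs Insights query like this separates outside scanners from internal callers (a sketch; it assumes the default flow-log fields and a VPC CIDR of 10.0.0.0/8, so adjust both to your environment):

```
fields @timestamp, srcAddr, dstAddr, dstPort, action
| filter not isIpv4InSubnet(srcAddr, "10.0.0.0/8")
| stats count(*) as hits by srcAddr, dstPort
| sort hits desc
| limit 20
```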

First immediate fix: I removed all 0.0.0.0/0 inbound rules.
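
That cleanup is also a one-liner (same security group ID from earlier):

```bash
# Drop the allow-everything-from-anywhere inbound rule.
aws ec2 revoke-security-group-ingress \
  --group-id sg-013897824370ebff3 \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
```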


The Fix

Once the root cause was confirmed, I deleted the Redshift Serverless resources.

Step 1 — Delete the Workgroup:

```bash
aws redshift-serverless delete-workgroup \
  --workgroup-name demos-workgroup
```

Step 2 — Delete the Namespace:

```bash
aws redshift-serverless delete-namespace \
  --namespace-name demos-namespace
```

After that, AWS automatically cleaned up the VPC endpoint, the ENIs, and the Elastic IP association.

The "undeletable" Elastic IP was simply gone.


Key Takeaways

1. RequesterManaged = true is always a clue
Stop trying to delete the networking resource directly. Find the managed AWS service that owns it.

2. Networking resources are usually symptoms, not root causes
The Elastic IP was not the actual issue. Redshift Serverless was.

3. Security groups can mislead investigations
Internet scanners made the endpoint appear active even though no internal workloads were using it.

4. Follow the dependency chain
Elastic IP → ENI → VPC Endpoint → Security Group → ENI Metadata → Parent Service → Delete

5. VPC Flow Logs are underrated
They helped eliminate false assumptions and saved hours of troubleshooting.


AWS Services That Commonly Create Hidden Networking Resources

  • Redshift Serverless
  • EKS
  • Lambda VPC networking
  • ECS/Fargate
  • OpenSearch
  • SageMaker
  • RDS Proxy
  • AWS Glue
  • PrivateLink / VPC Lattice

Final Thought

What started as "Why can't I delete this Elastic IP?" turned into a VPC networking investigation, a security audit, and a managed-service dependency trace.

The biggest lesson:

In AWS, always delete the parent resource — not the symptom.

The next time you see RequesterManaged = true, your first troubleshooting step should be to identify the owning service. In my case, that was one command away:

```bash
aws redshift-serverless list-workgroups
```

Have you ever debugged a mysterious AWS dependency caused by a managed service? Drop your experience in the comments.

