Are your AWS Lambdas doing things twice? Here is how to fix it.
Today my SQS queue delivered the same message twice (standard SQS queues only guarantee at-least-once delivery), and my Lambda function sent two identical emails to one user. My initial code checked whether a lock existed before writing it, but that check-then-write pattern is a classic race condition: two concurrent invocations can both pass the check before either one writes the lock.
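To see why the check alone is not enough, here is a minimal local simulation (plain Python, no AWS; the `lock_store` dict and `send_email` list are stand-ins for the DynamoDB table and the email service) of the interleaving that duplicate deliveries can produce:

```python
lock_store = {}    # stand-in for the DynamoDB table
emails_sent = []   # stand-in for the email service

key = "email_report_2024-01-01_42"

# Step 1 of each invocation: the existence check (NOT atomic with the write)
a_saw_absent = key not in lock_store   # invocation A checks: lock absent
b_saw_absent = key not in lock_store   # invocation B checks: still absent, A hasn't written yet

# Step 2: both invocations passed the check, so both write and both send
for saw_absent in (a_saw_absent, b_saw_absent):
    if saw_absent:
        lock_store[key] = "sent"       # the write happens too late to stop the other
        emails_sent.append(42)

print(len(emails_sent))  # → 2: the duplicate email
```

The gap between the check and the write is the whole problem; no amount of careful application code closes it without an atomic primitive.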
The Fix: Use attribute_not_exists in DynamoDB.
Python
import time

import botocore.exceptions

email_lock_key = f"email_report_{today}_{user_id}"
try:
    # ATOMIC OPERATION: write ONLY if the key doesn't already exist
    cache_table.put_item(
        Item={
            'cache_key': email_lock_key,
            'status': 'sent',
            'ttl': int(time.time()) + 86400,  # 24h expiration
            'user_id': user_id
        },
        ConditionExpression='attribute_not_exists(cache_key)'
    )
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
        print("Race condition intercepted! Lock already exists.")
        return  # abort the duplicate process here
    raise  # any other error should still surface
By doing this, you let DynamoDB act as the single source of truth: the check and the write happen as one atomic operation on the server, so exactly one invocation wins. The hardest part? Having to manually delete these cache keys in the AWS Console just to test the system again. The ttl attribute expires them eventually, but DynamoDB's TTL deletion can lag up to 48 hours after expiry, so it is no help for an immediate re-test.
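You can sanity-check the "exactly one winner" guarantee locally. This sketch (plain Python, no AWS) mimics the semantics of `ConditionExpression='attribute_not_exists(cache_key)'` with a put-if-absent guarded by a lock, then fires fifty concurrent workers at the same key:

```python
import threading

table = {}
table_lock = threading.Lock()

def conditional_put(key, item):
    """Mimics PutItem with ConditionExpression='attribute_not_exists(cache_key)'."""
    with table_lock:  # check and write happen as one atomic step
        if key in table:
            raise KeyError("ConditionalCheckFailedException")
        table[key] = item

successes = []

def worker():
    try:
        conditional_put("email_report_2024-01-01_42", {"status": "sent"})
        successes.append(True)   # this invocation owns the lock: send the email
    except KeyError:
        pass                     # duplicate invocation: abort, no second email

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(successes))  # → 1: only one invocation gets to send the email
```

However many workers race, the atomic check-and-write lets exactly one through, which is precisely what the DynamoDB condition expression does for concurrent Lambda invocations.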
