Welcome to Day 10. The "pipes" are connected. Now we need a bucket to catch the water.
Today, I designed the storage layer for my AI Financial Agent. I chose Amazon DynamoDB because it fits perfectly with the Serverless ethos (Lambda + EventBridge).
The Database Design (Single Table Concept)
Designing for NoSQL is different from SQL: you don't think in "joins", you think in "access patterns". You start from the queries your application will run, then shape the keys around them.
Query: "Give me all transactions for user Eric, sorted by date."
PK (partition key): user_id
SK (sort key): timestamp
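With that key schema, the access pattern above maps directly onto a DynamoDB Query. Here is a minimal sketch; `query_transactions` is a helper name of my own, and it assumes a `Table` resource like the one in the code below:

```python
def query_transactions(table, user_id):
    """Return all items for one user, ordered by the sort key (timestamp)."""
    resp = table.query(
        # String form of the key condition; the resource API also accepts
        # boto3.dynamodb.conditions.Key objects.
        KeyConditionExpression='user_id = :uid',
        ExpressionAttributeValues={':uid': user_id},
        ScanIndexForward=True,  # ascending by sort key, i.e. oldest first
    )
    return resp['Items']
```

Because the partition key pins the query to one user and the sort key orders the items, DynamoDB returns the transactions already sorted, with no "ORDER BY" step needed.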
The Code
Here is how I updated my Lambda to save the mocked data:
```python
import boto3
import uuid
from datetime import datetime
from decimal import Decimal

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('FinanceAgent-Transactions')

def save_transaction(description, amount):
    item = {
        'user_id': 'user_eric_01',                       # PK
        'transaction_date': datetime.now().isoformat(),  # SK, sortable ISO 8601
        'tx_id': str(uuid.uuid4()),
        'description': description,
        # DynamoDB rejects Python floats; amounts must be stored as Decimal
        'amount': Decimal(str(amount)),
        'currency': 'EUR'
    }
    table.put_item(Item=item)
    print(f"Saved: {description} - {amount}€")
```
Why On-Demand?
I configured the table with On-Demand capacity. In a startup or learning phase, traffic is spiky or non-existent. On-Demand means AWS scales the read/write capacity automatically and bills per request, so I don't have to guess throughput numbers or provision anything up front.
Now my data is persistent. It survives the Lambda execution. Tomorrow: The AI integration begins.