Hello World, AI 🧠
Welcome to Day 11. Today, we successfully connected our serverless backend to a Foundation Model.
I chose Amazon Bedrock because it allows me to access models like Titan, Claude, or Llama via a simple API call, without managing GPU servers.
The Code (Python Boto3)
Here is the working snippet to invoke Amazon Titan Text Express. Note: The JSON structure for Titan is different from Claude!
```python
import json
import boto3

# 1. Set up the client (us-east-1 usually has all models)
bedrock = boto3.client(service_name='bedrock-runtime', region_name='us-east-1')

def lambda_handler(event, context):
    user_prompt = "I spent $90 on Cloud Servers. Is this a good investment?"

    # 2. Titan-specific payload
    # Titan expects 'inputText', not 'messages' like Claude.
    body = json.dumps({
        "inputText": user_prompt,
        "textGenerationConfig": {
            "maxTokenCount": 512,
            "temperature": 0.7,  # Creativity level
            "topP": 0.9
        }
    })

    try:
        # 3. Invoke the model
        response = bedrock.invoke_model(
            body=body,
            modelId="amazon.titan-text-express-v1",
            accept="application/json",
            contentType="application/json"
        )

        # 4. Parse the response (Titan returns 'results' -> 'outputText')
        response_body = json.loads(response.get("body").read())
        ai_reply = response_body.get('results')[0].get('outputText')

        return {
            'statusCode': 200,
            'body': json.dumps(ai_reply)
        }
    except Exception as e:
        return {'statusCode': 500, 'body': str(e)}
```
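To make the Titan-vs-Claude payload difference concrete, here is a small sketch contrasting the two request bodies. The helper function names are mine (not part of Boto3); the field names follow the Bedrock model docs as I understand them, with Claude using the Messages API shape:

```python
import json

# Hypothetical helpers (names are my own) contrasting the two
# request-body shapes you pass to bedrock.invoke_model().

def build_titan_body(prompt: str) -> str:
    # Titan: a flat 'inputText' string plus a 'textGenerationConfig' block
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 512, "temperature": 0.7},
    })

def build_claude_body(prompt: str) -> str:
    # Claude (Messages API): a chat-style 'messages' list, plus required
    # 'anthropic_version' and 'max_tokens' fields
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })
```

Same `invoke_model` call, completely different JSON on the wire, and the responses differ too: Titan answers under `results[0].outputText`, while Claude's reply comes back under a `content` list.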
Key Learnings
Region Matters: I kept my Lambda in Stockholm but pointed the Bedrock client to N. Virginia (us-east-1) to ensure model availability.
Guardrails: Amazon Titan is very "safe" by default. It refused to give personalized financial advice, which is actually a good feature for fintech app compliance.
See you on Day 12!
