LAVANYA LAHARI NANDIPATI

Building a HIPAA-Compliant Chatbot with AWS Lambda & Bedrock

As a software engineer working in healthcare, I’ve seen first-hand how patients want instant answers but compliance makes everything tricky. We can’t just throw PHI (Protected Health Information) into ChatGPT and call it a day.

So I decided to build a HIPAA-friendly chatbot using AWS Lambda (serverless backend) and Amazon Bedrock (LLM service) — with data masking to keep sensitive info safe.


📌 This post walks you through:

  • Why chatbots in healthcare are challenging 🚑
  • How to design a compliant architecture 🔐
  • Code snippets for AWS Lambda + Bedrock ⚙️
  • Tips for keeping PHI secure 🛡️

🚨 The Challenge: AI + Healthcare = Risk

  • Chatbots are great for FAQs, triage, and scheduling.
  • But if you send raw PHI (like names, MRNs, diagnoses) to an LLM… that’s a compliance nightmare.
  • HIPAA requires minimum necessary access and strict controls.

🏗️ Architecture at a Glance

Flow:

  1. Patient message → API Gateway
  2. Lambda pre-processor → scrubs PHI (names, DOB, SSNs)
  3. Bedrock LLM → processes the masked query
  4. Lambda post-processor → reinserts placeholders if needed
  5. Response → returned to patient securely

Flowchart: Patient → API Gateway → Lambda (mask PHI) → Bedrock → Lambda (restore placeholders) → Patient


⚙️ Step 1: Setting up AWS Lambda

A basic Python Lambda handler:

import json
import re

import boto3

bedrock = boto3.client('bedrock-runtime')

def mask_phi(text):
    # Toy example: replace dates and a few hard-coded names.
    # Real PHI detection needs something stronger (see Comprehend Medical below).
    text = re.sub(r'\d{2}/\d{2}/\d{4}', '[DATE]', text)
    text = re.sub(r'\b(Alice|Bob|John)\b', '[NAME]', text)
    return text

def lambda_handler(event, context):
    user_input = event['queryStringParameters']['q']
    masked_input = mask_phi(user_input)

    # Claude v2 on Bedrock expects the Human/Assistant prompt format and a
    # max_tokens_to_sample value; json.dumps keeps the request body valid
    # even if the input contains quotes.
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "prompt": f"\n\nHuman: {masked_input}\n\nAssistant:",
            "max_tokens_to_sample": 300
        })
    )

    result = json.loads(response['body'].read())

    return {
        "statusCode": 200,
        "body": json.dumps({"answer": result.get("completion", "")})
    }
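For a quick sanity check outside API Gateway, you can invoke the handler locally with a fake proxy event (this assumes your AWS credentials and Bedrock model access are already configured; the event shape below is just an illustration of what API Gateway sends):

if __name__ == "__main__":
    # Hypothetical local test event mimicking an API Gateway proxy request
    fake_event = {"queryStringParameters": {"q": "John has a fever since 09/21/2025."}}
    print(lambda_handler(fake_event, None))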

🤖 Step 2: Talking to Bedrock Safely

  • Always send masked input only.
  • Example:

Input:

John Smith has a fever since 09/21/2025. Should he see a doctor?

Masked:

[NAME] has a fever since [DATE]. Should they see a doctor?
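For reference, here's what the toy mask_phi from Step 1 actually produces on that input — it catches the hard-coded first name and the date but misses the surname and the pronoun, which is exactly why the post-processing step and Comprehend Medical (below) matter:

print(mask_phi("John Smith has a fever since 09/21/2025. Should he see a doctor?"))
# -> "[NAME] Smith has a fever since [DATE]. Should he see a doctor?"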

🔐 Step 3: Post-Processing

If you need to restore placeholders (like Hello [NAME]), you can map them back safely from session state.
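Here's a minimal sketch of that idea, assuming you keep a per-session placeholder map (the names mask_phi_with_map, restore_phi, and session_map are illustrative, not part of the code above):

import re

def mask_phi_with_map(text, session_map):
    # Replace each detected name with a numbered placeholder and remember the original.
    def repl(match):
        placeholder = f"[NAME_{len(session_map) + 1}]"
        session_map[placeholder] = match.group(0)
        return placeholder
    return re.sub(r'\b(Alice|Bob|John)\b', repl, text)

def restore_phi(text, session_map):
    # Swap placeholders back in before returning the response to the patient.
    for placeholder, original in session_map.items():
        text = text.replace(placeholder, original)
    return text

# Usage:
# session_map = {}
# masked = mask_phi_with_map("Hello John", session_map)    # "Hello [NAME_1]"
# restore_phi("Hello [NAME_1], rest up.", session_map)     # "Hello John, rest up."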


✅ Why This Matters

  • Patients get instant responses
  • Engineers stay HIPAA-compliant
  • Serverless (Lambda) keeps costs low
  • Bedrock provides enterprise-grade LLMs without exposing PHI

🚀 Next Steps

  • Add DynamoDB to store chat history (encrypted)
  • Plug in Cognito for authentication
  • Expand PHI scrubbing with Amazon Comprehend Medical (see the sketch below)
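If you go that route, a rough sketch of Comprehend Medical-based masking could replace the regex-only mask_phi (this is an assumption about how you'd wire it in, and DetectPHI has its own pricing and service limits):

import boto3

comprehend_medical = boto3.client('comprehendmedical')

def mask_phi_with_comprehend(text):
    # DetectPHI returns entities with character offsets and a PHI type
    # (NAME, DATE, ID, ADDRESS, ...). Replace them from the end of the
    # string backwards so earlier offsets stay valid.
    entities = comprehend_medical.detect_phi(Text=text)['Entities']
    for entity in sorted(entities, key=lambda e: e['BeginOffset'], reverse=True):
        text = text[:entity['BeginOffset']] + f"[{entity['Type']}]" + text[entity['EndOffset']:]
    return text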

🙌 Closing Thoughts

As engineers, we often think "just ship the feature" — but in healthcare, privacy is the feature.

This project taught me that it’s possible to marry AI innovation with compliance if we design carefully.

👉 Repo link: GitHub – hipaa-chatbot-bedrock
