Introduction
When defining an AI integration, one of the first concerns that usually comes up is security. More specifically, how to protect applications that rely on large language models once they are exposed to real users.
Amazon Bedrock makes it easy to work with foundation models without worrying about infrastructure and allows some level of customization to fit business needs. However, that convenience also raises an important question: how do we prevent these models from generating unsafe content or leaking sensitive information?
This is where Guardrails become especially relevant. Guardrails serve as a safeguard layer, allowing you to filter sensitive data such as PII, restrict specific topics, and define how the model should behave when a rule is violated.
Given their importance for making AI workloads production-ready, this article focuses on a practical, step-by-step implementation of topic filtering and PII protection using Amazon Bedrock Guardrails, both from the AWS Console and programmatically.
Guardrail setup on AWS Console
This section covers configuring Guardrails in the AWS Console, followed by a programmatic approach using the Serverless Framework.
- Before starting, make sure that Amazon Bedrock is enabled in your AWS account. Once enabled, navigate to Amazon Bedrock, go to the Build section, and select Guardrails. From there, click on Create guardrail to begin the setup process.
During the creation process, you will be asked to provide:
- A name to identify the guardrail
- A short description explaining its purpose
- A default message that will be returned whenever a prompt or response is blocked
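For readers who prefer to script this step instead of clicking through the console, the same fields map onto the CreateGuardrail API. Here is a minimal sketch using the AWS SDK for JavaScript v3 (`@aws-sdk/client-bedrock`); the name, description, and blocked message are placeholders, and the service may require at least one policy to be attached at creation time:

```typescript
import { BedrockClient, CreateGuardrailCommand } from "@aws-sdk/client-bedrock";

const bedrock = new BedrockClient({ region: "us-east-1" });

// Skeleton of the create call. The denied topics and PII filters shown in the
// next sections plug into topicPolicyConfig / sensitiveInformationPolicyConfig.
const creation = await bedrock.send(
  new CreateGuardrailCommand({
    name: "demo-guardrail",                        // placeholder name
    description: "Blocks medical and financial topics, masks PII",
    blockedInputMessaging: "Sorry, I can't help with that request.",
    blockedOutputsMessaging: "Sorry, I can't help with that request.",
    // topicPolicyConfig: { ... },                 // see denied topics below
    // sensitiveInformationPolicyConfig: { ... },  // see PII filters below
  })
);

console.log(creation.guardrailId, creation.version); // version starts as "DRAFT"
```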
- Once the basic configuration is completed, the next step is defining denied topics. In this example, two topics are restricted: medical and financial queries.
To add a denied topic, select Add denied topic and provide a name, a short definition, and the action to apply for both input and output. For this setup, any prompt related to these topics will be blocked.
You will also need to add example phrases. These examples help Bedrock identify when a prompt belongs to a restricted topic and improve the accuracy of the filtering.
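Assuming the SDK approach sketched above, the denied-topic settings translate into a `topicPolicyConfig` object. The topic names, definitions, and example phrases below are illustrative:

```typescript
// Plugs into CreateGuardrailCommand as topicPolicyConfig.
const topicPolicyConfig = {
  topicsConfig: [
    {
      name: "MedicalAdvice",
      definition: "Questions asking for diagnoses, treatments, or medication guidance.",
      examples: ["What dose of ibuprofen should I take for back pain?"],
      type: "DENY" as const, // block both input and output for this topic
    },
    {
      name: "FinancialAdvice",
      definition: "Questions asking for investment, tax, or loan recommendations.",
      examples: ["Which stocks should I buy this month?"],
      type: "DENY" as const,
    },
  ],
};
```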
- After configuring topic restrictions, continue to the PII filtering section. In Step 5, select Add new PII to configure sensitive information detection.
Bedrock provides a set of predefined PII types that can be selected individually, along with an action for each one. In this case, the selected PII types will be masked rather than blocked.
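In SDK terms, each selected PII type becomes an entry with its own action, where `ANONYMIZE` corresponds to the masking behavior described above. The types chosen here are just examples:

```typescript
// Plugs into CreateGuardrailCommand as part of sensitiveInformationPolicyConfig.
const piiEntitiesConfig = [
  { type: "EMAIL" as const, action: "ANONYMIZE" as const },
  { type: "PHONE" as const, action: "ANONYMIZE" as const },
  { type: "NAME" as const, action: "ANONYMIZE" as const },
];
```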
- In addition to predefined PII categories, Guardrails allow you to define custom filters using regular expressions. This is useful when dealing with country-specific identifiers that are not covered by default.
For this example, a custom regex pattern is added to detect Peruvian national ID numbers (DNI) and mask them when detected.
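Custom patterns follow the same shape through `regexesConfig`. The 8-digit pattern below is only a rough sketch of a Peruvian DNI matcher and may need tuning for real inputs:

```typescript
// Plugs into CreateGuardrailCommand as part of sensitiveInformationPolicyConfig.
const regexesConfig = [
  {
    name: "PeruvianDNI",
    description: "Peruvian national ID: 8 consecutive digits",
    pattern: "\\b\\d{8}\\b",      // illustrative pattern
    action: "ANONYMIZE" as const, // mask instead of block
  },
];

// Combined policy: { piiEntitiesConfig, regexesConfig }
```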
- With the sensitive information filters in place, review the final setup and complete the guardrail creation process.
After the guardrail is created, Bedrock will display its details, including the guardrail ID. To start using it, a version must be created, as both the guardrail ID and version are required for programmatic usage.
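Creating a version can also be done from code. A minimal sketch, where the guardrail ID placeholder comes from the create call or the console details page:

```typescript
import { BedrockClient, CreateGuardrailVersionCommand } from "@aws-sdk/client-bedrock";

const bedrock = new BedrockClient({ region: "us-east-1" });

// Snapshot the current DRAFT configuration as a numbered version.
const { guardrailId, version } = await bedrock.send(
  new CreateGuardrailVersionCommand({
    guardrailIdentifier: "YOUR_GUARDRAIL_ID", // placeholder
    description: "Initial version with topic and PII filters",
  })
);

console.log(guardrailId, version); // e.g. "1" — both values are needed at invocation time
```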
Serverless implementation
To demonstrate a programmatic implementation, this project uses TypeScript and the Serverless Framework to expose a simple HTTP POST endpoint.
The API processes user prompts through an Amazon Bedrock foundation model while enforcing the previously created guardrail. The guardrail ID and version are passed as configuration values and are required for the request to be evaluated against the defined rules.
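The repository contains the full project; as a rough sketch of the idea, the Lambda handler can call the Converse API with the guardrail attached. The handler name, environment variables, and model ID below are assumptions for illustration, not the repo's exact code:

```typescript
import { APIGatewayProxyHandlerV2 } from "aws-lambda";
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const runtime = new BedrockRuntimeClient({});

export const handler: APIGatewayProxyHandlerV2 = async (event) => {
  const { prompt } = JSON.parse(event.body ?? "{}");

  const response = await runtime.send(
    new ConverseCommand({
      modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // adjust to your region/model
      messages: [{ role: "user", content: [{ text: prompt }] }],
      guardrailConfig: {
        guardrailIdentifier: process.env.GUARDRAIL_ID!,   // from the guardrail details page
        guardrailVersion: process.env.GUARDRAIL_VERSION!, // e.g. "1"
        trace: "enabled",                                  // surfaces why a request was blocked
      },
    })
  );

  return {
    statusCode: 200,
    body: JSON.stringify({
      stopReason: response.stopReason, // "guardrail_intervened" when a rule fires
      text: response.output?.message?.content?.[0]?.text,
    }),
  };
};
```

In `serverless.yml`, the guardrail ID and version can be exposed through the provider's `environment` section so the same handler works across stages without code changes.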
Testing and results
The testing strategy consists of two parts. First, the guardrail is tested directly from the AWS Console using the prompt tool with the Claude 3.5 model. Prompts related to healthcare or financial topics are correctly blocked, which can be verified by enabling the Trace option and inspecting the blocked topic information.
PII filtering can be tested similarly. When sensitive information is detected, it appears under the Sensitive information rules section with a Masked status, including the custom DNI regex.
The same behavior is observed when testing the serverless API using Postman. Since the Lambda function targets the same guardrail and model configuration, the results are consistent with those seen in the AWS Console.
For a more thorough test, the code repository linked below can be used, since it follows the same testing strategy as the AWS Console examples above.
Conclusions
Guardrails turned out to be an easy and practical way to put clear boundaries around generative AI workloads in Bedrock. Instead of handling every edge case in code, you can rely on a dedicated layer to block unsafe topics and protect sensitive data by default.
The setup is straightforward, works consistently from the console and from code, and fits naturally into a serverless architecture. While it doesn’t replace application-level validation, it significantly reduces risk and complexity when moving AI features closer to production.
Resources
- GitHub Repository: bedrock-guardrails-demo
- AWS Documentation: Bedrock Guardrails User Guide
- Serverless Framework: serverless.com
Connect with Me
If you found this helpful or have questions about implementing Guardrails in your projects, feel free to reach out:
- LinkedIn: https://www.linkedin.com/in/walter-fernandez-sanchez-a3924354
- GitHub: @wfernandezs