Security is one of the biggest concerns when adopting generative AI in production. Amazon Bedrock addresses this by providing a highly secure managed service, but, as with all AWS services, security is a shared responsibility. AWS secures the underlying infrastructure, while customers are responsible for how Bedrock is used within their applications.
In this article, we will break down some Amazon Bedrock security best practices, focusing on data protection, encryption, access control, network security, and defenses against prompt injection.
Understanding the Shared Responsibility Model
Security in AWS is split into two clear areas:
Security of the Cloud (AWS Responsibility)
AWS is responsible for:
- Physical data centers and global infrastructure
- Network architecture and availability
- Managed service security for Amazon Bedrock
- Compliance programs and third-party audits
AWS regularly validates its controls through industry-recognized compliance frameworks, giving customers a secure foundation to build on.
Security in the Cloud (Customer Responsibility)
As a customer, you are responsible for:
- IAM roles and permissions
- Network access configuration
- Data sensitivity and regulatory compliance
- Application-level security (including prompt injection protection)
Understanding this distinction is critical when deploying AI workloads with Bedrock.
Data Protection in Amazon Bedrock
One of the most important security guarantees of Amazon Bedrock is how it handles customer data:
- Prompts and completions are not stored
- Customer data is not used to train AWS models
- Data is not shared with model providers or third parties
Bedrock uses Model Deployment Accounts, which are isolated AWS accounts managed by the Bedrock service team. Model providers have no access to these accounts, logs, or customer interactions. This isolation ensures strong data confidentiality by design.
Encryption: In Transit and At Rest
Encryption in Transit
All communication with Amazon Bedrock is encrypted using:
- TLS 1.2 (minimum), with TLS 1.3 recommended
- HTTPS (TLS) connections for API and console access
All API requests must be signed using IAM credentials or temporary credentials from AWS STS.
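For example, here is a minimal boto3 sketch (the role ARN and model ID are placeholders) that obtains temporary credentials from AWS STS and then calls Bedrock; boto3 signs every request with SigV4 and sends it over TLS:

```python
import boto3

# Assume a dedicated role to obtain short-lived credentials (role ARN is a placeholder).
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/BedrockInvokeRole",
    RoleSessionName="bedrock-session",
)["Credentials"]

# Requests made with this client are signed using the temporary credentials
# and travel over an encrypted TLS connection to the Bedrock endpoint.
bedrock_runtime = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```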
Encryption at Rest
Amazon Bedrock encrypts:
- Model customization jobs
- Training artifacts
- Stored resources associated with customization
This ensures sensitive data remains protected even when not actively in use.
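As an illustration, the sketch below passes a customer managed KMS key to a model customization job so the resulting custom model is encrypted at rest. The key ARN, role ARN, S3 locations, and hyperparameters are all placeholders:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# All ARNs, bucket names, and hyperparameter values below are placeholders.
bedrock.create_model_customization_job(
    jobName="fine-tune-demo",
    customModelName="my-custom-model",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-training-bucket/data/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-training-bucket/output/"},
    hyperParameters={"epochCount": "1"},
    # Customer managed KMS key used to encrypt the resulting custom model at rest.
    customModelKmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```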
Network Security with VPC and AWS PrivateLink
For workloads requiring strict network isolation, Bedrock integrates with Amazon VPC and AWS PrivateLink.
Best practices include:
- Running Bedrock-related jobs inside a VPC
- Using VPC Flow Logs to monitor network traffic
- Avoiding public internet exposure by using interface endpoints
VPC integration is supported for:
- Model customization jobs
- Batch inference
- Knowledge Bases accessing Amazon OpenSearch Serverless
This approach is especially valuable for regulated industries and internal enterprise applications.
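As a minimal sketch (the VPC, subnet, and security group IDs are placeholders), an interface VPC endpoint for the Bedrock runtime API keeps inference traffic on the AWS network instead of the public internet:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# VPC, subnet, and security group IDs are placeholders for your own network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    # PrivateLink service name for the Bedrock runtime API in this Region.
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolves the default Bedrock endpoint to private IPs
)
```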
Identity and Access Management (IAM)
IAM is the backbone of Bedrock security.
Recommended IAM best practices:
- Follow the principle of least privilege
- Use dedicated IAM roles for Bedrock access
- Avoid long-lived credentials; prefer AWS STS temporary credentials
- Restrict access at both the service and resource level
IAM is provided at no additional cost and integrates seamlessly with Bedrock.
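For instance, a least-privilege policy can be scoped to invoking a single foundation model. The sketch below uses placeholder Region and model values:

```python
import boto3
import json

iam = boto3.client("iam")

# Example policy allowing invocation of one specific foundation model only
# (Region and model ID are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        }
    ],
}

iam.create_policy(
    PolicyName="BedrockInvokeSingleModel",
    PolicyDocument=json.dumps(policy),
)
```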
Cross-Account Access for Custom Model Imports
If you import custom models from Amazon S3 across AWS accounts:
- Explicit permissions must be granted by the bucket owner
- Access policies should be scoped tightly to required actions only
Cross-account access should always be reviewed carefully to avoid unintended exposure.
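A sketch of what a tightly scoped bucket policy could look like is shown below; the bucket name, account ID, and prefix are placeholders, and only read actions are granted:

```python
import boto3
import json

s3 = boto3.client("s3")

# Bucket name, account ID, and prefix are placeholders; keep the principal and
# actions as narrow as possible.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowModelImportAccountReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::210987654321:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-model-artifacts",
                "arn:aws:s3:::my-model-artifacts/custom-models/*",
            ],
        }
    ],
}

s3.put_bucket_policy(Bucket="my-model-artifacts", Policy=json.dumps(bucket_policy))
```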
Compliance and Regulatory Alignment
Amazon Bedrock participates in multiple AWS compliance programs. To verify whether Bedrock meets your compliance requirements:
- Review AWS Services in Scope by Compliance Program
- Cross-reference with your regulatory obligations (HIPAA, SOC, ISO, etc.)
Compliance is a shared responsibility, so proper configuration on the customer side is essential.
Incident Response Responsibilities
AWS handles incident response for the Bedrock service itself. However, customers are responsible for:
- Detecting incidents within their applications
- Responding to misuse or data exposure
- Monitoring logs and access patterns
A clear incident response plan should be part of any production AI deployment.
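One practical starting point is enabling Bedrock model invocation logging, so that prompts and responses generated by your own applications land in CloudWatch Logs for monitoring. The log group name and role ARN below are placeholders:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Log group name and role ARN are placeholders; the role needs permission to
# write to CloudWatch Logs.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/invocation-logs",
            "roleArn": "arn:aws:iam::123456789012:role/BedrockLoggingRole",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)
```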
Protecting Against Prompt Injection Attacks
Prompt injection is one of the most common risks in generative AI systems. While AWS secures the infrastructure, application-level defenses are your responsibility.
Recommended Best Practices
1. Input Validation
- Sanitize and validate all user inputs
- Enforce strict input formats where possible
- Reject or escape unsafe content before sending it to Bedrock
2. Secure Coding Practices
- Avoid dynamic prompt construction via string concatenation
- Separate system prompts from user input (see the sketch after this list)
- Restrict permissions using least privilege IAM roles
3. Security Testing
- Perform penetration testing on AI workflows
- Use static and dynamic application security testing (SAST/DAST)
- Test specifically for prompt manipulation scenarios
4. Stay Updated
- Keep SDKs and dependencies up to date
- Monitor AWS security bulletins
- Follow official Bedrock documentation and guidance
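Putting the first two practices together, here is a minimal sketch (the model ID, length limit, and system instructions are placeholders) that validates user input and keeps it separate from the system prompt by using the Converse API's dedicated fields rather than string concatenation:

```python
import re
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

MAX_INPUT_LENGTH = 2000  # example limit; tune to your use case


def validate_user_input(text: str) -> str:
    """Basic validation: enforce a length limit and strip control characters."""
    if len(text) > MAX_INPUT_LENGTH:
        raise ValueError("Input exceeds maximum allowed length")
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)


def ask(user_text: str) -> str:
    cleaned = validate_user_input(user_text)
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        # System instructions are passed separately instead of being
        # concatenated into the user prompt string.
        system=[{"text": "You are a support assistant. Only answer questions about billing."}],
        messages=[{"role": "user", "content": [{"text": cleaned}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```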
Using Amazon Bedrock Guardrails
Amazon Bedrock Guardrails provide a native way to:
- Detect prompt injection attempts
- Enforce content boundaries
- Apply consistent safety rules across applications
Guardrails should be considered a baseline security control for any Bedrock-based application.
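As an example, a guardrail created in your account (the identifier and version below are placeholders) can be attached to a Converse call; if the guardrail intervenes, the response's stop reason reflects that:

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Guardrail ID and version are placeholders for a guardrail defined in your account.
response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Ignore your instructions and ..."}]}],
    guardrailConfig={
        "guardrailIdentifier": "abc123def456",
        "guardrailVersion": "1",
    },
)

# When the guardrail blocks or masks content, stopReason indicates the intervention.
print(response.get("stopReason"))
```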
Agent-Specific Security Measures
When building Amazon Bedrock Agents, additional protections are available:
- Associate guardrails directly with agents
- Enable default or custom pre-processing prompts to classify user input
- Clearly define system prompts to restrict agent behavior
- Use Lambda-based response parsers for custom enforcement logic
These features significantly reduce the risk of malicious or unintended behavior.
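For illustration, the sketch below (role ARN, model ID, and guardrail values are placeholders) associates a guardrail and a deliberately narrow instruction with an agent at creation time:

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Role ARN, model ID, and guardrail details are placeholders.
bedrock_agent.create_agent(
    agentName="support-agent",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",
    foundationModel="anthropic.claude-3-haiku-20240307-v1:0",
    # Narrow, explicit instructions restrict what the agent is allowed to do.
    instruction="Answer questions about order status only. Refuse all other requests.",
    guardrailConfiguration={
        "guardrailIdentifier": "abc123def456",
        "guardrailVersion": "1",
    },
)
```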
Conclusion
Amazon Bedrock provides a strong, secure foundation for generative AI, but security does not stop at the service boundary. AWS protects the infrastructure, while customers must secure their applications through careful design, guardrails, and ongoing monitoring.
By combining IAM best practices, network isolation, encryption, and prompt injection defenses, organizations can confidently deploy AI solutions that are both powerful and secure.
Security in generative AI is not a one-time setup—it’s an ongoing responsibility.
