Ayyanar Jeyakrishnan

Building AI applications on AWS Bedrock with enhanced security measures

Great session with Andrew Kane and Mark Ryland on building AI applications on AWS Bedrock with enhanced security measures.

https://www.youtube.com/watch?v=5EDOTtYmkmI&list=PL2yQDdvlhXf_kZMl0XZYqWfysycSXCJx8&index=40

Here are my key takeaways from this session:

1) Access to Bedrock is provided through APIs backed by foundation models from Amazon (Titan), Anthropic, and Stability AI (see the invocation sketch after this list).

2) SageMaker training jobs and pipelines can be used to fine-tune the base model supplied by the model provider (see the fine-tuning sketch below).

3) Your fine-tuning data stays in your account, while the fine-tuned model is stored in a model provider escrow account owned and operated by AWS. You can still consume your fine-tuned model securely through a single-tenant endpoint, so it is never shared with other customers (see the custom-model sketch below).

4) All data in transit is secured with TLS 1.2, and data at rest is protected by AWS KMS (Key Management Service) (see the encryption sketch below).

5) Prompts and their outputs are not used by AWS to improve the underlying models, and prompt history is stored securely.

6) IAM provides role-based access control (RBAC) for managing model access permissions (see the IAM policy sketch below).

7) Amazon CodeWhisperer significantly improves developer productivity as an AI coding companion and offers one-click security scans of your code.

8) Additionally, SageMaker MLOps, distributed LLM training, and the foundation models in SageMaker JumpStart let you experiment with models from AI21 Labs, LightOn, Stability AI, Cohere, and Amazon Titan, and support easy one-click deployments (see the JumpStart sketch below).
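
The sketches below are minimal Python/boto3 illustrations of some of these takeaways. All region names, ARNs, bucket names, model IDs, and hyperparameters are placeholders I picked for illustration, not values from the session.

For takeaway 1, a minimal call to the Bedrock runtime API. The model ID and the Human/Assistant prompt format are assumptions based on Anthropic's Claude models on Bedrock.

```python
import json
import boto3

# Bedrock runtime client; the region is an assumption for this sketch.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body is an assumption: Claude v2 on Bedrock expects the
# Human/Assistant prompt style and a max_tokens_to_sample field.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the shared responsibility model in two sentences.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream; parse it and print the completion text.
print(json.loads(response["body"].read())["completion"])
```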
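
For takeaway 2, a rough sketch of launching a fine-tuning job with the SageMaker Python SDK. The container image, instance type, S3 paths, role ARN, and hyperparameters are all placeholders; the point is only that the training data and the job run inside your own account.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role ARN

# Every value below is an assumption for illustration: bring your own
# training image (or a framework estimator), instance type, and hyperparameters.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/llm-finetune:latest",
    role=role,
    instance_count=1,
    instance_type="ml.g5.12xlarge",
    output_path="s3://my-bucket/fine-tuned-models/",
    hyperparameters={"epochs": 3, "learning_rate": 1e-5},
    sagemaker_session=session,
)

# The fine-tuning data stays in an S3 bucket in your account.
estimator.fit({"training": "s3://my-bucket/fine-tuning-data/train.jsonl"})
```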
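
For takeaway 3, a sketch of the Bedrock-native path: create a model customization (fine-tuning) job, then buy dedicated capacity so the resulting custom model is consumed through your own single-tenant provisioned endpoint. Job name, role ARN, base model ID, S3 URIs, and hyperparameters are assumptions.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Fine-tune a base model with Bedrock model customization. The training data
# is read from your S3 bucket; the custom model artifact itself is kept in the
# AWS-operated escrow account described in the session.
bedrock.create_model_customization_job(
    jobName="titan-finetune-demo",
    customModelName="titan-custom-demo",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://my-bucket/fine-tuning-data/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/fine-tuned-models/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)

# Once the job completes, provision dedicated (single-tenant) throughput for
# the custom model and invoke it through the returned provisioned model ARN.
provisioned = bedrock.create_provisioned_model_throughput(
    provisionedModelName="titan-custom-demo-pt",
    modelId="titan-custom-demo",  # custom model name or ARN
    modelUnits=1,
)
print(provisioned["provisionedModelArn"])
```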
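
For takeaway 4, a sketch of the customer-side controls that complement this: default KMS encryption on the S3 bucket holding your fine-tuning data, plus a bucket policy that rejects any request not made over TLS. Bucket name and KMS key ARN are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder
kms_key_arn = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"  # placeholder

# Encrypt objects at rest with a customer-managed KMS key by default.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                }
            }
        ]
    },
)

# Deny any access that does not use TLS (aws:SecureTransport is false).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```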
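
For takeaway 6, a sketch of scoping Bedrock invocation rights to a single foundation model through an IAM role policy. The role name, policy name, region, and model ARN are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Allow invoking only one specific foundation model; everything else is implicitly denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        }
    ],
}

iam.put_role_policy(
    RoleName="BedrockAppRole",        # placeholder application role
    PolicyName="InvokeClaudeOnly",
    PolicyDocument=json.dumps(policy),
)
```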
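
For takeaway 8, a sketch of the "one-click" style deployment via the SageMaker Python SDK's JumpStart interface. The model ID is an assumption; any foundation model listed in JumpStart follows the same pattern.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Model ID is an assumption; pick any foundation model listed in JumpStart.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

# Deploys the model to a real-time SageMaker endpoint in your account
# (some models may additionally require an execution role or EULA acceptance).
predictor = model.deploy()

print(predictor.predict({"inputs": "Explain Amazon Bedrock in one sentence."}))
```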
