🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Modern Secrets: A Journey from Legacy Systems to AI-Ready Security (SEC213)
In this video, Ritesh Desai and Zach from AWS discuss secrets management architecture patterns, covering centralized versus decentralized approaches for creation, lifecycle management, and storage. They explain AWS Secrets Manager's capabilities including replication across regions, 10,000 TPS retrieval, and the open-source Secrets Manager Agent for caching. Jake Farrell from Acquia shares their journey managing 300,000+ unique secret paths across hundreds of AWS accounts, handling 3.2 million ephemeral pod events daily with consistent 60,000 API calls per hour. The session concludes with recent launches: EKS add-on for Secrets Store CSI provider and managed external secrets enabling one-click rotation for third-party secrets like Salesforce, eliminating custom Lambda functions.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction to Secrets Management at AWS re:Invent
Hello everyone. I'm glad you were able to make it. It's the first day of re:Invent, so there's a long time to go. Hopefully you're drinking your liquids and getting ready for the week ahead. My name is Ritesh Desai, and today we're going to talk about secrets management and its journey, specifically about how you want to build your architecture while keeping Secrets Manager in mind. With me is Zach, who will also be talking about secrets management, specifically about AWS Secrets Manager and how to apply it within your architecture. We also have Jake with us, a customer from Acquia, and he's going to talk about their journey through the secrets management lifecycle and how the architecture enables you to build a more modern secrets management solution.
Like I said, the agenda will cover secrets management overview. I'll talk through a bit about AWS Secrets Manager, and then Zach will go into more detail about deployments at scale. As I mentioned, Jake will talk through their journey, which I think will be the most interesting part of this presentation. We'll hopefully close out with some of the latest launches that we have for re:Invent and before re:Invent.
As you know, there are many different kinds of secrets. As you can see, people often think about all of these secrets in one whole model and say this is what I want to manage. However, AWS thinks about it differently: we build purpose-built services for the secrets that need to be managed. For example, AWS credentials are managed through AWS IAM. For encryption keys, KMS, or Key Management Service, is the purpose-built service. But for all of these other secrets—database credentials, application credentials—AWS Secrets Manager is essentially the default choice at AWS.
AWS Secrets Manager: Core Features and Capabilities
So what is Secrets Manager? It is a secret lifecycle management service that essentially enables default secure storage. It integrates with multiple other AWS services. For example, it integrates with AWS CloudTrail to ensure that all auditing and monitoring is in place for every action taken on a secret—creation, mutation, deletion—the entire lifecycle is managed. It integrates with Lambda to ensure that we have automated rotation for your secrets. You can go into Secrets Manager and configure rotation by creating a Lambda function and scheduling it to run every 30 days, every 90 days, or whatever your organizational policies are.
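The rotation setup just described can be sketched with boto3. This is a minimal illustration only: the secret name, Lambda ARN, and 30-day cadence are invented, and the client is passed in so the call can be exercised without live AWS credentials.

```python
def rotation_rules(days: int) -> dict:
    """Build the RotationRules payload for rotate_secret."""
    return {"AutomaticallyAfterDays": days}


def enable_rotation(client, secret_id: str, lambda_arn: str, days: int = 30) -> dict:
    """Attach a rotation Lambda and schedule to an existing secret.

    `client` is expected to be a boto3 Secrets Manager client, e.g.
    boto3.client("secretsmanager"); it is injected so this sketch can
    run without AWS credentials.
    """
    return client.rotate_secret(
        SecretId=secret_id,
        RotationLambdaARN=lambda_arn,
        RotationRules=rotation_rules(days),
    )
```

With a real client, `enable_rotation(boto3.client("secretsmanager"), "prod/app/db-credentials", lambda_arn, days=90)` would schedule rotation every 90 days instead.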
We launched about three years ago a feature called replication. Customers told us that they wanted a solution for business continuity where an RDS database, for example, fails over to another region but the secret also needs to be available in that region. We built a feature that enables you to replicate your secret across multiple regions. It is the most used feature of Secrets Manager at this point in time, and replication is one of the things that people talk to us about from a business continuity or disaster recovery perspective.
We are continuing to look at areas where we can make it easier for customers to retrieve their secrets, and when I say retrieval, I mean at scale. Today, Secrets Manager supports 10,000 TPS for GetSecretValue, and we continue to enhance that and stay ahead of customer expectations. No customer comes close to that today, but we continue to build more capabilities to enable retrieval at scale. Beyond that, we try to make the whole experience easier. We launched the Secrets Manager Agent, an open-source tool that lets customers retrieve secrets through automated caching, with a configurable TTL and all of the related settings you need. That launch happened in late 2024, and it has been adopted relatively quickly by customers. We are trying to enable that kind of ease for developers as they manage secrets.
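As a sketch of how an application might talk to the Secrets Manager Agent's local cache, assuming the agent's documented defaults of a localhost HTTP endpoint on port 2773, a GET on /secretsmanager/get, and an SSRF token sent in the X-Aws-Parameters-Secrets-Token header; the token file path and secret name here are illustrative assumptions.

```python
import urllib.parse
import urllib.request

AGENT_BASE = "http://localhost:2773"  # assumed default port for the agent


def agent_url(secret_id: str) -> str:
    """Build the agent's local GET endpoint for one secret."""
    return (
        f"{AGENT_BASE}/secretsmanager/get"
        f"?secretId={urllib.parse.quote(secret_id, safe='')}"
    )


def get_secret_via_agent(secret_id: str, token_path: str = "/var/run/awssmatoken") -> str:
    """Fetch a secret through the local caching agent.

    The agent caches values with a TTL, so most calls never reach the
    Secrets Manager API. The SSRF token file path is an assumption;
    the agent's configuration controls where that token actually lives.
    """
    with open(token_path) as f:
        token = f.read().strip()
    request = urllib.request.Request(
        agent_url(secret_id),
        headers={"X-Aws-Parameters-Secrets-Token": token},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode()
```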
At a high level, this is how Secrets Manager works. I'm sure some of you know about this, but when you go in and create any secret in Secrets Manager, by default it will be envelope encrypted with an AWS managed key. There is no secret that is stored in plain text; there will always be a KMS key that encrypts your secret before it is stored. If you want to use your own customer managed KMS key, that is also possible, and many customers use that functionality. One of the key benefits of using a customer managed KMS key is that it enables cross-account access. Some customers come to us and say they need to create a secret in a single account and then have applications from different accounts access those secrets. That is not possible with just an AWS managed key.
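A sketch of the cross-account setup described above: a resource policy on the secret granting GetSecretValue to another account, attached with put_resource_policy. The account IDs are placeholders, and as the text notes, this only works end to end when the secret is encrypted with a customer managed KMS key the consumer can decrypt with.

```python
import json


def cross_account_read_policy(consumer_account_id: str) -> str:
    """Resource policy granting GetSecretValue to another account's root.

    Principals in the consuming account also need matching IAM
    permissions, plus kms:Decrypt on the customer managed key that
    encrypts the secret; an AWS managed key will not allow this.
    """
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{consumer_account_id}:root"},
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "*",
        }],
    })


def attach_cross_account_policy(client, secret_id: str, consumer_account_id: str) -> dict:
    # `client` is a boto3 Secrets Manager client (injected for testability).
    return client.put_resource_policy(
        SecretId=secret_id,
        ResourcePolicy=cross_account_read_policy(consumer_account_id),
    )
```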
I think Zach will talk in the later slides about how when customers think about centralized or decentralized approaches to secrets management, those considerations really matter. Amazon CloudWatch and AWS CloudTrail are key aspects of compliance. Customers continue to use those to build listening agents or set up alarms on anomalies or any activity that happens on a secret that they need to track or alert their teams on. That is one of the important aspects of secrets management. I am going to let Zach talk through the centralization and decentralization aspects of it.
Centralized vs. Decentralized Approaches: Understanding the Framework
Thanks, Ritesh. As Ritesh just alluded to, I will introduce myself. I am Zach, a Principal Security SA. That means I focus on cryptography and our secrets management services. I talk to customers pretty much every day about keys, secrets, and certificates. As Ritesh alluded to, one of the first questions we usually get is what do I do with these secrets? Should I centralize them in a single account? Should I put them across a number of accounts? I have 50, 100, or 1000 AWS accounts. How do I do this?
I have been doing this for about five years. Originally, we would tell customers that they should take a decentralized approach and put the secrets close to where their application workload is. Over time, as I talked to more and more customers, I found that many customers actually want to take a more centralized approach. Maybe they are a financial services customer who wants that centralization, with strong observability and centralized control over their secrets. Or maybe they have specific compliance requirements. Over time, our guidance has evolved from saying every customer should decentralize their secrets management to recognizing that it depends on your requirements and what you want to get out of your secrets management system.
There are a number of areas or pillars that you should consider. One is creation. How do your developers or engineers create their secrets? How do you manage those secrets? Things like how do you rotate them and how often? Do you replicate them to multiple regions? How do you keep track of versions? Secrets Manager keeps track of the versions, but how do you manage those versions? There is also consumption or retrieval. How do developers get access to those secrets? How do your applications get access to those secrets? And then the through line of all of this is observability and auditability. How do I know what is happening with my secrets and who is accessing them and when?
Secret Creation Models: Comparing Centralized and Decentralized Strategies
As I said before, it kind of depends. I am going to talk through these pillars, both the centralized and the decentralized versions, and think about what makes sense for you or what you already do in your own environments. Talking about creation of secrets, you can centralize that. Your developers have a CI/CD pipeline that they have to use. They have a Terraform module or a specific way that they need to generate and create secrets in all of your workloads. You might do that in a centralized way. I have also seen customers that create an abstraction layer where their developers are calling an API that they create, which on the back end is calling some Secrets Manager API.
There are a number of ways to do this centralized approach. It is pretty popular, especially with financial services customers or highly regulated customers. In this approach, the advantages are that you have consistent policies around naming and tagging and how access control is done because everybody is doing it the same way through the same system. On the other hand, you also have more initial overhead because if you are building an abstraction layer, you have to write that code and maintain it.
There is a lot of setup involved in configuring this whole thing before anyone actually gets started using it. That overhead extends to new releases. When Secrets Manager launches a new feature or API, you're going to have to update that abstraction layer to support that API for your own developers. There is obviously some overhead with this approach, but it does give some customers that want it a lot more centralized control over how their secrets are managed.
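A hypothetical slice of such an abstraction layer, showing how centralized creation can enforce naming and tagging before any CreateSecret call is made. The env/app/purpose path convention and the required tag keys are invented for illustration; a real layer would encode your own organizational policies.

```python
import re

# Invented conventions for illustration: "<env>/<app>/<purpose>" names
# and a minimal set of mandatory tags.
NAME_PATTERN = re.compile(r"^(dev|stage|prod)/[a-z0-9-]+/[a-z0-9-]+$")
REQUIRED_TAGS = {"team", "data-classification"}


def validated_create_kwargs(name: str, secret_string: str, tags: dict) -> dict:
    """Enforce org naming and tagging before the real CreateSecret call.

    Returns the keyword arguments a wrapper would pass to a boto3
    create_secret call, or raises if the request violates policy.
    """
    if not NAME_PATTERN.match(name):
        raise ValueError(f"secret name {name!r} violates naming convention")
    missing = REQUIRED_TAGS - tags.keys()
    if missing:
        raise ValueError(f"missing required tags: {sorted(missing)}")
    return {
        "Name": name,
        "SecretString": secret_string,
        "Tags": [{"Key": k, "Value": v} for k, v in sorted(tags.items())],
    }
```

This is also the point where the maintenance overhead mentioned above shows up: every new Secrets Manager parameter you want developers to use has to be threaded through this wrapper.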
When we look at this next model of decentralized creation of secrets, I mean putting them in the accounts closer to where the workloads live. This is probably the most common one in terms of creation. Your developers, maybe they have an application account that they own, and they use the standard AWS SDK, AWS CLI, or the console to go and create these secrets. They could also do it through CI/CD as well. In any case, they're doing it specifically in their own account, and the application developer is managing it.
The trade-offs you're seeing are that it is simpler to adopt. You don't have to build that upfront abstraction layer or that portal that they might use to create those secrets, and they have more flexibility and more control. If there are certain things your developers want to do, if they want to name their secrets in a particular way or they have specific access control requirements, typically your developer or your app teams will understand their own access requirements better than some centralized team that may or may not know anything about that application.
There are obvious advantages to this, but there are trade-offs. You have additional effort to implement these controls. With the previous model of centralized deployments or centralized creation, you can have a pretty granular set of controls on how secrets are named, how they're tagged, and how the access control works on each of these secrets. In the decentralized model, your developers or your app teams are in control of that, so you may not get consistency across all of your different accounts and all your different apps on how this stuff is named and how the access control works.
Another piece is logging. If you have a centralized security team, they're not necessarily going to have access to all of those CloudTrail logs about when those secrets are created in the various application accounts unless you're pumping those logs into a centralized security logging account, which, by the way, is a best practice I would recommend. Not all customers do that, so you won't necessarily get the same level of visibility across all your secrets in all your accounts.
Lifecycle Management and Rotation: Trade-offs Between Control and Complexity
When we think about lifecycle management, I'm talking about rotation and managing different versions. In this way, you can do it in a centralized model where you have a management account that one team controls, potentially an engineering team or a security engineering team. They have a set of Lambda functions that can be used for rotation. Even though those secrets don't live in the same account, your rotation Lambda is able to, with the right permissions, go in and rotate that secret on the database and rotate that secret in Secrets Manager. This is something we see customers do. In fact, there's a really interesting re:Invent talk from last year where one of our customers talks about how they do this centralized management of secrets.
Some of your advantages here are that developers don't have to worry about rotation. If your developers are focused on building value into your applications, they don't necessarily want to think about secrets management, and they particularly don't want to think about how they're rotating all their secrets across all their databases. You can offload some of that burden from your developers, and some centralized team whose job it is to think about secrets and how to rotate secrets can handle that.
Your compliance team might have a very specific requirement around needing all of your secrets to be rotated, and they need to see that in action. They need that to be proven to them basically. This model might make that a bit easier, where you have all your rotation Lambda functions in the same place. Anyone can go and examine them and see how this is being done and get the logs around those Lambda functions being run and the rotations happening.
The main trade-off here is that this requires pretty complex permissions to set up. Think about what those rotation Lambda functions are doing. What this Lambda is basically doing is setting the secret value in Secrets Manager and then logging into a database and then changing that value to match it on the database. It's a little bit more complicated than that, but that's the basic idea. In order to do this from a centralized account, that Lambda has to have permission both to access the secret in this workload account, but also has to be able to access that RDS database and log in and actually change the password.
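The rotation contract those Lambda functions implement can be sketched as follows. Secrets Manager invokes the function once per step; the database-specific bodies are left as stubs here, since they depend on the engine and on the cross-account permissions discussed above.

```python
def lambda_handler(event, context, client=None):
    """Skeleton of a Secrets Manager rotation Lambda.

    Secrets Manager invokes the function once per step, passing the
    step name, the secret's ARN, and a token identifying the pending
    secret version. Each step body is a stub in this sketch.
    """
    if client is None:
        import boto3  # only needed when no client is injected
        client = boto3.client("secretsmanager")

    step = event["Step"]
    secret_arn = event["SecretId"]
    version_token = event["ClientRequestToken"]

    if step == "createSecret":
        # Generate a candidate password and store it as the AWSPENDING
        # version (client.put_secret_value with the version token).
        pass
    elif step == "setSecret":
        # Log in to the database (cross-account, in the centralized
        # model) and apply the AWSPENDING value there.
        pass
    elif step == "testSecret":
        # Verify the AWSPENDING credentials actually work.
        pass
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT.
        pass
    else:
        raise ValueError(f"unknown rotation step: {step}")
    return step
```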
The more common model is decentralized lifecycle management, where your application team creates the rotation Lambda for their databases or whatever workload and secrets they're managing. They design and maintain that Lambda themselves, giving them control over compliance requirements. There's a feature in Secrets Manager called rotation windows where you can specify that a secret should only be rotated on certain days at certain times, such as every Wednesday between 12:00 and 2:00 a.m. Your application team likely has a better understanding of when their application experiences less traffic and when it would be best to rotate secrets.
This model provides more control over when and how rotation happens without requiring cross-team permission sharing. You don't need to ensure that every new account that gets created has Lambda rotation function access to that account and its secrets, making it much less complex in terms of permission management. The trade-off is similar to the centralized creation model: you have less visibility because the logs around rotation are in those specific workload accounts instead of a central place. Your security and compliance team has less visibility into how this works. They can't simply log into a single account with a read-only role to see what's happening, what Lambdas are being used for rotation, and how often it's occurring. You either need to get those logs into a central place where they can view them or give them read-only access roles to all the different accounts to understand how lifecycle management is taking place.
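The rotation-window example above (Wednesdays between 12:00 and 2:00 a.m.) maps to a RotationRules payload like this sketch; the cron expression follows the six-field Secrets Manager schedule format, and exact fields should be checked against the service documentation.

```python
def wednesday_window_rules() -> dict:
    """RotationRules for the example window: start Wednesdays at
    00:00 (UTC), with two hours allowed for the rotation to finish.

    Cron fields: minutes, hours, day-of-month, month, day-of-week, year.
    """
    return {
        "ScheduleExpression": "cron(0 0 ? * WED *)",
        "Duration": "2h",
    }
```

This dict would be passed as the RotationRules argument to rotate_secret, in place of a simple AutomaticallyAfterDays cadence.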
Storage and Consumption Patterns: Balancing Security with Accessibility
Then we move to consumption or retrieval, which might also be called storage, where you store the secrets. The centralized approach means all your secrets are in one single account. The advantages are that you have full control over all the secrets in the organization in one place. If you have a central security team managing these, it's much easier for them to view and manage all the secrets in the same account from a permissions perspective. You also have centralized logging where every log for these secrets is in the same place and you can view those easily without needing any special CloudTrail setup.
However, there is an impact radius risk to consider. All your secrets are in one account in one place, and if that account is ever compromised, all your secrets are exposed. That is a potential trade-off you want to consider, and you want to very tightly restrict access to that account. The other significant consideration that customers don't always think about is that if you have all these secrets stored in the same account, you aren't able to take advantage of some of the really cool features that Secrets Manager has, like managed rotation.
Very briefly, we have an integration with RDS where, when you create an RDS database, you can click a single checkbox in the console or set a single parameter in the CLI or APIs that says you don't want to manage this admin secret: you want AWS to manage it and rotate it for you. AWS generates the secret, no human ever sees it, and we rotate it on your behalf with no Lambda functions to manage. I've had customers say this is great and they want to use it, but all their secrets are in one account and the RDS databases are being created in another account, a workload account. In that case, you can't use that feature while maintaining the same security profile with centralized secrets, because when you click that checkbox and create the RDS instance, the secret gets created in the account where the RDS database lives.
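A sketch of opting into that managed rotation when creating the database, via the ManageMasterUserPassword flag on the RDS create call; the engine, instance class, and identifier are illustrative, and the client is injected so the sketch stays testable.

```python
def managed_password_db_kwargs(db_id: str) -> dict:
    """Arguments for create_db_instance opting into managed rotation.

    With ManageMasterUserPassword set, RDS creates the admin secret in
    Secrets Manager, in the account where the database lives, and
    rotates it without a customer-owned Lambda. Note there is no
    MasterUserPassword argument at all.
    """
    return {
        "DBInstanceIdentifier": db_id,
        "Engine": "mysql",
        "DBInstanceClass": "db.t3.micro",
        "AllocatedStorage": 20,
        "MasterUsername": "admin",
        "ManageMasterUserPassword": True,
    }


def create_db(client, db_id: str) -> dict:
    # `client` is a boto3 RDS client (injected for testability).
    return client.create_db_instance(**managed_password_db_kwargs(db_id))
```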
I've had some customers who want to take advantage of both but end up in a situation where they have almost all their secrets in one place except for these admin secrets for RDS, which are in the other account. So there are trade-offs with the centralized model, and some of the features won't necessarily work the way you intend. The decentralized version puts all the secrets in the account where the workload lives, and they're separated in a logical way for application teams. A lot of customers have a single account per application or workload, so your secrets live there, your databases live there, your compute lives there, and it's pretty logically separated.
This makes access control a little bit easier and permissions management a little bit easier. As I pointed out before, not always, but almost always I would say application teams are going to understand their own access control requirements better than some centralized team. In my view, it's often better to have application teams own the permissions because they're going to be able to set least privilege permissions perhaps better than a security team that doesn't always know exactly what's going on with a particular workload.
From a trade-off perspective, there is a little less visibility. Logs are all created in that account, so you have to do some work to make sure you're putting them into a centralized security account so that your investigation teams, your SOC, or whoever needs to review those logs can get to them easily.
Hybrid Models and Resilience Through Secret Replication
The other thing I want to point out, and this is perhaps more common than customers taking one approach or the other, is that you can combine these approaches into more of a hybrid model. This is probably the more common model I see when customers combine the approaches. Maybe the creation and the management of the secret is done through a centralized account. Maybe you have a specific pipeline that's supposed to be used, or as I've mentioned, I've seen customers use a specific Terraform module that they want their developers to use to generate these secrets, and then they'll also handle rotation from that place as well.
The storage might be decentralized or where the workload is, and there are other ways you could do this. You could do centralized creation but not management or storage. You could centralize the storage but not the creation. I think this model makes a lot of sense for customers often because your developers are creating the secrets in a very consistent way. The naming, the tagging, and the permissions can be made consistent through this centralized mechanism.
The way you're rotating secrets can be consistent, so it's a little bit easier to comply with internal standards or even regulations and prove that all of your developers create these secrets the same way and we rotate them the same way. We have the logs and we have the information on how that's all done in one place. Your application teams are still able to easily access those secrets from, say, their ECS cluster or Lambda or whatever compute they're using without having to worry about centralized permissions.
When you have a centralized permission model, unless you have a lot of automation in place, you're often going to have to cut a ticket to a security team and say, "Can you give me access to this secret I just created?" That's obviously not going to be the most agile solution for developers. I think having a mix of the centralized approach for creation and maybe rotation gives you that consistency, whereas decentralizing the storage and having them where the workload lives makes it a little bit easier for developers. Definitely possible to combine those approaches. I don't want to make it seem like a binary choice. I just want to give you some examples and a mental model to think about how some of this works.
Lastly, I want to talk a little bit about resilience, and then we'll get to what Ritesh said is the most exciting part of the presentation where Jake is going to talk about Acquia. Regarding resilience, customers will often ask, "I need to make sure these secrets are accessible even if something happens and I can't access a particular AWS region." As Ritesh mentioned earlier, we have a feature called secret replicas or secret replication. You create a primary secret, say, in US East 1 or US East 2, and you're able to create replicas in as many other regions as you'd like. It's basically a discrete secret with its own ARN and its own resource policy associated with it, but the value of that secret is the same.
When you rotate the primary version of the secret, that change replicates to all the others as well. You can see on the screen a very simple sample application where you have a Lambda that needs to get access to an RDS database to query some data and needs a database password to do that. In this model where we've replicated that secret to US West 2, a secondary region, even if you're not able to access that secret in US East 1, the Lambda in US West 2 is going to be able to grab that secret and use it to query the cross-region read replica.
I just want to make it clear that we have the ability to replicate these secrets so that for your resilience and global workloads, we're able to have versions of that secret in different AWS regions so you can access them when you need them. With that, I will pass over to Jake, who's going to talk about Acquia's journey.
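The failover pattern just described can be sketched as a retrieval helper that tries the primary region and falls back to a replica; the region names mirror the talk's example, and the client factory is injected so the logic can be exercised without AWS access.

```python
def get_secret_with_fallback(client_factory, secret_name: str,
                             primary: str = "us-east-1",
                             replica: str = "us-west-2") -> str:
    """Read a secret from the primary region, falling back to a replica.

    `client_factory(region)` should return a Secrets Manager client for
    that region, e.g. lambda r: boto3.client("secretsmanager",
    region_name=r). A replica is a discrete secret with its own ARN,
    but the value stays in sync with the primary.
    """
    last_error = None
    for region in (primary, replica):
        try:
            response = client_factory(region).get_secret_value(SecretId=secret_name)
            return response["SecretString"]
        except Exception as exc:  # in real code, catch botocore ClientError specifically
            last_error = exc
    raise last_error
```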
Acquia's Platform: Powering Digital Experiences at Scale
Thank you. Good afternoon, everyone. It's a real privilege to be here today with Zach and Ritesh from the AWS Secrets Manager team. My name is Jake Farrell. I'm the Senior Director of Engineering Architecture at Acquia. Over the past 12 years, I've been designing and creating scalable back-end infrastructure systems that enable our customers to build some of the world's most unique websites and digital experiences.
Today, I'm excited to share with you some of Acquia's experiences and our secrets journey, and how a close partnership with AWS Secrets Manager has unlocked new possibilities.
At Acquia, we believe in the power of community and giving our customers the freedom to innovate and build impactful experiences as they see fit. Open source is built into our DNA. Acquia started with a focus on supporting the content management system Drupal and its community. That belief in open source transcends our company and has helped make Drupal one of the most popular web content management systems for enterprises.
Outside of Drupal hosting, Acquia offers companion supporting services including search, machine learning, personalization, automated marketing campaigns, digital asset management, and AI services. We're the number one contributor back to the Drupal open source project, and many of our employees are highly engaged in other open source communities. Customers build their digital experiences on Acquia because we provide the most secure, easiest to deploy, simplest to manage platform, ready for scale when the moment matters.
The largest media companies trust running their digital experiences on Acquia, including NBC Sports. With over 30,000 sporting events each year, managing content can be a challenge. From golf to the Olympics, NBC reimagined the fans' experience and raised the bar. At the last Olympics, NBC Sports with Acquia had thousands of content and media updates with billions of total streaming minutes served and was viewed by millions of unique visitors.
In the food industry, Wendy's partners with us on a digital transformation to move faster and gain better customer insight for web and mobile orders, utilizing machine learning. Acquia's personalization service has helped Wendy's engage with their customers at new levels through A/B testing of different product combinations on their website. Molson Coors keyed on aspects of that same A/B testing capability, enabling their marketing and branding teams to create flash pages instantly and reducing their total cost of ownership.
Acquia is in the business of enabling companies to create digital experiences. The largest organizations across every industry, from entertainment, retail, education, and government, trust Acquia to securely run and scale to meet their needs when it matters most. So what powers all of these experiences? A reliable cloud-native platform built on AWS that meets the highest security and compliance standards for developers.
Built with a Kubernetes backbone, we support managing services from the Acquia platform, which acts as a building block for the best-in-class digital experiences that can be imagined. Instead of a one-size-fits-all approach, Acquia Cloud allows customers to assemble the perfect solution to meet their unique business needs, fostering better collaboration and faster innovation across their entire organization.
Acquia's Secrets Management Architecture: From Challenge to Solution
So what led us to leverage AWS Secrets Manager? Our platform has scale and security at its core, and not all secrets are the same. Their visibility, their lifecycle, and their behavioral patterns vary. As we shifted to Kubernetes, we had to account for this full range. There are many types of secrets when you look at critical bootstrap configurations, database credentials, and TLS certificates that are needed to get a service off the ground and running.
Then there are secrets for connecting remote systems, environment overrides, application tuning, and general user-based secrets for the application space. This complex landscape exists in almost every application in a Kubernetes environment. At scale, challenges naturally begin to emerge, and we identified four core aspects to solve for that sprawl.
Managing unique secrets across different stores becomes very difficult. Without a centralized strategy, it becomes chaos. Security is another critical aspect. You couldn't just lift and shift from an old paradigm into this new security model. We needed something that would integrate seamlessly into a Kubernetes-native environment where pods, service accounts, and namespaces could all be accessed and used.
Automation was also essential. We wanted to make sure that we could have automation for rotation and injection with zero downtime. Finally, compliance was crucial. Adhering to our strict FedRAMP compliance and controls, we needed fine-grained access control as well as a clear audit trail. These challenges of sprawl, security, automation, and compliance form the foundation for our strategy.
This foundation and strategy is important for delivering a strong security and compliance posture to support our most data-sensitive customers. As mentioned, security is at the forefront in everything we build and deploy at Acquia, and this is no different for how we interact with and leverage AWS Secrets Manager.
This service has become a cornerstone in supporting our use case and exceeding expectations while meeting our customers' compliance needs. To support our customers' industry verticals and their compliance requirements, our workloads are active in hundreds of AWS accounts spanning 12 active regions. All of this infrastructure continuously deploys workloads that require secure secrets to operate and perform their necessary functions.
Covering over 300,000 unique secret paths, we average about 3 or more secret types per path, and we generate between 400,000 and 500,000 Kubernetes external secrets references per cluster. As sites scale in and out and our task system executes, this creates a high pod churn rate on our clusters that approaches 500 to 1,000 pod events daily. This results in hundreds of thousands of AWS Secrets Manager API calls per hour. Let's dig into some of these metrics a little further.
Looking into a portion of our hourly API usage, we see an average of about 45,000 pods being launched every 20 minutes. Stepping back and looking at this, it's almost 3.2 million ephemeral pod events every day. With this high of a pod churn rate, we need to depend on scalable services and caching patterns to reduce our ever-growing API volume and ensure consistent, predictable behavior. This predictability is evident in our usage, showing a consistent cyclical 60,000 Secrets Manager API calls per hour as seen from CloudWatch.
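A toy version of the caching pattern that keeps API volume predictable at this kind of churn: cache each secret for a TTL so repeated pod launches within the window reuse one retrieval. The open-source Secrets Manager Agent provides a hardened implementation of this idea; this sketch only illustrates the shape.

```python
import time


class TtlSecretCache:
    """Tiny in-process TTL cache for secret values.

    `fetch` is any callable mapping a secret ID to its value, e.g. a
    wrapper around a boto3 get_secret_value call. Within the TTL
    window, repeated gets cost zero API calls.
    """

    def __init__(self, fetch, ttl_seconds: float = 300.0, clock=time.monotonic):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}  # secret_id -> (expires_at, value)

    def get(self, secret_id: str):
        now = self._clock()
        hit = self._entries.get(secret_id)
        if hit and hit[0] > now:
            return hit[1]
        value = self._fetch(secret_id)  # one API call per TTL window
        self._entries[secret_id] = (now + self._ttl, value)
        return value
```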
Having looked at our use case and some usage patterns, let's take a look at our platform's journey and how we partnered with AWS Secrets Manager for our secret storage needs over some other competing solutions. Looking back, our classic infrastructure was a LAMP stack at its core with a customer control plane that was managing entity state. It was orchestrated through configuration services like Puppet, and we had a homegrown task system that configured and scaled each component independently. Deployments were through self-managed services like Gluster clusters for our file system and Percona clusters with Tungsten for replication for MySQL databases.
Each service was stood up independently on its own set of EC2 instances with secrets stored within the control plane that were injected into the customer's sites and configuration at runtime. As we began to modernize our stack and move to Kubernetes leveraging AWS EKS, we also shifted our thinking due to the availability of new managed service offerings like Aurora and EFS. This enabled us to focus more on our customer features and less on the underlying services and their maintenance. This was a huge win, but it came with a new set of challenges as services work amazingly out of the gate.
But when you run multi-tenant at our scale, the glue that AWS provides around connectivity, authentication, and authorization becomes difficult to use as-is. This is where AWS Secrets Manager stepped in; it brings everything together as a vital part of our ecosystem. Remember how I said that not all secrets are created equal? We recognize this behavioral difference and intentionally guard against it in our APIs, so each application's most sensitive data is stored in a purpose-built fashion.
This ensures that when a user lists their application's custom environment variables, they're not also viewing the backend database credentials or other service connectivity information that the platform provides. To our customer, this is obfuscated away, and only the secret data they have provided is accessible. The same goes for our platform: just because it's a trusted platform doesn't mean we should have full access to all of the data. This guarantees that customers run within their environment, a least-privilege access boundary is maintained, and environments only see what is intended.
So how do we make this a reality? By leveraging AWS Secrets Manager and some powerful open-source integrations, we created a fronting API service that contains the key business logic: it manages the secret types mentioned earlier, catalogs what each secret is for and how best to group and protect it, and controls the RBAC around each type. For example, we don't want these services to be able to list or retrieve everything, so each token is scoped to a specific set of types, customer input can be validated against the key string formatters, and custom rules like auto-rotation or expiry can be applied per type of secret. This ensures our secrets are classified and filtered, and that security is based on an understanding of the type and use case of each specific secret.
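The type-scoped checks described above can be sketched in a few lines. Everything here is illustrative, not Acquia's actual schema: the type names, path formats, and rule fields are assumptions standing in for their real catalog.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SecretType:
    """Hypothetical catalog entry for one secret type."""
    name: str
    path_format: re.Pattern      # "key string formatter" the path must match
    listable_by_customer: bool   # platform credentials are never customer-listable
    rotation_days: Optional[int] # custom rule: auto-rotation cadence, if any

# Illustrative type catalog (names and formats are made up).
SECRET_TYPES = {
    "customer-env": SecretType(
        "customer-env", re.compile(r"customer/[\w-]+/env/[\w-]+"), True, None),
    "platform-db": SecretType(
        "platform-db", re.compile(r"platform/[\w-]+/db/[\w-]+"), False, 30),
}

def validate_request(token_scopes: set, secret_type: str, path: str) -> bool:
    """Allow a request only if the token is scoped to the type and the
    path matches that type's key string formatter."""
    stype = SECRET_TYPES.get(secret_type)
    if stype is None or secret_type not in token_scopes:
        return False
    return bool(stype.path_format.fullmatch(path))
```

A token scoped only to `customer-env` can then read customer environment variables but never platform database credentials, which is the behavioral boundary the talk describes.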
This enables the secure client behavior mentioned a minute ago: customers store their third-party API configuration and environment variables at one layer, our service-to-service communication sits at a different level, and we're starting to see a third class emerge, a mix of both, for AI agents, which need a little of the user space and a little of the platform space.
Integration with Kubernetes and the Path to AI-Enabled Agents
So having looked at this running API, let's consider the delivery side of things and how the platform behaves. Acquia appreciates AWS's commitment to the open-source ecosystem. AWS Secrets Manager has created a first-class provider for the Secrets Store CSI driver, which assists with secrets delivery to applications in Kubernetes and follows a pattern very similar to what we use today, the External Secrets Operator.
The External Secrets Operator came before the CSI driver but shares many of the same patterns. Both read from AWS Secrets Manager's robust APIs and inject into the runtime as a Kubernetes Secret. This provides flexibility in secret delivery and gives the runtime options for injecting environment variables, templated config files, or flags into the pod's command. We encourage anybody interested in using Kubernetes on AWS to take a look at the Secrets Store CSI driver or the External Secrets Operator to simplify their use of Secrets Manager.
So to bring this all together: customer and internal secrets are created through the Product Key Service, which includes a type classification. This service acts as a first-class consumer API, tags those secrets into their varying layers, and saves them into Secrets Manager. The External Secrets Operator picks up these secrets and stores them in Kubernetes as ExternalSecret objects. Applications interact with the resulting Kubernetes Secrets, and the External Secrets Operator keeps those secrets and pods synchronized and up to date as things change in the environment.
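The core of that sync step can be sketched as follows: a Secrets Manager secret string (typically a JSON object of key/value pairs) is materialized as the base64-encoded `data` map of a Kubernetes Secret. The fetch is stubbed here; in the operator it is a call to the Secrets Manager GetSecretValue API, and the manifest shape is the standard Kubernetes `v1/Secret`.

```python
import base64
import json

def to_kubernetes_secret(name: str, secret_string: str) -> dict:
    """Turn a Secrets Manager SecretString (a JSON object of key/value
    pairs) into a Kubernetes Secret manifest, base64-encoding each value
    as the Kubernetes API requires for the `data` field."""
    payload = json.loads(secret_string)
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {k: base64.b64encode(str(v).encode()).decode()
                 for k, v in payload.items()},
    }

# Example: what a synced database-credentials secret would look like.
manifest = to_kubernetes_secret(
    "app-db-credentials",
    '{"username": "app", "password": "example-only"}',
)
print(manifest["data"]["username"])  # YXBw (base64 of "app")
```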
So taking a step back, what have we learned from this journey? There's a real need to focus on simple usage and interaction patterns. Indirect access is key to security: untrusted customer code, like AI agents, is brought to us on a daily basis, and you don't want to grant it full open permissions. Isolating workloads and operating in a least-privilege fashion is a must.
Structure is critical. Clear naming conventions and hierarchy have to be defined up front so that the varying layers, and the way you pull secrets, can be unique for each type. A well-organized structure is easier to manage, rotate, and audit, and security and operational sanity can be maintained because you know where things will be.
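A minimal sketch of such a convention follows; the layer/app/type/name scheme is illustrative, not Acquia's actual scheme, but any fixed hierarchy makes secrets easy to group, rotate, audit, and scope with prefix-based IAM policies.

```python
def secret_path(layer: str, app: str, secret_type: str, name: str) -> str:
    """Compose a hierarchical secret name from fixed segments.
    Rejecting '/' inside segments keeps the hierarchy unambiguous."""
    for part in (layer, app, secret_type, name):
        if not part or "/" in part:
            raise ValueError(f"invalid path segment: {part!r}")
    return f"{layer}/{app}/{secret_type}/{name}"

print(secret_path("platform", "site-123", "db", "primary"))
# platform/site-123/db/primary
```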
At scale, API usage carries overhead: it impacts performance, and it has cost implications. So how do we solve this? Understanding usage patterns and implementing smart caching can drastically reduce the number of direct API calls, improve overall performance, and lower your total cost.
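The caching idea reduces to a read-through cache with a TTL in front of GetSecretValue. The sketch below stubs the fetch with a counting function; the open-source Secrets Manager Agent packages this same pattern as a local sidecar.

```python
import time

class TTLSecretCache:
    """Minimal read-through cache: serve repeated reads of the same
    secret from memory for `ttl` seconds, cutting direct API calls."""
    def __init__(self, fetch, ttl=300.0):
        self._fetch = fetch  # e.g. lambda sid: sm.get_secret_value(SecretId=sid)["SecretString"]
        self._ttl = ttl
        self._entries = {}   # secret_id -> (expires_at, value)

    def get(self, secret_id):
        now = time.monotonic()
        hit = self._entries.get(secret_id)
        if hit and hit[0] > now:
            return hit[1]    # fresh cache hit: no upstream call
        value = self._fetch(secret_id)
        self._entries[secret_id] = (now + self._ttl, value)
        return value

# Demo with a stub fetcher that counts upstream calls.
calls = []
cache = TTLSecretCache(lambda sid: calls.append(sid) or f"value-of-{sid}", ttl=300)
cache.get("app/db/creds"); cache.get("app/db/creds"); cache.get("app/db/creds")
print(len(calls))  # 1 -- three reads, one upstream call
```

With thousands of pods reading the same handful of secrets, a five-minute TTL collapses per-pod reads into at most one upstream call per secret per window, which is how a platform keeps 3.2 million daily pod events down to a steady 60,000 API calls per hour.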
This journey highlighted an important reality: IAM is the backbone of AWS security, but it has its limitations. Where IAM fell short, AWS Secrets Manager filled the gap for interconnecting different AWS services. We didn't see this as a replacement but as a companion that works with IAM and helps us solve our multi-tenant requirements.
So what's next? Looking forward, we're really excited to have a collaborative partner like AWS Secrets Manager as we dive deeper into the possibilities of an agentic world. Building from the ground up, we've established an AI gateway which acts as a central interface and sits in front of all our model invocations to providers like Amazon Bedrock. This ensures we have a single observability and monitoring point for all our agentic actions.
We've standardized on a set of AI frameworks, which has enabled teams to quickly create agents and deploy them in a secure fashion. We call this the Acquia Agent Factory, and it gives teams a repeatable, robust way to focus on functionality and quickly ship agentic capabilities. From there, leveraging these core infrastructure pillars, the AI gateway and the Acquia Agent Factory, we can inject secure information stored in AWS Secrets Manager for agents to interact with, enabling us to adapt quickly and securely as new AI capabilities, functionality, and use cases arise.
Our close partnership with AWS Secrets Manager has paid off immensely, allowing us to focus on delivering innovative solutions. By integrating Secrets Manager so deeply into our infrastructure, we've streamlined operations and reduced maintenance overhead, which has translated directly into real performance gains and cost savings. Most importantly, it's made our platform secure and resilient for our customers, and to us, this is much more than a vendor relationship. AWS Secrets Manager and the team have been true collaborative partners.
Latest Innovations: EKS Add-ons and Managed External Secrets
We have been excited to work with Acquia as well. I've been part of Secrets Manager for about six years now, and I've worked with Acquia for most of that time, talking through solutions and paths we can help solve for Acquia as well as for other customers. I'll end with a few recent launches for Secrets Manager. You heard from Jake about the Secrets Store CSI provider. We launched that open-source capability a few years ago, but if you've used it, you know it takes quite a bit of work: there are challenges to building it out and manual steps to get done.
When we looked at how to make these solutions easier to use, especially with AWS services like Amazon EKS, we found that EKS add-ons provide interesting functionality. You can now launch the AWS Secrets Manager CSI provider as an EKS add-on, and it automatically installs and configures everything. All of the heavy lifting customers had to do for the CSI provider goes away; it's a simple one-click installation that we launched just last week.
As customers adopt more and more container-based deployments, like Acquia with its hundreds of thousands of pods, everyone is moving toward containerized development, and Amazon EKS is the most used service from a consumption and compute point of view. This now enables customers to use Secrets Manager behind the scenes while delivering secrets at scale through their container clusters. I was pretty excited about this launch.
One thing Jake pointed out, and we started this journey about three to four years ago, is that when you run your workloads in AWS, secret storage should be Secrets Manager by default. We used to call this eliminating any human-visible secrets within AWS. We integrated with all AWS services that manage customer secrets to take away any decision-making customers had to do about where to store secrets, how to keep them secure, and how to keep them rotated. The one-click rotation for RDS admin secrets, for example, came from that journey.
We now integrate with 55-plus services that manage customer secrets behind the scenes and enable rotation as one-click functionality, removing the need for custom Lambda functions. We want to take that further for external, non-AWS secrets. Last week we launched managed external secrets, which enables this for any third-party secret. For example, if a customer has a Salesforce secret, before this launch you would create the secret in Secrets Manager and then write a custom Lambda function to rotate that Salesforce secret on a regular cadence to meet your compliance expectations. With this launch, you can create a Salesforce secret in Secrets Manager, and with one click, behind the scenes the secret gets rotated at the source as well as in Secrets Manager.
This eliminates the heavy lifting and addresses a major customer concern: if you rotate your secret in Secrets Manager but it doesn't get rotated at the source, you have an availability risk. All of those concerns, especially for non-AWS secrets, which were not solved before this launch, are now addressed. We are really excited about it and will be doing a lightning talk with another customer in the next few days, so you should come see that as well.
Let me walk you through the details with one specific partner. Say there is an ISV that you work with, such as Salesforce. You work together to determine a specific format in which the secret gets stored in Secrets Manager. Once that format exists, you can store the secret in Secrets Manager using that workflow.
Once the secret is in Secrets Manager, the partner provides a rotation code module or function that we execute behind the scenes when you set up rotation as a configuration within AWS. Note that you can go into Secrets Manager and update the rotation schedule, making it 90 days, 30 days, or whatever your organizational policies require, and we will act according to that configuration. You don't have to build or manage any custom Lambda function.
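Configuring such a schedule can be sketched with the standard Secrets Manager `rotate_secret` API and its `RotationRules` structure, which are real API elements; how exactly the managed-external-secret type wires in the partner's rotation module is not public in this talk, so the second function below is an assumption-laden sketch, not the console workflow.

```python
def rotation_rules(days: int) -> dict:
    """Build a Secrets Manager RotationRules structure for a given
    cadence, e.g. the 30- or 90-day schedules mentioned above."""
    if days <= 0:
        raise ValueError("rotation cadence must be positive")
    return {"ScheduleExpression": f"rate({days} days)"}

def schedule_rotation(secret_id: str, days: int) -> None:
    """Sketch: apply the schedule to a managed external secret. Per the
    talk, no custom rotation Lambda is supplied for this secret type."""
    import boto3  # imported here so the pure helper above runs without it
    client = boto3.client("secretsmanager")
    client.rotate_secret(SecretId=secret_id,
                         RotationRules=rotation_rules(days),
                         RotateImmediately=False)

print(rotation_rules(90))  # {'ScheduleExpression': 'rate(90 days)'}
```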
In this workflow, the secret stored in Secrets Manager is a different type, a managed external secret for that specific partner, and it just rotates. Previously, you had to create Lambda functions for each partner; between Salesforce and other partners, you could end up with hundreds of Lambda functions. Now you have a single way of managing and rotating these secrets.
I will end with key takeaways. AWS Secrets Manager continues to improve as we listen to our customers: enabling new workflows, easy access, caching, cost optimizations, and integrations with third parties as well as AWS services. It aims to meet customer expectations across the whole secrets management lifecycle. Of course, this is a close collaboration with AWS as a whole, not just AWS Secrets Manager; we talked about Secrets Manager here, but I'm sure there are other discussions between Acquia and other AWS services as well.
The last thing I want you to take away is that managed rotation is now extended to third-party, non-AWS secrets. We have done a fairly good job for AWS secrets, and we want to extend that to third-party secrets as well. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.