🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.
Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!
Overview
📖 AWS re:Invent 2025 - Balancing Agility and Compliance feat. The Digital Agency of Japan (COP349)
In this video, AWS cloud governance experts and Japan's Digital Agency share how they balanced agility and compliance while managing 6,000+ AWS accounts for 1,700+ local governments. The Digital Agency automated account provisioning, reducing creation time from 5 days to 1 day and eliminating 260 person-hours monthly. Key governance principles include using AWS Organizations and Control Tower for multi-account environments, implementing preventive, proactive, and detective controls, and aligning security frameworks with Japanese law. The agency achieved three objectives: governance through encryption and ISMAP certification, local autonomy by restricting Digital Agency access, and scalability through automation. Technical implementations include My Number Card authentication, IAM Identity Center with two permission sets, Service Catalog's pull-type deployment model, and distributed security notifications via EventBridge. The session concludes with AWS launch updates including account transfers between organizations and AWS Billing Transfer for multi-organization cost visibility.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Balancing Agility and Compliance in Cloud Governance
So I thought we would start with something a little different, a joke. A senior civil servant was asked a question, how do you define cloud governance? They thought for a moment and responded, it's when we move our servers to the cloud, we also bring along our 500-page policy documents on our flights. It's governance in the cloud.
So we do have a lot to share with you today. We're going to be spotlighting a major public sector organization, Japan's Digital Agency. The agency serves its citizens by spearheading the digital transformation of national ministries and local governments. My name is Nivass Doray Raj. I'm part of our cloud governance team here at AWS, and we help our customers build out their foundations on AWS. We're joined by Mr. Omura, the specialist SA manager in Japan, who leads cloud operations and cloud governance, and our special guest speaker, Yamamoto-san, Chief Cloud Officer at the Digital Agency of Japan. Thank you.
Okay, here's our agenda. We're going to first talk about what does it mean to balance agility and compliance and why is it important for organizations. Then the core part of this presentation, the governance story of the Digital Agency of Japan. And lastly, we'll round it off with a few key governance launch updates.
Let's talk about customer needs in highly regulated industries. Here are a few key needs that customers in this space have, and usually the customers are in healthcare, public sector, and financial services, but it's really any customer that's dealing with sensitive data sets as well as workloads with strict security and compliance requirements. So if you look at the innovation here, we're discussing teams that are expanding on the cloud, they're building new products, and they need to take advantage of some of our latest tooling, some of the latest AI services. They want to bring the products to market as quickly as possible with generative AI and agentic AI.
It's really time for organizations to rethink existing processes and see how we can build some automation with AI. But if application teams are doing this, then the central operations teams will also have to think about how to expand the cloud environment and how to do it efficiently at scale. Organizational agility simply refers to the ability of an organization to adapt to change as well as deliver outcomes without unnecessary friction, essentially reducing roadblocks for the organization. And of course, our security and compliance teams are looking to make sure that the organization adheres to the security, compliance, regulation, policies, and standards and frameworks.
So we can actually place these needs into two separate areas, agility and compliance. Your development teams are looking for the independence to make technical decisions on their own. They also want the freedom to experiment and build products. On the other side, your central operations teams are looking to see how they can build efficiency and standardization at scale across the entire organization. And then of course, your security teams are looking to implement controls to make sure that the organization is following regulatory and compliance requirements.
So there's a tension here. It almost seems like this is moving in opposing directions.
And so the key insight here is that this isn't an either-or proposition. It's actually a balanced approach. We need both agility and compliance within an organization. You will see that guardrails replace the roadblocks. We're simply defining clear boundaries where teams can operate. Automation enables compliance, and the last one is a shared security model. Everybody is responsible for security and compliance.
So we talked a little about cloud governance at the beginning, and that was the incorrect definition, but what's the real definition? Here at AWS we actually have a formal definition for cloud governance. It's a set of rules, processes, and reports that guide your organization to follow best practices, essentially building your foundation in alignment with the business requirements of the organization.
Building Foundations: AWS Organizations and Control Tower for Multi-Account Environments
So it's great to have a definition, but where do we begin? Oftentimes this starts by building an environment on AWS. A great analogy to think of it is a house. The same design principles when you're building a house, determining where the rooms go, the access points, the doors, are very similar to how you'll be building an AWS environment.
We build our environment on AWS using two foundational services: AWS Organizations and AWS Control Tower. This diagram here is an architectural diagram of a multi-account environment. It's what AWS Control Tower deploys for you. We're highlighting it here because it follows the Well-Architected principles in building your foundations, and we'll talk about a few of them.
A landing zone simply means a secure, scalable multi-account environment. If you look at this diagram here, notice the multiple accounts. This is actually a very key principle on how we design our services. We use multiple accounts because it helps us operate efficiently, it helps us manage our teams, and at the same time it reduces risk, security risk, reducing the security blast radius.
If you look at this diagram, notice that it sets up Identity Center. One of the very first steps that a lot of our customers take is that they set up identity access to the multiple accounts. In this case, Identity Center is integrated or can be integrated with Okta or any kind of identity federation provider.
Next is the log archive account. This is based on the principle that we want to track activity that's taking place across your environment. In this case, Control Tower is centralizing CloudTrail logs across its environment, and it's also including AWS Config logs. So we're tracking the API activity as well as the resource configuration changes that are important for compliance as well as operational investigations.
Next is the audit account. You can also think of it as the security account. Here lies all of our security administrative tools, and in Control Tower's case, it sets up the AWS Config aggregator. We'll hear a little bit more about Config in a future slide, but the takeaway here is that you can receive security notifications on non-compliant resources across your environment.
Now, provisioning accounts. I want to be clear that when we create accounts, it's quite simple to create an account, but when you provision an account, there's actually a lot more steps that take place in provisioning an account. We're actually customizing the account so that it can be used by other teams. Typically what happens is customers will set up the proper networking resources, identity, security, lots of administrative tools, and in some cases they may also set up, depending on who is using those accounts, domain-specific tools.
These could be SageMaker or database tools and so on. That's what it means to provision accounts. We provide some customization frameworks to do this, but it's important to understand that sometimes provisioning accounts can get complicated and can take some time. It's different from account creation.
Lastly, there's a QR code at the edge that guides you on how to create some organizational units within your environment. You don't have to create every single organizational unit, but the principle to take away here is that you should create organizational units based on common policy groupings. So this summarizes our environment best practices. Use accounts as building blocks in the multi-account environment, automate the account provisioning and customization, track activity across your environment, and implement a strong identity foundation.
Establishing Controls: Preventive, Proactive, and Detective Governance Mechanisms
Okay, so now let's talk about landing zone governance. We talked about the landing zone, we talked about centralizing identity, access, and logging, and automating account provisioning. But there's something that's missing, something very important for security and compliance, and that's establishing controls. So there are three types of governance controls. The first is preventive controls, and here we're simply making sure that users and roles are restricted from performing actions on resources, and this applies across the CLI and console.
Then with proactive controls, we're scanning the resources before they are provisioned, and this can take place with infrastructure as code and sometimes developer pipelines. The benefit of proactive controls is that this leads to development efficiency along the same lines of shifting left principles, but we're catching the errors before they get provisioned or deployed. And then the last advantage of proactive controls is also about cost efficiency. We're preventing the resources from being deployed, so that results in lower costs.
And then the last one is detective controls. These are simply non-blocking rules, but they will evaluate certain resources and tell you what's compliant or non-compliant. So AWS Organizations has three preventive controls, two of them launched last year: Resource Control Policies and Declarative Policies. With Service Control Policies, we are preventing or restricting identities based on API actions, and I'm sure everyone's quite familiar with them. They've been around for several years, and you can apply them to all of our AWS resources, containers, EC2, SageMaker, and so on.
Now, Resource Control Policies define the maximum available permissions on a resource. So we typically see this being used for data perimeter use cases, and it's very helpful when we are able to apply what permissions can be enabled on a service at scale across a multi-account organization. And so the services that it supports today are S3, KMS, and Secrets Manager. But it's very powerful when you don't have to think about what permissions should be allotted to each and every resource in your organization.
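As a concrete illustration of the data-perimeter use case, here is a minimal Resource Control Policy sketch, expressed as a Python dict for readability: deny any S3 access that does not use TLS, no matter what identity-based policies allow. This is a common example pattern, not a policy from the session.

```python
import json

# Minimal RCP sketch (illustrative): enforce TLS for all S3 access across
# the organization. RCPs cap the permissions available on the resource side.
rcp_enforce_tls = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceTLSOnS3",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            # Deny requests made without secure transport (plain HTTP).
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(rcp_enforce_tls, indent=2))
```

Attached at the organization root or an OU, a policy like this applies to every account underneath it, which is exactly the "at scale" property described above.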
And then Declarative Policies are also very powerful here. They enforce a desired configuration for a resource, and those resources include VPC, EC2, and EBS. But again, the power here is that this is applied at scale. Now AWS Config, these are our detective controls. I think the main points I'd like you to take away from here is we can group config rules into conformance packs, and that makes it easier to deploy across the organization, maintain the config rules, as well as view them centrally in a dashboard.
So that's the benefit of using conformance packs. Config also supports aggregators. In Control Tower, you'll notice that we had set up a Config aggregator in the security account, but you can also create custom Config aggregators for your own accounts or regions. Then there are custom rules. You can build custom rules depending on your business requirements, and almost every customer will want to create custom rules. The easiest way to create custom rules today is using Kiro or Amazon Q Developer. It used to be a lot more complicated; we did have some frameworks earlier, but now it's far easier with some of our generative AI tooling.
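To make the custom-rule idea concrete, here is a minimal sketch of a Lambda-backed AWS Config custom rule, assuming an illustrative check (S3 bucket versioning enabled) rather than any specific rule the agency uses. The handler shape follows Config's invocation contract; in a deployed rule you would report results back with `put_evaluations`.

```python
import json

# Sketch of a custom AWS Config rule (Lambda-backed). The check here,
# "S3 bucket versioning must be enabled", is an illustrative assumption,
# not a rule described in the session.

def evaluate(configuration_item: dict) -> str:
    """Return a Config compliance value for one configuration item."""
    if configuration_item.get("resourceType") != "AWS::S3::Bucket":
        return "NOT_APPLICABLE"
    versioning = (
        configuration_item.get("supplementaryConfiguration", {})
        .get("BucketVersioningConfiguration", {})
        .get("status")
    )
    return "COMPLIANT" if versioning == "Enabled" else "NON_COMPLIANT"


def lambda_handler(event, context):
    # Config passes the configuration item inside invokingEvent as a JSON string.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    # A real rule would call boto3.client("config").put_evaluations(...) here.
    return {
        "ComplianceResourceType": item["resourceType"],
        "ComplianceResourceId": item["resourceId"],
        "ComplianceType": evaluate(item),
    }
```

The pure `evaluate` function can be unit tested locally, which is one reason to keep the compliance logic separate from the Lambda plumbing.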
So the question is, should we start by building custom rules? You certainly could, but we would recommend that you also take a look at our managed controls. We have a number of managed controls across our services. AWS Control Tower has over 1,000 control types as well as 17 frameworks. You can search through these common controls and understand how to apply them at scale across your multi-account environment, and that's a big benefit of using Control Tower controls. It doesn't just apply to one account. It applies to all of the accounts that are under Control Tower governance.
We also have Security Hub CSPM, and this checks to confirm if an AWS environment is following security best practices. It centralizes the findings across all of your accounts, and you can also integrate it with third-party security tools. Here we have two separate AWS services with managed controls. The question that often our customers ask is, how do we know which service is applying which control and why are these controls being used? That's why we have the Controls Catalog, which is also part of AWS Control Tower today. It lets you search and list out your controls across your environment, understand the control objectives, for example, whether we are using it for a control for data encryption. It lets you search which kinds of controls are applied to different services and so on. You don't even have to enable AWS Control Tower to search through the catalog. You can simply visit the catalog and list out the managed controls there.
Here are the controls best practices. Align your control objectives to a security framework. Apply the detective controls before the preventive controls, and this is a very important one because far too often we see that customers are impacted by applying preventive controls. While it might seem like there will be no impact, it has impacted production workloads. So the guidance here is try as much as possible to apply detective controls, understand what is non-compliant before applying preventive controls. And then, of course, the last one is continuously monitor and test your controls. Your development teams are always building using new tools. We want our controls to adapt to what your teams are doing in the organization.
Japan's Digital Agency: Governing 6,000+ Accounts While Enabling Scalability
So the question is: I talked a lot about AWS cloud governance principles and we covered some of the services, but what does this look like in a real-world scenario? How do you implement governance defined by law, and where does a large organization even begin? The Digital Agency of Japan supports 1,700+ local governments and has an environment of 6,000+ accounts. Here's a short video introducing the agency.
You can't see digital technology, but it's always there, right beside you. And there are things only digital can make possible. With digital, you can get the certificates you need without worrying about having enough time. Payments are made simpler and easier by using your My Number card. At school, everyone can learn at their own pace using personalized textbooks or digital tools. Your identity can be verified securely at events. Even when you're busy caring for your child, you can submit applications without visiting public office. Even if a patient cannot speak, their medical history and prescriptions can be shared immediately.
Anytime, anywhere, the right service for the right person at the right moment. By providing personalized support for all, we strive to build an inclusive society accessible to everyone. Digitalization for everyone in everyday life. Digital Agency.
Hi everyone, can you hear me? Good. I'm Nori Yamamoto, Chief Cloud Officer of Digital Agency in Japan. I'd like to talk about how we achieved both scalability and governance in our government cloud project in Japan. First of all, let me introduce Digital Agency itself.
Digital Agency was founded in September 2021, just after the COVID-19 worldwide shock. During COVID-19, we Japanese noticed that we could not trace who was vaccinated or not, where the vaccines were available, or in short supply, because of the paper and manual operations. At that time, one of the ministers referred to it as a digital defeat. To solve this situation, Digital Agency was founded and has been driving the digital transformation of the government over the past four years.
We achieved many things. For example, the My Number card. You saw the My Number card in the videos. My Number card is a national ID card in Japan, which is possessed by 100 million people, almost 80% of the population in Japan. Recently, some personal information like medical information or tax information, this information is integrated into the My Number card and My Number portal. So we can access digital applications by using the My Number card.
The government cloud is the baseline for this digital transformation and digital applications. Many systems start using the government cloud as their common platform for digital transformation, not only the central government, ministries and agencies, but also the local governments. We actually have 1,700 local governments and quasi-public organizations. Now, we have more than 6,000 AWS accounts, and 300 to 400 accounts are created every month recently. It's going very fast.
To create 300 AWS accounts in one month, account provisioning must be automated. We developed that kind of automation; not me exactly, but Sato-san and other team members developed it. Before the automation, users were queued for account creation and had to wait five working days for completion. After the automation, users can start using the cloud the next day. The reduction in our administrative tasks has been very significant.
Before the automation, creating accounts took 30 minutes, and some preparatory tasks related to several accounts took one hour or so. Creating 300 AWS accounts in one month used to take 260 person hours, but after the automation we reduced it to zero. That's the result of our scalability mechanism we developed.
Before explaining our scalable mechanisms, we government officials must emphasize that governance is also very important. For the local governments, their independence, local autonomy, is also crucial. Governance, local autonomy, and scalability pull in contradictory directions, I think. The government cloud must be a balanced platform that resolves the conflicts among governance, local autonomy, and scalability, enabling all of them to be achieved. So again: governance, local autonomy, and scalability are our Government Cloud objectives.
Achieving Three Objectives: Governance, Local Autonomy, and Agility in Government Cloud
I'll explain the three objectives and how we achieved them by balancing all of them. But before that, let me explain the system overview of the government cloud. Users can access their own cloud environment directly. The management functions and logging functions are set up alongside the users' cloud environments, using cloud services like Control Tower or Security Hub, as Nivass mentioned. Then there is GCAS, Government Cloud Assistant Services, a web service we developed: the so-called vending machine, along with a visualization dashboard and payment management systems. Those are developed outside the management functions of the government cloud and connected loosely.
To achieve the agility and scalability, we have to avoid creating the bottlenecks like wrapping API or multi-cloud integrated management systems on top of the clouds. Instead, we enable users to leverage the cloud technology directly. The users can use the government clouds just as they would the public cloud.
Then the first objective, governance. I'll explain two aspects of governance: one is the institutional aspect, and the other is the technical aspect. First, the institutional aspect. Data are encrypted and handled under a direct contract between the Japanese government and the CSPs. The direct contract is very important for claiming sovereign immunity, and it is governed by Japanese law.
Storing data domestically is very important for ensuring that laws take effect directly. To ensure that data is stored domestically and handled appropriately, the audit and the Japanese cloud certification called ISMAP are very important. To summarize: data are managed domestically under a direct contract based on Japanese law, and we can ensure they are operated appropriately through audit and certification. That's how we achieved the institutional aspect of governance.
And second, the technical aspects of the governance. Data are encrypted by the user's own encryption key. The encryption key is stored in HSM, Hardware Security Module, which is certified as the FIPS 140. The encryption keys are authorized to be managed by the key owner in the key management services.
To access the encryption key, the key owner is strictly identified by My Number Card and is strongly authenticated by MFA devices. Nobody can access the data directly bypassing encryption, and technically, the data is fully controlled by the key owner only. Therefore, the data are governed both institutionally through contract and law and technically through encryption and strict key management. This kind of governance approach doesn't impact the scaling because there are no bottlenecks for the user applications except encryption. The users just encrypt their data.
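The "only the key owner controls the data" idea can be sketched as a KMS key policy. The account ID and role name below are placeholders, and a real key policy usually also needs careful handling of the account root principal so the key never becomes unmanageable; this sketch only shows the owner-only principle described above.

```python
import json

# Illustrative KMS key policy: only the key owner's role may use or
# administer the key, so data cannot be read by bypassing encryption.
KEY_OWNER_ROLE = "arn:aws:iam::111122223333:role/KeyOwnerRole"  # placeholder

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowKeyOwnerUseOnly",
            "Effect": "Allow",
            "Principal": {"AWS": KEY_OWNER_ROLE},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
        {
            "Sid": "AllowKeyOwnerAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": KEY_OWNER_ROLE},
            "Action": ["kms:DescribeKey", "kms:EnableKeyRotation", "kms:PutKeyPolicy"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(key_policy, indent=2))
```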
The second objective is local autonomy. As I mentioned before, the local governments emphasize local autonomy and their independence. Only predefined automated processes can access the environment, and even Digital Agency employees must not have access to the environment of the local government. When Digital Agency employees access management accounts and management functions, predefined procedures are required. Those kinds of procedures and operations are reviewed by the auditor every year. The users' data are encrypted, and even the Digital Agency employees cannot access the data directly bypassing encryption. The users can control the data in an independent environment from the CSP and Digital Agency. This kind of local autonomy approach doesn't impact the scaling either.
Scalability Through Automation and Multiple Cloud Strategy
The third objective is agility and scalability. As I mentioned before, agility and scalability are almost realized by cloud technology like AWS Control Tower. However, it's still necessary to map the real structure to AWS Organizations because the real structure, the organizational structure and power structure which defines who is responsible for what, differs from one organization to another. The AWS Organizations service is not enough to represent the real organizational structure. Rather than that, I think AWS Organizations must be designed according to AWS best practices, like Service Control Policies and so on. We need to develop the mapping functions between the real structure and AWS Organizations, and we have to develop the management system of mapping information.
We developed automated processes for provisioning, which became our scalability mechanisms. As I mentioned, this led to reduced waiting time and reduced tasks. We delivered these three objectives across the five CSPs we contracted. A multiple cloud strategy is very important, I think, and we defined some rules as part of it. The first is one CSP for one system: do not split a single cohesive system across multiple clouds. Even if users want to build a redundant system, they can do so within one CSP by using multiple regions and multiple availability zones. Splitting one system across multiple clouds leads to cost increases and complex operations, I think.
The second rule is similar to the first: integrated multi-cloud management systems are unnecessary. I think one operational organization, one operational team, should deal with one CSP.
This is because when a team has to deal with two clouds, they have to acquire double the skills and build up double the knowledge, which leads to cost increases and complex operations. The third rule is data portability. Data portability is also important, of course, and the data and programs related to core applications should be exportable. Because data and program portability matter, container technology, rather than function-as-a-service offerings like Lambda, would be preferred, especially for core applications.
Finally, we are planning additional services around the existing ones to support total cloud usage, from first registration through operations. For example, we will release Git services, which we call GCAS DevStack, in April 2026. Regarding AI, we have prepared AI environments where users can utilize AI services. Japan has announced a government policy to actively utilize AI and will prepare for government AI. The AI services in Government Cloud will be the platform and infrastructure for government AI, and we are continuously contributing to it. That is how we achieved the three objectives of Government Cloud, achieving both scalability and governance in our Government Cloud project. I will now introduce Omura-san, who will explain the technical details in more depth. Omura-san, please.
Technical Deep Dive: Implementing Best Practices for Identity, Provisioning, and Monitoring
Thank you very much, Yamamoto-san, for sharing your valuable experience with Government Cloud in Japan. Hi, I'm Yukitaka Omura, Manager of Specialist Solution Architects in Japan. I worked with the Digital Agency to build agility and governance into Government Cloud. As we discussed in the Digital Agency's case, a key point of their success was the implementation of best practices for cloud governance. AWS has already shared several best practices, and I'll pick four of them and dive deep into each. Let's learn the lessons from the Digital Agency.
The first best practice is to align control objectives to a security framework. How did we achieve this in Digital Agency? For governance, as Yamamoto-san explained, we start with the law and finally it's implemented at the end. Digital Agency aligns with a principle-based approach, so high-level documents like laws and regulations define the guidelines, and each organization and system needs to determine and define their own controls following that guidance.
As you can see on top, in Japan we have a law called the Basic Act on Cybersecurity. For the regulation, the National Cybersecurity Office defined Unified Standards for Information Security Measures for Government Agencies. Digital Agency created a guideline, the Security by Design Guideline for Government Information Systems. These guidelines are not specific enough, so they reference frameworks like NIST Cybersecurity Framework and control catalogs like NIST Special Publication 800-53. With this framework and control catalog, they mapped their guidelines to implementation.
And here is another control objective: ensure proper security measures without disrupting operational efficiency. To achieve this, they use two types of controls. The first is preventive controls. They are using AWS Organizations Service Control Policies, but minimally, because they don't want to limit developers' functionality. For example, preventing users from creating IAM users and access keys is a preventive control. They mainly rely on detective controls, using AWS Security Hub CSPM: for example, a detective control that checks AWS Config is enabled, and so on. To implement these, they use Security Hub CSPM standards like the CIS Benchmark and AWS Foundational Security Best Practices. Security Hub already has managed controls mapped to frameworks, so they can easily implement control objectives. This is how they created the controls.
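A minimal SCP in the spirit of the example above might look like the following sketch; the policy content is modeled on the "no IAM users or access keys" control mentioned in the talk, but it is an illustration, not the agency's actual policy document.

```python
import json

# Illustrative Service Control Policy: deny creation of IAM users and
# long-lived access keys, leaving developers otherwise unrestricted.
scp_no_iam_users = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIamUsersAndAccessKeys",
            "Effect": "Deny",
            "Action": ["iam:CreateUser", "iam:CreateAccessKey"],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp_no_iam_users, indent=2))
```

Keeping the deny list this short is the "use preventive controls minimally" stance described above: everything else is left to detective controls.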
Moving to the next topic. The second best practice is to implement a strong identity foundation. As Yamamoto-san mentioned, they're managing several thousand users across the local governments, so automation and scalability are key. The first challenge was the five days it took to create users, caused by a manual user authentication process. So they created an automated authentication system using the My Number Card, Japan's national ID card, as Yamamoto-san said. They implemented it on GCAS, the Government Cloud Assistant Service they created, building the user authentication system on top of it. With this automated system, they reduced the lead time for user creation from five days to just one hour and also reduced the Digital Agency's workload.
The second challenge is thousands of users on IAM Identity Center. There is a limit on the number of permission sets in IAM Identity Center: 3,500. To scale within that, they created just two permission sets: Administrator and Non-Administrator. Administrators are associated with the Administrator permission set. With that permission set, when an account is created, an administrator role is created in it, and with this role they can create further roles. They don't use IAM users, because the Digital Agency does not permit creating IAM users; instead, they create roles in the guest account. Non-administrator users are associated with a permission set that allows switch-role only. With it, non-administrator users can switch to other roles, like developer roles and operator roles. With this simple model, they can easily scale user management and keep it simple to manage users.
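The two-permission-set model can be sketched with a pair of policy documents. Everything concrete below is an assumption for illustration: the account ID, the `app-*` role naming convention, and the trust relationship shape are placeholders, not the agency's actual configuration.

```python
import json

ACCOUNT_ID = "111122223333"  # placeholder guest account ID

# Inline policy for the "Non-Administrator" permission set: allow only
# switching into roles the account administrator has created (assumed here
# to follow an "app-*" naming convention).
switch_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": f"arn:aws:iam::{ACCOUNT_ID}:role/app-*",
        }
    ],
}

# Trust policy on a developer/operator role inside the guest account,
# allowing principals from the same account (the Identity Center session)
# to assume it.
developer_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(switch_only_policy, indent=2))
```

The point of the sketch is the division of labor: the permission set stays generic and reusable across thousands of accounts, while per-account roles carry the actual developer and operator permissions.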
The third best practice is to automate account provisioning and customization. Yamamoto-san said they create around 370 accounts per month, so they need to automate everything.
They are using multiple cloud service providers. When a user wants to create an account, they access GCAS first. From GCAS, it kicks off the Step Functions workflow, sends the request, and stores the request to SQS. Then a Lambda function creates the account with Control Tower. This SQS is needed to limit the account creation in parallel because Control Tower limits account creation to five at once.
Then they call several APIs with Lambda functions, for example, to subscribe to Enterprise Support and so on. As you may know, Control Tower has Account Factory Customization with infrastructure as code like CloudFormation. However, infrastructure as code is suitable for defining the status, so instead of that, to define the procedure, Lambda functions are more suitable. Especially, subscribing to Enterprise Support is not supported by CloudFormation, so in such cases, Lambda functions are suitable.
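The provisioning step described above can be sketched as a Lambda-style handler that turns a queued GCAS request into parameters for Control Tower's Account Factory, which is exposed as a Service Catalog product. The incoming request field names are made up for illustration; the parameter keys follow the Account Factory product, but verify them against your own environment.

```python
import json

# Sketch: map an account request (delivered via SQS) onto Control Tower
# Account Factory provisioning parameters. Request field names are
# illustrative assumptions, not GCAS's actual schema.

def build_provisioning_parameters(request: dict) -> list:
    """Build the ProvisioningParameters list for the Account Factory product."""
    return [
        {"Key": "AccountName", "Value": request["account_name"]},
        {"Key": "AccountEmail", "Value": request["billing_email"]},
        {"Key": "ManagedOrganizationalUnit", "Value": request["target_ou"]},
        {"Key": "SSOUserEmail", "Value": request["admin_email"]},
        {"Key": "SSOUserFirstName", "Value": request["admin_first_name"]},
        {"Key": "SSOUserLastName", "Value": request["admin_last_name"]},
    ]


def lambda_handler(event, context):
    # Processing one SQS record at a time helps stay under Control Tower's
    # limit of five concurrent account operations.
    record = event["Records"][0]
    request = json.loads(record["body"])
    params = build_provisioning_parameters(request)
    # In the real workflow this would call something like:
    #   boto3.client("servicecatalog").provision_product(
    #       ProductName="AWS Control Tower Account Factory",
    #       ProvisionedProductName=request["account_name"],
    #       ProvisioningParameters=params, ...)
    return params
```

Follow-on steps such as subscribing to Enterprise Support would be separate Lambda functions in the Step Functions workflow, as the talk describes.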
GCAS also maintains its own database, because account creation involves real-world information such as the local government's name and the billing email address. By storing this data in GCAS, they avoid manual steps like writing the information down somewhere. In this way, the agency built a fully automated mechanism for account creation: the automated one-time account setup.
For resources defined by infrastructure as code that will need changes in the future, the Digital Agency uses a pull-type deployment model with Service Catalog. What is pull-type deployment? They defined a secure baseline, for example to check and remove security groups and so on. The baseline is defined with CDK, and they created Service Catalog products from it. When local government administrators want to deploy it, they pull the product and deploy it in their own account by themselves, and then build their own systems on top.
The alternative to pull-type deployment is push-style deployment with tools like CloudFormation StackSets. For the Digital Agency, however, StackSets would raise two problems. The first is that the Digital Agency is not permitted to access guest accounts, because of local autonomy. The second is that if the Digital Agency deployed the baseline via StackSets, it would need to coordinate with local government administrators on, for example, when the template would be deployed and what the current status of the secure baseline is. The pull-type model lets them deliver the secure baseline without any such coordination.
Updating the secure baseline is also very simple. The Digital Agency just updates the Service Catalog product; each local government administrator then pulls the new version of the product and deploys it themselves. There is no coordination between the Digital Agency and local government administrators. This is the pull-type deployment model.
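The two sides of this update flow can be sketched with the Service Catalog API. This is a minimal illustration under assumptions: the function names, version labels, and the idea of passing a client in are mine, not the Digital Agency's code.

```python
def publish_baseline_version(sc, product_id: str, template_url: str, version: str):
    """Digital Agency side: register a new baseline version on the shared product.
    Nothing is pushed into guest accounts."""
    return sc.create_provisioning_artifact(
        ProductId=product_id,
        Parameters={
            "Name": version,
            "Type": "CLOUD_FORMATION_TEMPLATE",
            # CDK-synthesized baseline template, published to S3 (assumed workflow).
            "Info": {"LoadTemplateFromURL": template_url},
        },
    )

def pull_baseline_update(sc, provisioned_name: str, product_id: str, version: str):
    """Local government side: pull the new version whenever it suits them."""
    return sc.update_provisioned_product(
        ProvisionedProductName=provisioned_name,
        ProductId=product_id,
        ProvisioningArtifactName=version,
    )
```

The asymmetry is the point: publishing a version touches only the product, so the Digital Agency never needs access to, or a deployment window agreed with, any guest account.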
The last best practice is continuous monitoring and testing of controls: don't set and forget. In the government cloud, local government administrators are responsible for fixing security issues, but Digital Agency administrators do not relay security issues to them one by one. So how are they notified? The secure baseline already defines EventBridge rules and AWS Chatbot, so detected security issues are sent directly to the local government administrator, who fixes them. This distributed notification system spares the Digital Agency from having to triage a massive number of alerts across thousands of accounts, and it also aligns with local autonomy.
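A per-account notification rule like the one described might look as follows. The event pattern shape follows Security Hub's EventBridge integration; the rule name, severity thresholds, and target wiring are assumptions for illustration.

```python
import json

# Matches Security Hub findings of HIGH or CRITICAL severity as they arrive.
SECURITY_FINDING_PATTERN = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {"findings": {"Severity": {"Label": ["HIGH", "CRITICAL"]}}},
}

def create_notification_rule(events_client, target_arn: str):
    """Deployed inside each guest account as part of the secure baseline,
    so findings go straight to that local government's own channel."""
    events_client.put_rule(
        Name="baseline-securityhub-findings",
        EventPattern=json.dumps(SECURITY_FINDING_PATTERN),
    )
    events_client.put_targets(
        Rule="baseline-securityhub-findings",
        # e.g. an SNS topic that AWS Chatbot subscribes to (assumed wiring).
        Targets=[{"Id": "chat-notify", "Arn": target_arn}],
    )
```

Because the rule lives in the guest account, no central system ever has to fan alerts back out.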
However, as the administrator, the Digital Agency still needs to understand the security posture of the government cloud. To do that, they pull periodic reports from Security Hub and check critical events only. Once they find a critical event, they notify the local government administrator. This is how they implement and manage security posture at scale. That concludes the AWS cloud governance best practices the Digital Agency has addressed in detail. I'll hand it back to Nivas to wrap up this presentation. Thank you.
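A periodic "critical only" pull from Security Hub could be sketched like this. The filter choices (active, unresolved, critical) are a plausible reading of the report described, not a confirmed implementation.

```python
def critical_finding_filters() -> dict:
    """Security Hub filters for the periodic report: active, new, critical findings only."""
    return {
        "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
    }

def fetch_critical_findings():
    import boto3  # imported here so the sketch stays importable without AWS dependencies
    securityhub = boto3.client("securityhub")
    pages = securityhub.get_paginator("get_findings").paginate(
        Filters=critical_finding_filters()
    )
    return [finding for page in pages for finding in page["Findings"]]
```

Run from the delegated Security Hub administrator account, this yields a cross-account view without reading anything else in the guest accounts.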
Key Takeaways and AWS Governance Launch Updates
Thank you. You heard from Mr. Yamamoto and Mr. Omura how the Digital Agency achieved a real-world implementation of cloud governance with real impact. This is just a summary of the best practices, so feel free to take a picture; we covered it all in our sessions. Now for the launch updates. We had several, and I'll point out a few key ones. The first is account transfers. Many organizations move accounts between organizations for mergers and acquisitions or other business needs. In the past, you had to make the account standalone, add credit card information, and then move it to the second organization. Now you can simply invite the account into the target organization, which makes the process much more efficient.
The second launch is AWS Billing Transfer. While we still recommend using a single organization, there are cases where customers must use multiple organizations due to regulatory compliance or business requirements, for example partners supporting multiple customers. In the past, administrators had to log into each management account of the different organizations to review cost analysis and invoicing. Now you can centralize that: you get a single view of spend across multiple organizations, which helps with invoice collection, payment processing, and cost analysis.
Something important to note here is that even though the administrator can review costs across different organizations, each organization is still responsible for its own security. Billing Transfer only sends cost data over and centralizes it for visibility; security management remains separate.
The third launch concerns controls. We talked about the Controls Catalog and AWS Control Tower; now you can have a controls-only experience with AWS Control Tower. You don't necessarily need the landing zone components, which will be very helpful for customers that have already built a custom landing zone. They can simply take advantage of the controls in Control Tower, including Service Control Policies, Resource Control Policies, and AWS Config rules, and apply them at scale.
As next steps, I'm just sharing some resources. I did not share all of our launches, but this QR code summarizes a list of our cloud governance and other launches. The key takeaway is that cloud governance gives you the right foundation to experiment and execute. We saw this clearly with the governance story of the Digital Agency of Japan. Thank you very much for the engagement and support, and special thanks to Mr. Yamamoto. Thank you.
; This article is entirely auto-generated using Amazon Bedrock.