🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - How Cathay Transformed DevSecOps with AI: A 75% Faster Security Story (SEC202)
In this video, Mark Arel from AWS Professional Services, Naresh Sharma, and Tony Leung from Cathay share their journey of transforming from DevOps to DevSecOps using Agentic AI. They discuss how Cathay faced challenges with 78% false positives in vulnerability scans, wasting thousands of hours across 1,300 applications and 2,500 microservices. The team implemented shift-left security practices, established a Security Champions program with 58 certified champions, and developed an Agentic AI solution that reduced false positive review time from 30 to 13 days—a 60% improvement. They achieved 120% of their remediation target, clearing critical and high vulnerabilities across 50% of applications, and reduced detection time by 75% while increasing security awareness by 70% among developers.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Cathay's Journey from DevOps to DevSecOps and the Challenge of False Positives
Good afternoon everyone, and thank you for joining us today. I am Mark Arel. I run the AWS Professional Services practice in Hong Kong, and I'm really excited to be sharing a story with one of my favorite customers, Cathay. We've gone on an incredible journey to transition from a DevOps to a DevSecOps practice using Agentic AI. Matt shared some insights about security agents in his presentation this morning, so I'm really excited to be here with Naresh Sharma and Tony Leung to walk through this experience. We're going to talk about the challenges that were faced, the solution that was implemented, the journey we took to achieve benefits and results, and most importantly, the lessons learned and what's next.
With that, I'd like to introduce Naresh. Thank you, Mark. Hello, everyone. I want to share that close to 78% of vulnerabilities identified by our scanning tools were false positives. This led to thousands of hours wasted for many teams and created issues with fixing and identifying actual vulnerabilities. It was really slowing down our innovation. My name is Naresh Sharma, and I'm here from Cathay to share with you an amazing journey we had by overcoming these challenges.
Before I go into those details, I would like to share with you that this is Nikki, one of our very first aircraft, from 1946. It has been proudly displayed outside Cathay headquarters in Hong Kong. Interestingly, its sister aircraft, Betsy, is actually at the Hong Kong Science Museum. Cathay is more than just an airline. For many of us, it holds the memory of our first flight. It is also a cultural connection for millions of people in Hong Kong, Asia, and worldwide.
As we reach 80 years of service, we are now serving close to 100 destinations worldwide with 230 aircraft and counting. We're also very proud to be a founding member of the Oneworld Alliance. This year we were judged among the top three airlines in the world, with accolades including best economy class and best in-flight entertainment. For those who don't know, Cathay is a very dynamic and complex organization. On a daily basis, we fly 70,000 passengers and help them reach their destinations safely, comfortably, and happily.
Behind all these intricate and large operations are thousands of our employees who are working day in and day out to ensure great customer service and work with customers at every touchpoint. While they are doing these activities, IT is actually at the center of everything. With technology, we are facilitating and supporting our thousands of employees to ensure that they can deliver the services they want to give to our customers and at the same time have seamless processes and meet their goals and vision.
IT and digital at Cathay has been at the forefront in supporting the vast technologies that we have. We support close to 1,300 applications. We also have close to 2,500 microservices.
We also have close to 1,000 developers spread across 7 offshore development centers. And that is just the applications; behind them sit systems with downstream and upstream, east-west and north-south connections, so there is a lot going on there. But as we have progressively innovated and uplifted our applications over the years, there is always technical debt, and there are certain challenges that will impact your go-to-market.
Critical Challenges: Time to Market, False Positives, and the Cost of Late-Stage Security Testing
Challenges such as time to market. I'm sure you heard this term used a lot in the 80s and 90s, but of late in IT it has been used in many senses. The reason is that time to market is a differentiator in the services you give to customers, setting you apart from your competitors. But time to market is just a function of all the challenges shown here: false positives, project timeline impacts, unclear roles and responsibilities, and escalating costs. If you have multiple tools, of course, not every tool will work or integrate with the others. On top of that is the agility of the business. So time to market is impacted by all these factors.
Before we move to the next few slides, I would like to share that this evaluation is very important because first we need to understand where we are standing. So we have to do certain assessments in the organization to establish where we stand and how we want to progress from there. You may have a lot of strengths and some areas of improvement. There are also promoters of the systems and technologies, and there are detractors. Doing these kinds of assessments identifies the areas you want to address over the next 6, 12, and 18 months and so forth.
While the degree of these parameters may go up and down depending on the maturity of the processes in the organization, there is one more important thing that helps you get the right information in such cases, and that is the culture of the organization. This self-assessment should be done without any concern about what people will think of our department or what other departments will think. It is very important that such assessments are done wholeheartedly and with accurate information. Only then will you have the right steps to take to work on them.
I'm pretty sure many of you will resonate with the next few slides I'll be sharing. The degree of impact may vary, but you have definitely seen or heard these things in your organization a lot. False positives: as I mentioned earlier, 78% of the vulnerabilities were actually false positives. What happens when you have false positive vulnerabilities? The team still needs to validate them. It's not easy. The only reason you can conclude that something is a false positive is that somebody has already done the validation and determined it to be one.
It is a lot of manual, resource-intensive work: lacking automation capabilities means someone has to validate the false positives and also fix the ones that are true positives. The findings we were getting were very raw, and we were spending excessive hours assessing the results. Roles and responsibilities were another issue. While every subunit within technology was aware of what it had to do, working together was quite a challenge without the right roles, responsibilities, and accountability for remediation efforts. Otherwise, there is a lot of finger pointing.
Time to market—I think we already covered that. I won't go much deeper into that, but yes, it is a factor.
Another important aspect is the need to identify, consolidate, and prioritize vulnerabilities. There must be a process in place to identify and understand what vulnerabilities exist and consolidate them so they can be fixed in the right timeframe. However, this was one of the challenges we were facing. Multiple tools exist, but without the right processes and the right information coming out of them, they just sit there as white elephants, merely producing data. So it was critical for us to identify these things and move to the next chapter.
When considering where we were in terms of the software development life cycle, it is quite important from a risk perspective to mitigate those risks. However, how do we mitigate them? How do we identify what is a risk when there are false positives and other situations involved? It is very difficult to mitigate the risk. We need to ensure security hygiene. When security comes into play at the later stage of development, there are only two outcomes. If you find many vulnerabilities that are true positives, you are either going to delay your initiative or you are going to accept those risks and move forward. In both cases, this is not the right approach. Having a design with a security team and having architects think about design from the conception stage is very important.
Finally, if the security team and the developers are working in different zones where developers are building at one end and then the output comes through and we are doing security testing, it will definitely create a lot of overhead for the organization. So we had to see how to work in tandem. This is typical of DevOps and Agile. Many companies say they have agility in their DevOps with all these phases and ensure that coding practices are in place. But what happens when we are going to launch? That is when the security team comes into the picture and says, "Wait, let me do my testing." When they are doing the test, we identify tons of proven vulnerabilities. We all know what happens then, right? There is a lot of tug of war. The business wants to go to market as soon as possible and launch their products quickly, but we have to take a risk-based decision. Sometimes it can be very risky when you are launching a product which has security deficiencies.
When it comes to security testing, any vulnerability identified after development and UAT and other areas have been completed will take at least 18 times the cost to fix. When we do the launch, if vulnerabilities are identified at launch, it will be close to 64 times the cost of fixing those vulnerabilities. As you can notice, the focus should be on how we change the cultural mindset of the organization to have security as early as possible in the cycle and go forward from there. So with that, I will hand over to my colleague Tony Leung, who will take you further on this journey.
Shifting Left: Transforming to DevSecOps with Embedded Security in the SDLC
Thank you, Naresh, and good afternoon everyone. After Naresh shared how impactful waterfall security scanning was, you may have some idea of our pain points. We need to migrate from the existing waterfall security testing to something that can help us detect and fix errors at an earlier stage. So we have to shift our testing to the left of the timeline, which means testing earlier. Throughout the whole cycle we have to test everything as early as possible to improve quality and efficiency and to lower cost and risk.
I think you know what I am going to say. Yes, we have to transform from DevOps to DevSecOps. There are some key changes we have to embrace. First of all, we have to shift left, which means embedding all the security compliance and practices into our SDLC, the software development life cycle. Not just at the beginning or the end; we have to do it continuously. That way we can detect errors at an early stage and fix them as early as possible. You might say that if we shift all the security testing left, some people will have concerns and questions about how we accomplish this.
For example, the testing team is concerned about resources because they need to do more testing continuously. Also, what about the application team? They also have concerns about whether they have the skill set to tackle those security issues or how to prioritize security against other application requirements. Which one should they do first? These questions are actually what we are addressing in these key changes. I will cover them in the second and third points.
Let me come to the second point and talk about the process. As I mentioned, shift left means we need to make sure that all the testing and scanning is triggered and executed automatically. Luckily, we have a very robust DevOps CI/CD pipeline, and we can easily orchestrate all that testing and scanning into it. Also, most of our applications already use that CI/CD pipeline, which makes this the most natural and easiest way for us to integrate. We can also maintain compliance by enabling blocking mode. Blocking mode is a guardrail for our applications that safeguards our deployments. If any high or critical vulnerability findings remain, the deployment stops and the vulnerable release does not reach our customers.
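To make blocking mode concrete, here is a minimal sketch of what such a pipeline gate could look like, assuming the scanner exports its findings as a JSON list with severity and suppression fields. The file format, field names, and severity labels are illustrative assumptions, not the actual pipeline or tooling described in the talk.

```python
# Hypothetical sketch of a "blocking mode" gate step in a CI/CD pipeline.
# The scan-result format and field names are assumptions for illustration.
import json
import sys

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

def gate(scan_results_path: str) -> int:
    """Return a non-zero exit code if any unsuppressed critical/high finding remains."""
    with open(scan_results_path) as f:
        findings = json.load(f)  # assumed: list of {"id", "severity", "suppressed"} dicts

    blocking = [
        finding for finding in findings
        if finding.get("severity", "").upper() in BLOCKING_SEVERITIES
        and not finding.get("suppressed", False)  # approved false-positive exemptions pass
    ]

    if blocking:
        print(f"Blocking deployment: {len(blocking)} critical/high findings remain.")
        for finding in blocking:
            print(f"  - {finding['id']} ({finding['severity']})")
        return 1  # non-zero exit stops the pipeline stage, so the release never ships

    print("Security gate passed: no unsuppressed critical or high findings.")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```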
On the people side, as Naresh mentioned, waterfall security testing always comes with some last-minute surprises. The application teams and business units did not understand why our security team was always blocking them and preventing a release from reaching the customer. Actually, our security team is doing a good job protecting our company from revenue loss or reputation loss due to security incidents. There is a misunderstanding that security is an obstacle to innovation and value creation. We need a culture change to shift our people's mindset, especially in the development community.
Let me ask a simple question to all of you. Imagine you could be a developer for one day. There are two options of work you can do. One is new application development. The second is fixing aging vulnerabilities. Which one would you choose? For those who want to fix aging vulnerabilities, please raise your hand. Thank you. Only a few of us would want to do the vulnerability work. I assume the rest of you would want to do application development. This is expected and normal human behavior. Everyone wants to do new things rather than fixing old things, right?
Still on the people side, when it comes to visibility and observability, we want to improve the application security posture. At a minimum, we want the application owner to know what the current application security posture is. A very insightful dashboard is also important so the application team has clear targets to aim for day in and day out. They no longer have last-minute surprises because they all know their application security posture now.
With the help and blessing from our senior management and support from AWS ProServe, we have successfully achieved all these key changes. Now we can say that we have transformed from DevOps to DevSecOps. We can also have earlier detection of all those vulnerabilities. You can see that from this different lifecycle, we have different activities embedded in our cycle. For example, we have SAST for source code scanning. We have IAST for interactive testing, SCA for third-party library scanning, and also we have a dashboard to review all those outstanding vulnerabilities.
The Remaining Bottleneck: Manual False Positive Review and the Agentic AI Solution
Now, compared with the previous traditional way, the cost of fixing vulnerabilities is significantly reduced. We can say that we have successfully transformed to DevSecOps and can detect errors earlier. However, do you think everyone was happy? Do you think we were really able to achieve time to market, better quality, better efficiency, lower cost, and lower risk? I think the answer is no. We were just one step away from success. Let me share what new challenge was holding us back. For various reasons, the security scanning tools produce a lot of noise, especially false positives. This is one of the things that requires a lot of human effort to review and follow up on.
Let me explain a little bit more here. Whenever there is a potential false positive, we have to ask the security expert to review it and then evaluate whether this is a true positive or false positive. Once we confirm it is a false positive, we ask the tools owner to do the suppression.
This suppression is performed either by updating the tool's configuration or by going back to the tool vendor to request a fix on their side. However, the process is complicated by the fact that a suppression applies to only one application and one vulnerability. The same vulnerability can occur in other applications, but there is no way to synchronize the suppression across applications for the same tool. This means we have to suppress the false positive one by one for each application. Additionally, there is no synchronization across different tools, so we have to repeat the entire process for each tool. This creates a significant amount of human effort for false positive review.
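A small sketch may help show why this duplicates work. Assuming, purely for illustration, that each suppression record is scoped to one tool and one application (both the data model and the application names here are hypothetical), the same rule surfacing anywhere else always comes back as a fresh finding:

```python
# Illustration of why false-positive suppression had to be repeated: each
# suppression is scoped to a single tool and a single application, so the same
# pattern recurring elsewhere is not covered. The data model is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Suppression:
    tool: str          # e.g. "sast", "sca", "iast"
    application: str   # the suppression applies to this one application only
    rule_id: str       # the finding/rule marked as a false positive

existing = {
    Suppression("sast", "booking-web", "SQLI-001"),
}

def is_suppressed(tool: str, application: str, rule_id: str) -> bool:
    return Suppression(tool, application, rule_id) in existing

# The same rule in another application, or flagged by another tool, still
# surfaces as a fresh finding and needs its own review and suppression.
print(is_suppressed("sast", "booking-web", "SQLI-001"))  # True
print(is_suppressed("sast", "loyalty-api", "SQLI-001"))  # False: re-review needed
print(is_suppressed("sca", "booking-web", "SQLI-001"))   # False: different tool
```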
In our case, we need to review approximately 500 exemptions every month. We also need to spend about 48 human days per month reviewing those vulnerabilities and suppression configurations. This represents a huge effort for us, and we recognize that it is a burden on our development life cycle. We knew we had to change our approach, otherwise we could no longer achieve our time-to-market objectives in the near future.
We are excited to announce that we have developed an Agentic AI to help us rebuild our false positive detection and suppression processes. I will go over the high-level diagram of this Agentic AI, and Mark will go over the details later in the presentation. First, our developers will go to our Application Security Portal and interact with our AI agents. They can ask questions about any security recommendations, the existing application posture, and request false positive exemptions. The AI agents will learn from our knowledge base, which includes all approved exemptions, all false positive patterns, our company policies, and anti-patterns. Once the decision is made, the Agentic AI will interact with different security tools to perform the actual suppression and configuration to remove those false alarms.
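The flow described above can be sketched roughly as follows. Every name here is a hypothetical stand-in for the Application Security Portal, the knowledge base of approved exemptions, false-positive patterns, policies and anti-patterns, and the per-tool suppression integrations; the decision step is shown as a trivial placeholder, whereas in the actual solution it is LLM-backed (a Bedrock sketch appears later in the article).

```python
# Rough sketch of the exemption-request flow described above. Every function
# and name is a hypothetical placeholder, not the actual implementation.
from typing import Callable

def lookup_knowledge_base(finding: dict) -> dict:
    """Assumed step: retrieve approved exemptions, false-positive patterns,
    company policies, and anti-patterns relevant to this finding."""
    return {"similar_exemptions": [], "matching_patterns": [], "policies": []}

def agent_decide(finding: dict, context: dict) -> str:
    """Placeholder decision. In the described solution this step is LLM-backed;
    a trivial heuristic stands in here so the sketch runs end to end."""
    if context["similar_exemptions"] or context["matching_patterns"]:
        return "false_positive"
    return "needs_more_info"

# One suppression adapter per security tool (SAST, SCA, IAST, ...), registered
# by the tool integrations.
SUPPRESSORS: dict[str, Callable[[dict], None]] = {}

def handle_exemption_request(finding: dict) -> str:
    """A developer requests an exemption via the portal; the agent decides and,
    for a confirmed false positive, applies the suppression in every tool."""
    context = lookup_knowledge_base(finding)
    verdict = agent_decide(finding, context)
    if verdict == "false_positive":
        for tool, suppress in SUPPRESSORS.items():
            suppress(finding)  # remove the false alarm in each scanner
    return verdict
```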
With the help of this Agentic AI, we successfully reduced our average time for reviewing false positives from 30 days to 13 days, which represents approximately a 60 percent improvement. At this point, you may want to know about the journey of our DevSecOps transformation and the development of our Agentic AI. Let me hand over to Mark to discuss this. Thank you, Tony. This was indeed a journey. Going from 78 percent of vulnerabilities being false positives, and all the extensive research that required, was truly a significant undertaking. We have completed many milestones along the way over the last two to three years.
A Multi-Year Transformation Journey: From Assessment to Remediation and AI Implementation
The first and biggest thing, as Naresh had mentioned, is our mentality to start small, iterate, and cycle through. Cathay had been on AWS for over seven years when we first got engaged. The request was that they were continuing to grow and expanding into other areas, and they wanted to improve their game from a cloud operations and engagement perspective. We came in and conducted a Cloud Center of Excellence review, which led to a cloud operations assessment, and then into a DevOps assessment. We put in effort and completed this at the end of 2022. With that foundation set up, we kicked off in 2023. The first part of the engagement was to do a deeper dive following those assessments, getting hands-on with the application teams to see what was really happening. We got engaged and kicked that off in the third quarter, and we focused on three applications that had the highest number of critical and high vulnerabilities.
There was some skepticism within Cathay, with people saying it was impossible to get rid of all these critical and high vulnerabilities. There were a lot of exceptions, and it was much easier to get an exception than to fix what was wrong. Part of the challenge was that the application teams did not know how to fix certain vulnerabilities. They were developers who wanted to build cool stuff, but they were not security engineers and did not necessarily know how to remediate some of those vulnerabilities. We worked with the teams, and for those three applications, we cleared all the critical and high vulnerabilities to prove it was possible.
This was important for the application teams to see that remediation was possible. In 2024, that's when the real heavy lifting started. The goal was initially to consider remediating the entire set of applications, but we rationalized that down and decided to target 50% to get started. In conjunction with that remediation of the vulnerabilities, we realized from that first engagement that there was a gap in the knowledge. The teams best positioned to fix those vulnerabilities are within the application teams, but they didn't have the skills to do that. This wasn't because they were bad developers; it's just that security development wasn't their forte, and they didn't know how to do certain things. So we launched a Security Champions program, which I'll talk about in more detail. With that program, I was actually surprised at the end of last year when we hit 120% of the target we had set out. More than 50% of the applications were remediated to get rid of the critical and high vulnerabilities. It was a journey, and in that process, we managed to get 58 Security Champions certified.
This year became about how we get more efficient at doing that. Going through 2024, as Naresh and Tony pointed out, there was a lot of manual labor going into this. You saw the metrics: around 45 days a month just to sift through the vulnerability exceptions and identify which were false positives. That's not sustainable. So we did a proof of concept and also launched our Level 2 Security Champions program. We're really excited that in the third quarter, we launched the first MVP of the DevSecOps Agentic AI capability, which substantially reduced the amount of manual effort required.
When you look at the progress, the light blue circle represents where that initial assessment was. We set out the green line, which was where we wanted to try to get to over time. Right now we're in that orange line. There's definitely been improvements. Having that baseline in the beginning is critical because you want to be able to show that you are making progress against those goals. When you're talking to your leadership, you can say, "This is where we were, here's where we're going, and this is how far we've made it along that journey."
There was a lot going on over the last three years. In Phase One, where we were predominantly focused on those first three applications, we set out basically three swim lanes of activities. One was about looking at the security design—what are the patterns, anti-patterns, and code samples that teams could readily apply to their solutions. The next was how we enhance and supplement the existing DevOps capability to make it more DevSecOps oriented. The last was looking at security test automation, because there wasn't that much security testing in an automated fashion. We dove deep and ran these in parallel.
While we were looking at those three core swim lanes, we dove into those three applications to start working with the application teams to figure out how to make this happen. It would have been really easy to go in as ProServe, fix everything, hand it back, and walk away. Then we'd be getting another call a year or two later saying we have a lot of vulnerabilities again and asking us to come back and help us do it all over again. So we wanted to really change that approach. In Phase Two, where it was about the remediation of a bigger set in the portfolio, it was really a parallel effort. While the application teams were working on remediating the vulnerabilities, we also were running the Security Champions program to help raise the awareness, skills, and capabilities of what the team needed to do to fix those vulnerabilities.
I would refer to it as book smarts and street smarts. The book smarts was the Security Champions program where we were running training sessions, and the street smarts was the teams actually getting hands-on and fixing the vulnerabilities through that process. This year, the remainder of the portfolio is in focus for getting remediated. It was really about the proof of concept we did with Agentic AI to help reduce the amount of human effort required in the process, particularly in classification of false positives and the exception and review process surrounding that.
Building Security Champions: A Three-Level Program to Embed Security Expertise Across Teams
I've mentioned the Security Champions program a few times, so let me explain who the security champions are and what role they play. There are really three levels. At the first level, we worked with Cathay and went to the application teams. We said we want a named individual from each application to join the Security Champions program for Level 1. That person would be the one in the application team keeping an eye on the security elements of what they were building. Previously, they did not have that responsibility. The application team would look at security as another team's problem, and the security team would say, well, we have these vulnerabilities and somebody else has to fix them. Level 1 was really to get people aligned to that and support that shift-left mentality. It was intended to drive the fixes and address the security items as early as possible in the lifecycle.
With Level 2, that is the next level up. The focus there was on more senior people who have broader impact within their given areas. This is really a secondary decision layer. What you don't necessarily want is a Level 1 application champion saying everything is a false positive. Trust but verify. So Level 2 was really more of a verification process to ensure that what was being done was either truly a false positive or did require remediation. They were also supporting Level 1 in the decision-making process. That really helped shift security capabilities down into the teams. At Level 3, you don't need an army of people. This is your one or two people; within Amazon, we would call them a distinguished engineer. It's the person you go to when you have a security problem. I'm super lucky because on my team in Hong Kong, I have the OWASP chapter lead as one of my employees. So when it comes to the security field in Hong Kong, I have one of the best people on my team, and I think Cathay appreciates having that type of knowledge and expertise. They help support Level 2, and we were also helping with Level 1, but the role is also about taking a forward view on what else is coming in the security space. There are new tools, like the security agents we all heard about this morning. There are constantly new tools and capabilities coming out, as well as new threats surfacing all the time. That's where the Level 3 champion comes in. It's not so much about internal matters as about what is happening in the market, bringing that knowledge in to make sure it is best addressed internally.
In that program, we had classroom type of training, and we actually ran a test at the end of the training to see how many people actually comprehended and retained the material. It's not an AWS certification, but it was specifically tailored for Cathay.
We had over 86 people participating in the program across 15 sessions. We supplemented the training with multiple components. The first was the book smarts, and the second was the street smarts, as I mentioned. We included hands-on labs, and in some cases, we didn't do a traditional lab. Instead, we went into one of the applications that had a problem and used that as a real-time example to fix it.
So we balanced the approach. It wasn't just a bunch of presentations. We got people with hands-on keyboard access to make it happen. AWS helped at the Level 2 and Level 3 champion modes, as well as helping enable that skills transfer across the team. One of the big things that we noticed was roles and responsibilities. As you look at moving from a DevOps practice to DevSecOps, there's something new in there. Somebody has to do that security part in DevSecOps. Is it the pipeline team? Is it the security team? Is it the application team? Who's going to do that?
Unfortunately, it's not one person. There are different capabilities and different elements that are addressed by different parts of the organization, and all of that has to be orchestrated together. Just for reference, this is not the exhaustive RACI matrix. This is a subset just as an example. The detailed RACI that we had included 220 activities that were itemized out, so it got pretty comprehensive on what elements were covered by which teams. DevSecOps agents represent the happy part of the story. How do we get the heavy lifting off of the people and let the computer do that undifferentiated heavy lifting, allowing people to do more value-add work?
DevSecOps Agentic AI Architecture: Automating Vulnerability Assessment with AWS Bedrock
So the agents come into play. We get a request in, and it goes into a matching node. The capability is pretty wide, ranging from a general inquiry to determining whether something is a false positive. It was a pretty complex system that we helped build out. It relied on a lot of work that we did over the preceding year and a half, where we had developed patterns, anti-patterns, and code samples. So we had a pretty good knowledge base to build off of.
That fed into the DevSecOps tooling, and a big part of it was having large language models help us make sense of it all. It's not a simple case of "I got a SQL injection vulnerability. Is this a false positive or not? Yes or no." There are a lot of things that go into actually determining that. What people had been doing manually, we had to build into the agents so they could factor in the same considerations: Has this been reported before? Are there known remediations? Are there compensating controls in place?
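As a rough illustration of that assessment step, here is a minimal sketch of an LLM call using the Amazon Bedrock Converse API via boto3, packaging the same factors a human reviewer would weigh. The model choice, prompt wording, and JSON response contract are assumptions for illustration, not the actual agent implementation.

```python
# Minimal sketch of an LLM-assisted false-positive assessment using the Amazon
# Bedrock Converse API via boto3. Model ID, prompt, and response contract are
# assumptions for illustration only.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

SYSTEM_PROMPT = (
    "You are a security review assistant. Given a vulnerability finding and its "
    "context, reply in JSON with fields: verdict ('false_positive', "
    "'true_positive', or 'needs_more_info'), confidence (0-100), and rationale."
)

def assess_finding(finding: dict, context: dict) -> dict:
    # The context bundles the factors a human reviewer would weigh: whether the
    # finding was reported before, known remediations, and compensating controls.
    user_message = json.dumps({"finding": finding, "context": context})
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model choice
        system=[{"text": SYSTEM_PROMPT}],
        messages=[{"role": "user", "content": [{"text": user_message}]}],
        inferenceConfig={"temperature": 0.0},
    )
    text = response["output"]["message"]["content"][0]["text"]
    return json.loads(text)  # e.g. {"verdict": "false_positive", "confidence": 95, ...}
```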
Also, how do you handle a general inquiry? For example, if someone says "I don't know exactly what this vulnerability is," we created a knowledge base to explain what that vulnerability is and what it might be about. As part of this, Tony mentioned there was a hard blocking mechanism put in place. Once an application had cleared all of its critical and high vulnerabilities, there was a hard block. You could not go out with a new release that contained a critical or high vulnerability. It's a great way to make sure that you don't get yourself back into a position where you had too many vulnerabilities.
The hard block was integrated with ServiceNow, so the agents are smart enough that if something has been determined to be a false positive, it gets registered in ServiceNow as a false positive, which then allows the pipeline to continue on. Obviously, behind that there is a set of AWS services. Probably the biggest part is leveraging Bedrock for how we set up and orchestrate the agents.
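For a sense of what that ServiceNow hand-off could look like, here is a hedged sketch that records an approved false-positive exemption through ServiceNow's standard Table API so the pipeline gate can check for it. The instance URL, custom table, and field names are hypothetical placeholders, not the actual integration.

```python
# Hedged sketch of recording an approved false-positive exemption in ServiceNow
# via its Table API so the pipeline's hard block can let the release proceed.
# Instance URL, table name, and field names are hypothetical placeholders.
import requests

SERVICENOW_INSTANCE = "https://example.service-now.com"  # placeholder instance
EXEMPTION_TABLE = "u_security_exemption"                 # hypothetical custom table

def register_exemption(finding_id: str, application: str, rationale: str,
                       auth: tuple[str, str]) -> str:
    """Create an exemption record; returns the sys_id the pipeline gate can check."""
    payload = {
        "u_finding_id": finding_id,
        "u_application": application,
        "u_disposition": "false_positive",
        "u_rationale": rationale,
    }
    resp = requests.post(
        f"{SERVICENOW_INSTANCE}/api/now/table/{EXEMPTION_TABLE}",
        json=payload,
        auth=auth,
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["sys_id"]
```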
This is a desensitized screenshot of what the agent would do. I've got this vulnerability. What is that? The agent went out, checked the knowledge base, checked the various sources, and with 95% confidence, this is going to be a false positive, and it gives some of the background on that.
But then I might have another vulnerability. For this one, server-side code, maybe insecure, but the agent did not have enough information. So it's prompting the developer. We need a few more details to make a determination on whether this is a false positive or whether there is a potential vulnerability there. So it gets to be a bit more dynamic.
And then there's also the knowledge of saying, hey, what is SQL injection? And it gives an example of what that is. So that's what we've built out, and there have been some great benefits. Thanks, Mark. So on the benefit side, this is market information which says that change management is 3 times more effective when you have DevSecOps compared to DevOps.
Results, Benefits, and Lessons Learned: Culture Change and the Path Forward
Recovery time is also much faster: it's close to 2,604 times faster to restore from an incident if you have DevSecOps. And this is very important when you have thousands of developers. You don't want people getting burnt out trying to identify false positives and then fixing them. So there is almost a 40% reduction in burnout.
And changes are 7 times less likely to fail if you have DevSecOps, because what you're doing in this case is identifying, addressing, and fixing those vulnerabilities, logic failures, or whatever, much earlier in time rather than doing it at the final stage of the launch.
We found that remediation costs about 50% of what it costs to fix at a later stage, obviously because when you want to go live, you are trying to fix something with tons of items still open, and you end up putting in more labor, more developers, and more staff to fix those items, which increases your cost.

There is a 75% reduction in the time to detect and fix critical and high security vulnerabilities, because, again, security is built into your coding practices up front so that the code does not end up with critical and high vulnerabilities.

And there is a 70% increase in security awareness among the development team, which is very important. There is one line our leaders keep reiterating in most of the town halls: security is everyone's responsibility. It is not just the security team's responsibility. It is the responsibility of everyone, from the person doing the conception to the design, the development, the testing, and so forth.
The outcome of Agentic AI was faster decision making. Where we previously had to determine whether findings were false positives and work out the next steps ourselves, Agentic AI was quite fast.
Shift left meant identifying and correcting vulnerabilities early: we were no longer at the right side of the launch cycle but at the left, where we were identifying and fixing them.
Consistency and accuracy. This is again very critical when you have so much development going on, with about 1,300 applications to manage. It's quite important that you have consistency in fixing. So in this case, when you are fixing a vulnerability, you are fixing it across the board in all applications.

You are not just addressing one application at a time. Workflow management lifts the undifferentiated heavy lifting from humans to agentic AI. In terms of cost effectiveness, if a repetitive job can be done by agentic AI instead of humans, that is always good. There is one other item I want to share: we have always believed that culture eats strategy for breakfast. This is a quote our leader mentioned from a book, and it is a very important one.
Whatever strategies you might have, even the best strategies in the world, if the culture of the organization does not adapt to them or does not change, they are going to fail for sure. So always go back to the assessments: when we do them, we come out with a very clear view of where we are and where we want to be. That is quite important. For the next part, I will hand over to Tony.
Thank you, Naresh. We have had a very fruitful time and great achievements over the last two years. First of all, thank you to the AWS team, especially the Hong Kong team, the Akang team, and the PEF team, and also to our senior management for their support. For the next year, we have many things to do. Because this is not a project steering committee meeting, I will not go over the details. In a nutshell, there are a few items we want to cover in the next year.
First, regarding people, we want more security champions and more advanced skills in our Security Champions program. Second, regarding tools, we want more security scanning throughout the whole development life cycle so we can detect errors even earlier. The last item is about the observability and visibility of the application security posture. We want more insightful dashboards and more actionable insights that can be drawn from them.
Regarding lessons learned, this is a big change management effort as a whole. There are a few items I want to mention again here. First, regarding culture, we know that culture change takes time and people cannot be changed overnight. However, we still need to set very clear guidelines for everyone and very ambitious targets, bite the bullet, and go for it with a can-do spirit.
Second, regarding tools: by default, no tool is perfect without customization or fine tuning. A tool is only one item in your toolbox and will not work right out of the box. In some cases, we also need to work with the tool vendor, share our feedback with them, and ask them to make changes, upgrade their version, and fix our issues. Last but not least, automation. AI also helps us with our automation. We are trying to move from human-in-the-loop automation to AI-in-the-loop automation. This is the key point, and it helps reduce some of the human effort, especially on false positive review.
I think we are almost at the end of our presentation. Thank you for taking the time to hear our story. We are more than happy to share our experience after the session, and we would love to hear your feedback.
; This article is entirely auto-generated using Amazon Bedrock.