🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - PwC & BMO: AI-Powered Finance Transformation - From Reporting to Treasury Insights
In this video, PwC Canada and Bank of Montreal share their two-year AI transformation journey in finance and risk functions. Ram Peddu, Chief Data Analytics Officer at BMO, along with Rakesh Shetty and Abhinav Ravi from PwC, discuss implementing AI in a heavily regulated banking environment. Key topics include building AI governance with fairness, transparency, and accountability; reducing pilot-to-production lead time; ensuring numerical accuracy in finance; and moving from individual use cases to organization-wide transformation using Amazon Bedrock and agentic AI. They emphasize speed to value, reducing three-month projects to days, employee upskilling through the AI for All curriculum, and managing data sensitivity with role-based access controls and synthetic data. The focus shifts from experimentation to ROI-driven implementation.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: A Two-Year AI Transformation Journey at Bank of Montreal
Right, I hope you can hear me. Good afternoon, and thank you for coming to today's session. We're going to be talking about our two-year journey in 20 minutes, so just bear with us as we go through this incredible experience, where we looked at AI as a powerful tool to accelerate the finance transformation journey for one of our clients. But before that, a quick introduction of the three of us here.
I'll start with myself, Rakesh Shetty. I lead our Finance and Transformation practice for PwC Canada, primarily focusing on financial services and optimizing finance technology for the future. Along with me, I have my client and my colleague Ram. Just a quick intro on your side. Hello guys, my name is Ram Peddu. I am the Chief Data Analytics Officer for Risk and Finance at the Bank of Montreal. Hey guys, I'm Abhinav Ravi. I lead the Cloud, Data, and AI practice at PwC for financial services.
So we're going to keep this interactive. I'll play the moderator role for a bit to tease out some of the experiences we had along the journey. My first question is for Ram. We've been at this for two years, so maybe we can pause a little bit and talk about the bank and its AI journey around four key things: Why transformation? Why in corporate functions? What challenges did we go through over the last two years? And what obstacles did we face, not only in implementing the solution all the way from concept to production, but more importantly in energizing our business users from a business adoption and experience standpoint? So maybe you can just talk a little bit more about the bank and your journey today.
So again, I'll be sharing some of the use cases that I've been working on. As Rakesh was mentioning, this is a journey we started about a year ago, looking at finance and risk and what we could do from the AI perspective. Obviously, we are a financial institution in a heavily regulated industry. The first and foremost thing we needed to look at was building AI trust: the AI governance side of it, whether that's fairness, transparency, or accountability, all the things that you've been learning about at this conference from the AI governance perspective. That was the first and foremost thing we needed to make sure we were doing.
And then the biggest challenge, as you can imagine: building pilots is always easy, but once you've done the pilot, how do you take it to production? How do you close the gap in the lead time between pilot and production? What are the challenges, Rakesh, you were asking? It's how you bring trust into the numbers that come out, because this is finance and risk. The numbers are very important for us, down to the decimal level. How do you make sure that the numbers coming out of AI are the right numbers, and what kind of guardrails are you going to put on that, right? So that's where we started our journey. We have some interesting stories to share as we go along. We are also looking at risk use cases from the BMO perspective, and we'll be sharing more of them.
Building Capabilities and Measuring Value: From Use Cases to Enterprise Transformation
Yeah, and largely one of the things we're observing across clients, and even in our learnings from BMO as a whole, is the shift from individual use cases to transforming an entire organization. When you get onto that path, there are a lot of synergies you need to be mindful of between different use cases. So, as Ram was mentioning, you might have a finance use case and a risk use case, but content generation and content summarization are two common things that exist between the two. In terms of how you're architecting your models and how you're leveraging the different services, there could be a lot of common patterns you could build within or between these two different use cases.
So that was one of the major learnings we had as we went through the implementation itself: build more of the capabilities. We all tend to go one use case after another, but can we build capabilities that can be used across the different use cases? For example, we started with Amazon Bedrock as a foundation, but once we build a capability, whether it's RAG or an agentic one, say content extraction, built for one use case, we can definitely apply the same capability to other use cases as well.
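As a rough, hypothetical sketch of this "build a capability once, reuse it across use cases" idea, a small in-process registry could share a content-extraction capability between finance and risk use cases. The registry, function names, and stubbed extractor below are purely illustrative assumptions, not BMO's implementation; in practice the extractor might wrap an Amazon Bedrock model call.

```python
from typing import Callable

# Central registry of reusable AI capabilities, keyed by name.
CAPABILITIES: dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that registers a capability under a shared name."""
    def wrap(fn: Callable[[str], str]):
        CAPABILITIES[name] = fn
        return fn
    return wrap

@register("content_extraction")
def extract_key_figures(document: str) -> str:
    # Stub for illustration: a real implementation might invoke a
    # Bedrock model here instead of taking the first sentence.
    return document.split(".")[0]

# Two different use cases reuse the same registered capability.
finance_summary = CAPABILITIES["content_extraction"]("Q3 revenue rose 4%. Details follow.")
risk_summary = CAPABILITIES["content_extraction"]("Exposure limits unchanged. See appendix.")
print(finance_summary, "|", risk_summary)
```

The design point is simply that the capability is registered once and looked up by name, so adding a second use case costs a lookup rather than a rebuild.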
Just continuing on finance, for some of us who have actually worked in finance, the last time something great happened to finance was Y2K in 1999. Some of us remember racing to beat the clock to implement a general ledger. Here we are, 25 years later, transforming finance.
What we learned is that when you are looking at your ROI and identifying use cases, group them under three pillars: How can I make the process more efficient, whether that's your close process or your variance analysis? How can I be more effective? As Ram called out, accuracy and the degree of precision are key because we are in a regulated environment and we are reporting these numbers to the street, so they have to be precise. And the most important thing we have understood is that AI, as much as we love it, is here to augment what we do.
So it's all about enabling the employee experience and empowering talent for today. When you think about digital upskilling, with finance as an example, it's not about just learning accounting anymore. People need to be better storytellers. How do you encourage thinking outside the box? Some of the soft skills around enabling that employee experience are something we've learned from our perspective.
Now marry that to how you actually measure performance, and here's a question for Ram. We started off with, what, three use cases. It was all focused on productivity, right? Can I reduce the amount of time I take to generate an MD&A report or something related to narrative or commentary? Then we moved on to: can I do transformation now? Can I actually stitch a few processes together? The third one is obviously disruption, but I would love to get your view as a Chief Data and Analytics Officer. How are we transforming, moving from gen AI and of course now agentic AI?
I think the fundamentals are still the same, right? We all start with a business strategy. What is it that we are trying to achieve? For example, we stand here and we are looking at what we want to achieve from the finance perspective, from the risk perspective in 2026. We take that broad umbrella and say, okay, how can AI help from this perspective? When you look at the business case standpoint, we always go after the business value, right? What is the business value that this use case can generate?
Sometimes it is efficiency, but there are a few other things you might also want to consider. One of them is feasibility. Yes, this is a great use case, but given the capabilities we have within the bank or within the industry, can we actually solve it, and how long does it take to solve? That is the feasibility side of it. The second thing you are looking at is how aligned we are as a bank and as a team. Is the team ready to work on it? Is the executive support from business leaders there? That is the business alignment side.
And since we keep coming back to regulation, there is sometimes a risk of doing or not doing something. For a risk use case, for example, there is regulatory pressure on us to solve certain things. It could be policy management, it could be something else. We're looking at that pillar as well. So one pillar is obviously the value you're going to generate. The second is business alignment. The third is feasibility, along with the risk of doing or not doing. Those are the pillars I would look at, and as we go, we are getting better and better at each one of them and taking them forward.
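The pillars Ram lists (business value, alignment, feasibility, and the risk of doing or not doing) could be sketched as a simple scoring matrix for ranking candidate use cases. This is purely illustrative: the use-case names, scores, and equal weighting below are assumptions, not BMO's actual framework.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    business_value: int   # 1-5: value the use case can generate
    alignment: int        # 1-5: team readiness and executive support
    feasibility: int      # 1-5: solvable with current capabilities?
    regulatory_risk: int  # 1-5: risk of NOT doing it (regulatory pressure)

    def score(self) -> float:
        # Equal weights for illustration; a real framework would tune these.
        return (self.business_value + self.alignment
                + self.feasibility + self.regulatory_risk) / 4

candidates = [
    UseCase("MD&A narrative generation", 5, 4, 4, 2),
    UseCase("Policy management", 4, 3, 4, 5),
    UseCase("Variance analysis commentary", 3, 4, 5, 2),
]

# "Top quadrant" first: high business value AND high feasibility.
ranked = sorted(candidates,
                key=lambda u: (u.business_value, u.feasibility),
                reverse=True)
for u in ranked:
    print(f"{u.name}: {u.score():.2f}")
```

The sort key reflects the "top quadrant" idea mentioned later in the session: prioritize by value and feasibility, while the blended score keeps alignment and regulatory risk visible.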
Navigating Speed and Innovation in a Regulated Environment
A question for you, Abhinav, your favorite question. Another thing that we learned, and some of us are learning on a daily basis: you heard something brand new this morning in the keynote. What we have learned is that every week there's something new coming in. For example, a use case I built three months ago using a particular model, whether that's Claude or something else; three months later, there are newer gadgets in the toolbox. Now marry that to a regulated environment: what you see on TV doesn't mean we can use it at work, right? It becomes a balance between AI as a hobby and AI in practice.
Abhinav, you've worked at other banks just like Bank of Montreal. What has your experience been with leveraging some of these new tools that are not yet approved in a regulatory environment, while keeping that balance? I think this is a question that could probably prompt its own session, because it's a challenge that I'm sure a lot of you are also dealing with on a day-to-day basis. There are two lenses that we've started applying. One is the use case life cycle standpoint. The other is your overall SDLC, or software development life cycle, standpoint.
Now, when there are new tools, it's always pretty encouraging: how do we start leveraging some of them? They are probably not GA; some could be used in a lower environment, some could be used all the way up to production. But the real question goes back to the overall use case life cycle: you have a requirements extraction step, a solution design step, development, then deployment, and then monitoring and testing.
So the stage at which you really want to introduce a new tool is what gets complicated within a regulatory environment. Within monitoring and deployment, you could probably leverage a tool, whereas it's a lot less safe when you start using it to generate requirements, and even riskier when you start using it to select models dynamically on the fly. So there is critical thinking we need to layer in from a use case standpoint as to where in the development life cycle you really want to leverage some of these new tools and offerings that are coming about.
I want to touch on one thing. I was going to come to you to talk about speed to value, and you mentioned new tools. Consider the average life cycle of a use case; I'm just using traditional agile, where you design, build, test, and deploy, but the key is how you actually capture requirements. Think about some of us here as finance folks: we don't think about what good looks like because we're so consumed in what we do day in and day out. Ram has a great example where he took something that would have taken three months and did it in eight hours on policy management. It's just one example, Ram; I would love to get your experience there.
It's more of a challenge, right? We had a session together with both internal and external folks. We got into a room and said that traditionally it takes three months to bring a product as an MVP and put it in front of the users. How do we challenge ourselves to see what we can do? Because, as Rakesh mentioned, sometimes what happens is we build a use case, take it to the users, and the users, looking at it for the first time, say, I wish there were three other things in there. Or, to take an example: six months ago we started a project and built it, but at the time we started, maybe it was not agentic, and now suddenly the user is saying, hey, I'm seeing other things in the market that are more agentic and doing much better things. How do we change that?
So the big question for all of us is how we think about speed. Speed meaning that instead of spending three months to put something in front of the user, can we challenge ourselves to put something out in days? The things that were taking months, can we convert them into weeks? The things that took weeks, can we convert them into days? That was the challenge I posed to the team. You go to these hackathons; I'm always impressed with the people who do hackathons. It doesn't mean what you build in a hackathon can go into production, but the idea is you put yourself in a constrained environment, with constrained time and the tools you have. Can you build something that you can actually take to the user and showcase? That is going to be more and more important for us as things change every day and new tools come out.
One thing I want to touch on, exactly on the tool side: the users are coming and saying, hey, I know that in my daily life I'm probably using this. Why is it not in the enterprise? That is something we need to answer. From our standpoint, inside BMO we built AI for All as a curriculum, and we also built AI for Finance and Risk as a curriculum, with curated course content people can go through. They're not just at the periphery, knowing what AI is; if they want to get a little deeper, that is what they spend time on. It could be the foundations of AI. If you want to go advanced, you can do that. If you want to learn about agentic AI, you can do that. But you need to bring people along, right?
We are all here and we can build tomorrow. We can go into our own institutions and build the applications, but we need to bring people along. The way to bring people along is what we obviously call change management, but part of that is education: getting them beyond just knowing what AI is to understanding, hey, tomorrow you might be using these tools in your day-to-day job. For example, we rolled out Copilot for the rest of the bank. Are you using Copilot or not? When you're working, are you still doing requirements manually, or are you able to use this?
The other one is, can you bring your partners inside the bank along? Because of the regulatory environment, are we bringing our risk folks, our model governance folks, our legal folks, and our IR folks in very early on? What kind of people we bring together to start these use cases is going to determine success. It's not so much that we go build the use cases, bring them back, and hope, hey, we have built a great product, it will be successful. We need to walk through this mechanism to make sure we have the right people in the room from day one for this to be successful. And looking back over the last six months, if you look at a data migration or a cloud migration effort, it's not going to be executed the same way it was for the past couple of years.
The whole mapping aspect, mapping your source and target, running your test scenarios or test scripts, or even generating your target code: all of this could potentially be automated end to end. So you don't need the same number of data engineers or testers across the whole lifecycle. It's all about getting it done faster, cheaper, and in the most efficient and accurate way.
Data Governance, Synthetic Data, and the Road Ahead for Finance AI
One other point, and this might be a question for both of you, and then I'll go back to finance. At the end of the day, when you think about data sensitivity, especially in a public cloud environment, with banks we always talk about PII data, the employee or customer data, which is very sensitive, critical, and hence confidential. But in finance, if data is unpublished, it is considered registered confidential. That means you can't just use a production dataset in your lower-tier environments; it has to be controlled. So what has your experience been? I know we went through a whole journey around encryption and putting guardrails around it. So a governance question for you, Ram, and for you, Abhinav, more about leveraging AWS and how we're able to do it. Let me start with you on governance.
Again, this is not something new. Today, whether you are using AI or not inside the bank, we have role-based access controls. But because of agentic systems, you might have to enhance some of those guardrails and controls. For example, in the keynote session this morning, we were talking about creating a policy to say an agent can do this, an agent cannot do that, an agent can access this, an agent cannot access that. Say there is an approval for a journal entry in finance. If it's below a certain threshold, maybe a hundred dollars, we will allow the agent to make the transaction, but above that threshold, we've got to create those policies and those frameworks. We do have AI governance, as I mentioned; the big points of that are fairness, transparency, and accountability. You might have to build a few more controls when you're thinking at the PII level.
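The approval-threshold policy Ram describes for agentic journal entries could be sketched roughly as below. The function name, dollar limit, and routing strings are hypothetical, chosen only to illustrate the pattern of an agent acting autonomously under a threshold and escalating above it.

```python
# Hypothetical guardrail: an agent may post a journal entry autonomously
# only below a dollar limit; anything above routes to a human approver.
from decimal import Decimal

AGENT_AUTO_APPROVE_LIMIT = Decimal("100.00")  # illustrative threshold

def route_journal_entry(amount: Decimal, agent_id: str) -> str:
    """Decide whether the agent may post the entry or must escalate."""
    if amount < 0:
        raise ValueError("journal entry amount must be non-negative")
    if amount <= AGENT_AUTO_APPROVE_LIMIT:
        return f"auto-approved by {agent_id}"
    return "escalated to human approver"

print(route_journal_entry(Decimal("10.00"), "finance-agent-1"))
print(route_journal_entry(Decimal("2500.00"), "finance-agent-1"))
```

Using `Decimal` rather than floats matters in this context, since the session stresses that finance numbers must be exact down to the decimal level.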
But one other thing I would say is that data is always a challenge for building AI. When you are in the MVP stage, though, you can build very high-quality synthetic data to start working with. Something for us to think about: what we can do with synthetic data today is much different from where we were three or four years ago.
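As a minimal sketch of what MVP-stage synthetic data might look like here, the snippet below fabricates journal-entry records so that registered-confidential production data never has to leave its tier. The field names, account list, and distributions are all invented for illustration, not a real schema.

```python
# Illustrative synthetic journal entries for MVP development, so no
# production finance data is needed in lower-tier environments.
import random
from datetime import date, timedelta

random.seed(42)  # reproducible for demos and tests

ACCOUNTS = ["1000-Cash", "2000-AP", "4000-Revenue", "5000-Expenses"]

def synthetic_entries(n: int, start: date = date(2025, 1, 1)) -> list[dict]:
    entries = []
    for i in range(n):
        entries.append({
            "entry_id": f"JE-{i:05d}",
            "posting_date": (start + timedelta(days=random.randint(0, 89))).isoformat(),
            "debit_account": random.choice(ACCOUNTS),
            "credit_account": random.choice(ACCOUNTS),
            "amount": round(random.uniform(10.0, 10_000.0), 2),
        })
    return entries

sample = synthetic_entries(5)
for e in sample:
    print(e)
```

In practice, as the speakers note, the data would also need to be made statistically representative of real entries before it is useful for model training; this sketch only shows the shape of the idea.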
Exactly, and I think we have Anthropic's booth close by. At BMO itself, as you may have seen, we started building a number of these agentic systems, where it's not only about creating synthetic data, but about making it as representative as possible for training your models. And taking it a step further, you can have the agents generate the parameters to fine-tune the ML model you would actually build and run on those training datasets, to refine it further. So it's taking synthetic data, which has existed for quite some time, and pushing it to the next step, all the way into ML enablement. That's our journey to date.
I want to just finish off. We've got one minute left for maybe a last question to Ram to kind of wrap it up. What's next, Ram? What does the next five years look like for finance? Hopefully we have a better story to say every year we come here. We'd love to get your view.
Yeah, so look, last year was all about experimenting, but this year is going to be all about return on investment. It all starts again with a business strategy. What is it that the business wants to achieve in the 2026 timeframe, and how do we help from the AI perspective? We have really high-quality use cases on both the risk and finance side, which we are putting together with, as I said, a clear view of the business value and the risk of doing or not doing them. You need to bring that framework together: find your top quadrant, the use cases that are high in business value and high in feasibility, and go after them. But things are going to change, and I'm really excited about what I'm seeing at the conference. And it would be remiss not to say thank you to PwC for working with us and accelerating our work, and to AWS for providing the tools and the platform and helping us accelerate our work.
Awesome. I want to say one last thing before we go. We all grew up saying people, process, technology. We're going to walk away today thinking about people, process, and performance. Technology is like electricity. It has to be here. It's how we use it for the best. Thank you so much for joining the session. If you have any questions, we're going to be here. Thank you.
This article is entirely auto-generated using Amazon Bedrock.


