🦄 Making great presentations more accessible.
This project enhances multilingual accessibility and discoverability while preserving the original content. Detailed transcriptions and keyframes capture the nuances and technical insights that convey the full value of each session.
Note: A comprehensive list of re:Invent 2025 transcribed articles is available in this Spreadsheet!
Overview
📖 AWS re:Invent 2025 - Marketo's Digital Transformation (ISV201)
In this video, Adobe shares their journey migrating Marketo, their B2B marketing automation platform serving 5,000+ customers across 5 data centers, to AWS. David explains how they started in 2022 with technical feasibility, moved backend services in 2023, migrated their first customer (AWS itself, with 40 terabytes of data) in 2024, and closed their first data center in 2025. Key migrations included replacing five Hadoop clusters with Spark on EKS, consolidating 13 MongoDB instances to Atlas, and moving 170 MySQL 5.7 clusters (2 petabytes) to Aurora. They migrated 5,000 Solr VMs and adopted ElastiCache for Redis. The hybrid approach paired on-premise data centers with AWS regions, eventually achieving full AWS-native operations. Critical success factors included AWS MAP funding, Professional Services support, and managed services enabling dynamic scaling impossible with fixed data center resources.
This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Why Adobe Marketo Chose AWS: From Five Data Centers to Cloud Migration
Hello, welcome, and thank you. Just by a quick show of hands, how many of you here have used any Adobe product, like Acrobat for PDFs, Photoshop, or Marketo? That's basically all of us. We're a very close, strongly knit family. Thank you all for joining. Another question for you all: how many of you are working on a large, complex migration, either for your own company, for your customer, or for a partner? Awesome. And how many of you are working on an ambitious agentic AI project, whether you're just starting or already running a large application? Perfect.
So we're all in the right place. In the next 20 minutes, Gaurav and I, along with David, will share two core messages with you. One: if you are running a complex migration, what the best practices and learnings are, and how you should approach a large, complex migration. And the second: how migrating to AWS is not just relevant for AI workloads, but almost quintessential for succeeding with them. And if you have any questions, we'd be happy to stick around and answer your questions on both these points.
But before we get started, just a quick introduction of what Marketo really is. So Marketo is Adobe's B2B marketing automation solution. It is very likely that all the re:Invent emails that you received, or if your marketing team had sent out emails to your customers and partners, it is possible that it was done through Marketo. There are about 5,000 plus customers using Marketo across 5 data centers and 3 continents.
But this is what really makes this whole migration very special. When we look at Marketo, a lot of us may think of it as a product, but Marketo was itself a company, acquired by Adobe in 2018 for $4.75 billion. And before that acquisition, Marketo had in turn acquired 7 additional companies, so this was not just about migrating one application to AWS. It was about migrating a collection of different tech stacks from different companies over to AWS, and challenges like these cannot be solved in a week or a month or a year. It takes a whole journey to migrate a complex application like Marketo to AWS.
And to share that whole journey, and perhaps also help you understand how to migrate faster and safely, please welcome David on stage. So, I'm pretty surprised to be here. It's not that they came up to me just a few days ago and said, David, you need to present, but when we were acquired, there were actually 4 data centers, not 5. And so the first thing that Adobe did was build a 5th on-premise data center. So when Adobe acquired us, there was no thought of going to the public cloud. We were a private cloud-based company, and that was that.
Even to amplify that point, when Adobe bought us, it ended a migration into a different public cloud platform. So there wasn't a lot of appetite on our side to do this. So what changed? Well, we'll talk about that in a little bit, but you can see here what our journey looks like. So in 2022, we did some technical feasibility stuff. 2023, back-end services started to move a little bit. 2024, we actually moved a customer to solely running on AWS, and in 2025, we have closed one of our data centers and we are set up to close the rest over the next 2 to 3 years.
So, what changed? Why did we do this? Well, I'm going to start with number 3 there, legacy infrastructure challenges. When we built that 5th data center, it was really hard. We hadn't built a data center in a long time. We had bespoke versions of certain things. The biggest problem being Hadoop. Bringing up the Hadoop cluster was a nightmare. We'll get back to that in a little bit.
Other reasons, of course, are global scale and volume. A data center is great for a known workload, but as the workload changes, as customers grow in scale, you have fixed resources in the data center, and you're pretty much stuck with that. Whereas in AWS you can tune things, you can make things better. So that's part of the reason. Another reason is zero downtime or very, very close to zero downtime. If AWS is down, that's global news, right?
If Marketo's down, it's like, too bad, but it doesn't make the news. And so the data center that we closed this year actually had a catastrophic failure back in 2023, so customers in that region were never really pleased. They were happy to hear that they were moving to AWS. And the fourth reason is really future looking. When you're on AWS, you are exposed to the whole agentic environment, and so it gives you so many possibilities.
The Migration Journey: From Hadoop and Mongo to Full AWS Deployment
So let's get into some details here. We did not move everything at once. In fact, when we moved the first things, nobody even realized we were moving into the public cloud. So let's go back to my friend Hadoop. Time had passed. We now had five clunky Hadoop clusters, and security told us we had to upgrade them, and we could not. We talked to our vendor. I see them over there. They couldn't do it. They told us, we'll build you a new one. And the last thing we wanted to do was invest in more hardware to build another bespoke thing.
So we really broke down, well, what does Hadoop do for us? And the biggest use case for us was Spark streaming applications. And so we did one of the proof of concepts back in 2022, which was running Spark streaming on EKS, specifically the Adobe bespoke version of EKS, which is known as Ethos. Once we proved that that could work, we moved 20 different flavors of Spark streaming jobs out of the data centers into AWS. And nobody really realized that, hey, 20% of our workload is now quietly in the cloud.
So the next one was Mongo. We had 13 different flavors of Mongo for different applications and a team of two to manage them. That wasn't scalable, but luckily we have good friends with Mongo. Hi Lauren, and they convinced us that Atlas was the way to go. So over the past few years we have migrated all of our different individual Mongos into Mongo Atlas, again running on AWS. So now we're at about 40% of our applications running on the cloud, and nobody really noticed anything. If anything, they saw benefits, but they didn't know where they were coming from.
When we fast forward to 2024, actually the end of 2023, one of our biggest customers, whose conference we might be at right now, wanted us to run Marketo on their public cloud platform. And I said, huh, but then I thought about it and realized that we were already running in a hybrid bridge environment, like it says here, so it was just taking the next step. And so we had to figure out which applications had to be fully running in AWS in order to support the main application. So we did that work and we spun up those applications in AWS and in a partner data center. And then we built a pod and migrated the customer, and that was no mean feat, let me tell you.
AWS was by some measurements our biggest customer. The slide says 40 terabytes of data, but when we started the process, they were closer to 90. So to even migrate them we had to prune their database, and that took some convincing. But in the end it made the application work quicker, so we got them down to 40 terabytes. And then how do you get 40 terabytes from one location to the other and still not have any downtime? That took a lot of tooling, a lot of, okay, let's take it from Pure Storage here into a Pure Storage appliance on AWS, rehydrate it into RDS, and start syncing the data from the data center.
So after a couple of months, and yes, it took a couple of months for the data to catch up, because that's how much activity AWS generates on a daily basis, once we were caught up, we did the cutover. We shut off their on-premise instance, did a bunch of metadata transformations, and poof, AWS was up and running on AWS. We jumped around for joy, wiped our brows, and said, what's next? But we didn't rush into what was next. We wanted to let that sit for a while.
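The months-long catch-up David describes is essentially a race between how fast the sync pipeline can work through the backlog and how fast new activity accumulates at the source. A back-of-the-envelope sketch (the function and all numbers are illustrative assumptions, not figures from the talk):

```python
def days_to_catch_up(backlog_gb: float, daily_apply_gb: float, daily_new_gb: float) -> float:
    """Estimate how many days a replica needs to work through a sync backlog.

    backlog_gb:     data copied over but not yet applied/synced
    daily_apply_gb: how much the sync pipeline can apply per day
    daily_new_gb:   how much new activity the source generates per day
    """
    net_progress = daily_apply_gb - daily_new_gb
    if net_progress <= 0:
        # If new activity outpaces the sync, the replica never converges.
        raise ValueError("apply rate must exceed the rate of new activity")
    return backlog_gb / net_progress


# Hypothetical numbers: 1 TB backlog, applying 100 GB/day against 50 GB/day of new activity.
print(days_to_catch_up(1000, 100, 50))  # 20.0 days
```

This is why a very active tenant can take months to converge even over a fast link: what matters is the net progress in the denominator, not the raw transfer rate.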
So AWS ran on AWS for months happily. Performance was better. Campaigns ran faster. I'm thinking, okay, how do we do this at scale? So we went back to our problematic data center because, you know, real world things come into play here, and the real world of data centers is they're a big investment. And so our APAC region was up for renewal in February of 2026.
You'll note it's not February of 2026 yet, but when you're migrating out of something, you need to plan well in advance. So we started working on this at the end of 2024, figuring out what it would take to have a fully AWS-only data center. In these hybrid regions, some of the infrastructure like the load balancers are running on-premises while the database in the pod is running on AWS, but here we wouldn't have a data center to prop us up. So we had to figure out all the other components and how they would move, and we did that over time.
Once we got everything running in the APAC region of AWS, then we put some test subscriptions there. Everything seemed to be okay. We migrated a few test customers, and that was fine. Then we needed to migrate at scale. Now when we migrated to AWS initially, it was a single instance, and we are now doing pods. Marketo's core unit is a pod, which includes front end, back end, database cluster, and about 100 to 200 customers are on a pod. So rather than moving a single one at a time, we would move them en masse. The data movement stays the same, but some of the metadata transformations are a little bit different. And then we get to the future.
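The pod-by-pod batching described above can be sketched as a simple chunking step (the function name and the 150-customer default are my own illustrative choices; real pod assignment is of course far richer than this):

```python
def plan_pod_moves(customer_ids: list, pod_size: int = 150) -> list:
    """Group customers into pod-sized batches so they can migrate en masse."""
    return [customer_ids[i:i + pod_size] for i in range(0, len(customer_ids), pod_size)]


# 450 hypothetical customers yield 3 full pods of 150.
batches = plan_pod_moves(list(range(450)))
print(len(batches))  # 3
```

Migrating a whole batch at a time keeps the data-movement mechanics the same as in the single-instance case; only the per-pod metadata transformations differ, as noted above.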
Leveraging AWS Managed Services and Getting Started with Your Migration
So the future is getting rid of all of the data centers, being fully AWS native, which will give us all of the amazing things that AWS brings to the table. This shows you kind of how we paired our data centers with AWS Regions, which was essential to our hybrid approach. If you look in Australia, that red dot should be gone, will be gone. So here we are. What are some of the things that changed? Well, how did we take advantage of managed services?
Well, we have a bunch of MySQL 5.7 clusters, about 170 of them. Put together, that's about 2 petabytes of data. We're moving that into Aurora over time, and like I said earlier, we've gotten much better performance from Aurora as is. But the beauty of it is, like re:Invent happening, we scaled up the database so that AWS could market to all of us and send out all the emails that make this event happen. When the event is over, we'll scale it back down. You cannot do that on-premises, so that in and of itself is amazing.
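That scale-up-for-the-event, scale-back-down pattern is straightforward to automate once the database is a managed service. A minimal decision sketch (the send-volume thresholds and instance class names are illustrative assumptions, not Adobe's actual sizing; in practice the chosen class would be applied via the RDS `ModifyDBInstance` API, for example through boto3):

```python
def choose_instance_class(expected_sends_per_hour: int) -> str:
    """Pick an Aurora instance class for an expected email send volume.

    Thresholds and class names are illustrative only.
    """
    tiers = [
        (1_000_000, "db.r6g.4xlarge"),  # big event, e.g. re:Invent sends
        (100_000, "db.r6g.xlarge"),
        (0, "db.r6g.large"),            # steady-state baseline
    ]
    for threshold, instance_class in tiers:
        if expected_sends_per_hour >= threshold:
            return instance_class


print(choose_instance_class(5_000_000))  # db.r6g.4xlarge during the event
print(choose_instance_class(10_000))     # db.r6g.large afterwards
```

The point is less the specific thresholds than the capability itself: with fixed data-center hardware, there is no equivalent knob to turn back down once the event is over.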
We talked about Hadoop, so Spark on EKS, I don't think we need to talk about that too much more. Solr 8 is our biggest footprint in the data centers. We have 5,000 physical VMs running Solr. Solr 9 we can run on EKS, runs a little bit differently, but again it runs better, it runs faster, so that's a big win for our customers because we use Solr to speed up search, not too surprisingly. Redis, we've been great fans of Redis, but not having to manage Redis, letting ElastiCache do the work for me, has taken a real load off of my team.
And then finally, we talked about Mongo, but they deserve to be mentioned more than once, because of the amount of time and energy they have spared my team, and the expertise they provided in doing the migrations almost for us, since they provided the tooling. An invaluable partner. All right, speaking of another invaluable partner, AWS: we could not have done it without their help. The first line there is MAP funding, the Migration Acceleration Program. I will not speak on that, but feel free to ask the AWS people how that works.
Again, proven migration methodology. They've migrated people before. Now granted, they hadn't done what we did, but they brought a lot of knowledge to the table. They brought the ability to say, oh, your bottleneck's here, let's unbottleneck you, and so amazing partners there. A lot of that work was done in tandem with AWS Professional Services. They did a lot of the heavy lifting on automation of migrating from Solr 8 to Solr 9. They've helped us make software load balancers to replace F5s, among many other things. So thank you, AWS Professional Services.
And an advantage, one that doesn't really exist in my world, but when I talk to the sales people, the ability to co-sell with AWS is amazing, right? People want to do that, and so being able to say yes, your pod will be on AWS has made huge changes in the way we upsell.
Oh, what does this slide tell you? This slide tells you that my stuff is at the top: the application layer. The other two layers show what AWS is bringing to the table. The foundation layer is on the bottom because it's the foundation, but really it belongs closer to the app from a functional point of view. And the intelligence layer is all the add-ons that AWS brings to the table: the ability to use Bedrock for AI, EKS, all those things.
So I believe you want to know how to get started. My friend Gaurav will come back up here and tell you how to get started. Well, thank you, David. You know, it typically takes about 6.5 months for a successful mission from Earth to Mars, but if you start wrong, even if you do everything else right, it can take up to three years longer. So the message is: for all your mission-critical things and your AI ambitions, start today and start well.
And to make it easy for you to start well, here are some of the resources that David was able to use, and we have these available for you all. We have a few migration immersion days. Aurora, as he mentioned, was very critical, so we have a database migration workshop, and also some material on the Migration Acceleration Program, as he mentioned, to get you started fast. We'll be here if you have any questions for us, but thank you all for joining us today. Thank you.
This article is entirely auto-generated using Amazon Bedrock.