🦄 Making great presentations more accessible.
This project aims to enhance multilingual accessibility and discoverability while maintaining the integrity of the original content. Detailed transcriptions and keyframes preserve the nuances and technical insights that make each session compelling.
Overview
📖 AWS re:Invent 2025 - Maximizing uptime: Itaú’s mission-critical mainframe migration to AWS (IND3304)
In this video, Eduardo, Wilson, and Edinei from Itaú Unibanco share their journey modernizing a 50-year-old mainframe checking account platform to AWS cloud, targeting four nines of availability. They detail implementing a cell-based architecture using the AWS CAMA Zero (C0) framework for fault isolation, choosing DynamoDB for ACID transactions achieving 1,200 TPS with 79 ms latency, and solving hot partition issues through smart schema design. The presentation covers their router layer with full table mapping, active-standby replica setup with quorum-based replication, and dark launch strategy for safe migration. Key insights include how IAM policy optimization improved DynamoDB performance and using the TransactWriteItems API with single-threaded processing to maintain account balance accuracy while processing 1.4 billion monthly Pix transactions.
; This article is entirely auto-generated while preserving the original presentation content as much as possible. Please note that there may be typos or inaccuracies.
Main Part
Introduction: Itaú Unibanco's Century-Old Banking Legacy Meets Digital Transformation
Hello, everyone. Welcome. Let's start with a quick question. Raise your hands if you are on the path to migrating, or planning to migrate, a critical system that needs high availability even during the migration. Yeah, I think most of you are. And out of curiosity, who is rewriting a critical system, say from mainframe COBOL to AWS? Oh, awesome. You're brave. I hope we can help you today.
My name is Eduardo and I'm here with my friends Wilson and Edinei from Itaú Unibanco. We are going to share their modernization journey, focusing on the technical details that you can actually use in your own systems. In this session, we'll talk about what Itaú is and the challenges they are facing, their cell-based architecture for high availability, how DynamoDB is supporting their checking accounts platform, and, at the end, the key lessons we learned along the way. Now I will hand over to Wilson to tell you more about Itaú.
Wilson: Thank you, Eduardo. Hi everyone, I'm very happy to be here today. It's great to be here to talk about how we are modernizing our mission-critical system. Before we dive deeper, let me introduce Itaú. Itaú Unibanco is one of the largest banks in Latin America, and last year we celebrated 100 years of history. We have been able to deliver strong results consistently over the last few decades.
More than 70 million customers access a full suite of financial services, from credit cards to cash management, from retail to wholesale. But even after a century of growth and transformation, one thing has always been true: change never stops. Ten years ago we realized that digital was changing the way everyone was doing business in Brazil, and this was especially prevalent in the financial services industry.
The Pix Revolution: How Instant Payments Changed Brazilian Banking Forever
For those of you who are seeing this change in behavior, please raise your hands. Great, we are also witnessing this transformation in how customers consume products and what their needs are. Nowadays they are more connected, more online, and they need information as fast as possible. Let's take a look at a few examples of how these changes in behavior show up in our daily lives in Brazil.
The population is using WhatsApp to make payments and transfers, paying using QR codes at points of sale, managing their investments online in real time, and we have Pix. Pix is an instant payment service launched by the Brazilian Central Bank in the last quarter of 2020. It was designed to make payments and transfers across banks in under 10 seconds. It has become the most widely used service in Brazil.
Around 90% of Brazilians use it every day. It's simple, fast, and fully digital. The platform is open, interoperable, and API-driven, available 24/7, including weekends and holidays. Last month, 7 billion transactions were processed through Pix in total. 20% went through Itaú. In other words, 1 in every 5 Pix transactions runs through our checking account platform, in addition to many other transactions from services that I mentioned earlier.
Our process approves or rejects payments based on available funds, overdraft protection, and account restrictions, and updates the balances in real time according to the result. All this change in behavior demands a response from society, from technology, and especially from us, the banks. As people adopted simpler, faster ways to move money, we had to rethink how our core system could keep up. That was the turning point.
Back in 2018, we decided to embrace this transformation. We made a critical decision to scale up our modernization, and the outcomes we've seen since then have been truly amazing. Last year, right here at re:Invent, our CIO Ricardo Guerra announced an ambitious commitment to migrate 100% of our platforms to the cloud by 2028.
Now you might be wondering what I really mean by modernization. Well, in our case it's about evolving our checking account platform, a core banking system that serves millions of customers daily.
This platform has been running on the mainframe for more than 50 years, and now we are moving it to the cloud. This new platform should be available whenever a customer needs it. That's why we are targeting four nines of availability, meaning less than five minutes of downtime per month. But most importantly, the entire core of this evolution is to make sure our customers' checking account balances remain safe, accurate, and consistent. That's our non-negotiable requirement.
Event-Driven Architecture: Building the Foundation for Real-Time Transaction Processing
To meet those requirements, we had to completely rethink how our systems are designed and connected. That's what led us to the architecture we'll see next. Here is a high-level view of our checking account platform, the one we've been rearchitecting since 2019 as part of our journey. It's an event-driven architecture built on a command-and-response pattern, and we use Kafka as the backbone to connect our microservices, ensuring smooth, reliable communication in real time.
Now I'm going to show you a basic flow for authorizing a single transaction. First, the account service implements idempotency, validates transactions, and applies key business rules, ensuring that every operation is consistent and reliable. Then the authorizer processes each transaction and synchronizes with our card ledgers, so data remains up to date in both platforms. And finally, the dispatcher sends the results back to the requesters and pushes the events to other systems that need to be informed about these transactions, such as the account statement platform.
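To make that command-and-response flow a bit more concrete, here is a minimal, hypothetical Python sketch. The message fields, the in-memory idempotency store, and the helper functions are illustrative assumptions, not Itaú's actual implementation.

```python
# Hypothetical sketch of the command-and-response authorization flow described above.
# Field names, the in-memory idempotency store, and the helpers are assumptions.
import uuid

processed = {}  # transaction_id -> previously returned response (idempotency)

def handle_command(command: dict) -> dict:
    """Account service step: idempotency check, validation, and key business rules."""
    txn_id = command["transaction_id"]
    if txn_id in processed:                  # duplicate request: return the same result
        return processed[txn_id]
    if command["amount"] <= 0:               # simplistic validation rule for illustration
        response = {"transaction_id": txn_id, "status": "REJECTED"}
    else:
        response = authorize(command)        # authorizer step
    processed[txn_id] = response
    dispatch(response)                       # dispatcher step: notify requester and other systems
    return response

def authorize(command: dict) -> dict:
    # placeholder for funds, overdraft, and restriction checks plus the balance update
    return {"transaction_id": command["transaction_id"], "status": "APPROVED"}

def dispatch(response: dict) -> None:
    print(f"publishing response to the Kafka response topic: {response}")

handle_command({"transaction_id": str(uuid.uuid4()), "account_id": "123", "amount": 250.0})
```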
This current platform has been running at scale for years, processing millions of transactions every day, but our journey doesn't stop here. We continue to evolve, pushing for greater resilience, availability, and performance. The next few slides will take you through how we are making this happen, how AWS has been helping us, and how Amazon DynamoDB has become the key source of this transformation. I'll let Edinei take over from here.
Cell-Based Architecture (C0): Achieving High Availability Through Failure Isolation
Sure, thank you, Wilson. I'm glad to be here sharing with you our modernization journey for one of the most critical systems of a bank. Given that failures are an inherent part of any complex system, we often face situations where something goes wrong. Here you can see our CIO Ricardo Guerra and Amazon's CTO Werner Vogels. And as Werner Vogels says, everything fails all the time, and probably there's something failing right now somewhere.
When we face a failure, it can impact either all our customers or just a portion of them, which reduces the impact. So I have a question for you. Would you prefer that all your customers be affected by a failure, or just a portion of them? Raise your hand if you would prefer that only a portion of our customers be affected by a failure rather than all of them at once. That's good.
So that's why we are planning our new architecture based on CAMA Zero, or just C0. C0 is AWS's framework implementation for critical systems. It was presented at re:Invent 2022, and you can watch that session using this QR code. The cell-based architecture helps us achieve higher availability by isolating failures, giving deployment safety, and helping with scalability. Each cell is designed to be a fully autonomous unit without any state shared between them. It also has a bounded size and capacity.
This approach not only enhances availability and fault isolation but also simplifies scaling by adding more cells rather than increasing the size of existing ones. This helps avoid unpredictable performance issues and side effects that adding more resources into an existing cell can introduce into our platform. And isolating customers into cells requires a new layer in our architecture, the router layer.
The router layer is a thin layer designed to direct our customers' transactions to the right cell. It allows customers' data to live in only one specific cell. That means that customers have affinity to the cell to process their transactions. In our case, customers' data must not be replicated across the cells in any situation. But if you have a case with more static data, you could replicate the data across your cells and use shuffle sharding. Amazon Route 53 is an example of this approach.
But how can we direct our customers' transactions consistently to the same cell? Here we are going to show four algorithms that can address this issue and why choosing the right one is a very important decision for your cell-based architecture. Full table mapping is the simplest and most flexible solution: you map each of your customers to a specific cell, but you should watch how much storage it takes, since you want to avoid heavy operations in the router layer to keep it as fast as possible.
We can also do prefix range-based mapping, where we map each prefix range to a specific cell, but this can lead you to a hot cell. Naive modulo uses modulo arithmetic to map your customers to a specific cell. This is simple to implement, since we only need the number of cells, but it's harder to scale, because when you add a new cell you must rebalance your customer data across the new set of cells.
We can also use consistent hashing, where we still use modulo arithmetic, but instead of the number of cells we use a large number of buckets, and then we map each bucket to a specific cell. This strategy helps reduce your migration scope, because you choose when to migrate each bucket to a new cell. You can also use an override table to map a few specific customers to a specific cell as your strategy requires.
Moving customers between cells is an expensive and data-heavy process that should be avoided as much as possible, and the partitioning algorithm must be consistent, because your customers' transactions need to be processed where their data lives. Full table mapping was the best fit for us, since we need more control over customer placement, and when we add a new cell we either migrate customers one by one as needed or simply let new customers be mapped to the new cell.
The router layer also follows the cell-based architecture. Each router instance works completely independently of the others. The router's job is simple: it just takes the message and routes it to the right cell. You should try to keep all the data needed by your partitioning algorithm in memory, so the decisions are made quickly. Each router instance can talk to every cell, giving us both speed and resilience. In our case, all the router instances are part of the same Kafka consumer group, so each one takes a message from the Kafka command topic and sends it to the cell over a gRPC connection.
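As a rough illustration of these routing options, here is a small Python sketch. The cell names, customer IDs, and table contents are hypothetical; only the general ideas (naive modulo, bucket-based consistent hashing, and full table mapping with an override table and a default cell for new customers) follow the description above.

```python
# Illustrative sketch of the partitioning options discussed above.
# All cell names, mappings, and IDs are hypothetical examples.
import hashlib

CELLS = ["cell-1", "cell-2", "cell-3"]
NUM_BUCKETS = 1024                       # consistent-hashing style: many buckets, few cells
bucket_to_cell = {b: CELLS[b % len(CELLS)] for b in range(NUM_BUCKETS)}

full_table = {"cust-001": "cell-1", "cust-002": "cell-3"}   # full table mapping
override = {"cust-vip-9": "cell-2"}                         # pin specific customers to a cell

def stable_hash(customer_id: str) -> int:
    return int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)

def naive_modulo(customer_id: str) -> str:
    # simple, but adding a cell forces a broad rebalance of customer data
    return CELLS[stable_hash(customer_id) % len(CELLS)]

def bucket_mapping(customer_id: str) -> str:
    # buckets can be migrated to a new cell one at a time, reducing migration scope
    return bucket_to_cell[stable_hash(customer_id) % NUM_BUCKETS]

def route(customer_id: str) -> str:
    """Full table mapping with override; unmapped (new) customers go to the newest cell."""
    if customer_id in override:
        return override[customer_id]
    return full_table.get(customer_id, CELLS[-1])

print(route("cust-001"), route("brand-new-customer"), bucket_mapping("cust-001"))
```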
The cell is designed to be a fully autonomous unit. It owns its data, it processes its own transactions, and it can work completely independently of the other cells. But independence alone is not enough. We still need resilience, the ability to keep serving traffic even when something goes wrong with that cell. That's why the cell has replicas. The cell replica is an active copy that mirrors the state of the primary cell, and together they ensure higher availability and fault tolerance across the platform.
Your cell can have as many replicas as needed, and this number has a direct impact on your system's resilience. Also, to improve your system's availability, it's the best practice to place each of your replicas in a different availability zone.
Active-Standby Replicas and Data Replication: Ensuring Strong Consistency at Scale
And these replicas can work in different setups. With the active-active setup, all your cell replicas are active at the same time. The router layer works as a load balancer, distributing the requests across your cell replicas. This setup fits better when you can rely on eventual consistency. And remember that all data changes must be replicated across your replicas so they stay consistent over time.
Now, in active-standby, only one replica is active at a time. And this fits better when you need strong consistency. This was our choice for the cell replica setup because our customers' account balance needs to be strongly consistent.
And how can we deal with the cell recovery? In the active-active setup, if one replica fails, the router will rebalance the traffic across the remaining replicas. And if another one fails, it will happen again and your remaining replica will have to deal with the whole traffic. It's important to keep that in mind when you are defining the right size for your cell replica.
Now, in active-standby, if your cell fails, you need to ensure that the data replication across your remaining replicas is up to date. When it's done, the router shifts the traffic from replica A to replica B, and then replica B becomes the active replica. In our case, with three cell replicas, we can lose one and still be available to our customers. But if we lose two or more replicas, we can't ensure the account balance durability, so that cell becomes unavailable. It's important to remember, though, that the other cells can keep running, and only a portion of our customers will be affected by that failure.
And how does the data replication work? The active replica receives all the traffic from the router layer, and the standby replicas replicate the data asynchronously. During this process, we use a quorum-based replication model: a subset of replicas replicates the data synchronously, while the remaining ones replicate it asynchronously. We use a journal component to ensure the replication. When the active cell replica receives a transaction, it applies all the business rules, and when the information is ready to be stored, it does not write directly to the database table. Instead, it uses our journal component, which coordinates persistence across the replicas and waits for acknowledgment from the calling replica and one of the quorum replicas. When that's done, the journal component sends an OK back to the authorizer application. If something goes wrong, the journal sends an undo request to everyone, so they stay consistent over time.
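Since the journal component is Itaú's own software and not public, the following Python sketch only illustrates the general quorum idea described above, under stated assumptions: wait for the local acknowledgment plus at least one quorum replica, let the remaining replicas catch up asynchronously, and undo everywhere if the write cannot be acknowledged. Class and method names are invented for the example.

```python
# Heavily simplified sketch of a quorum-style journal write; not Itaú's actual component.
class Replica:
    def __init__(self, name):
        self.name, self.records = name, []
    def write(self, record):      # returns True on acknowledgment
        self.records.append(record)
        return True
    def undo(self, record):
        if record in self.records:
            self.records.remove(record)

def journal_write(record, local, quorum_replicas, async_replicas):
    """Persist via the local replica plus at least one quorum replica, else undo everywhere."""
    acked = [r for r in [local, *quorum_replicas] if r.write(record)]
    if local in acked and len(acked) >= 2:        # local ack plus at least one quorum ack
        for r in async_replicas:                  # remaining replicas would catch up
            r.write(record)                       # asynchronously (done inline here)
        return "OK"
    for r in acked:                               # something went wrong: roll everyone back
        r.undo(record)
    return "UNDO"

a, b, c = Replica("active"), Replica("standby-1"), Replica("standby-2")
print(journal_write({"account": "123", "balance": 900}, a, [b], [c]))
```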
And this is our cell replica architecture. The cell replica has all the applications and components needed to run the workload. The router layer sends the transaction requests over a gRPC connection to the authorizer application. The authorizer applies all the business rules, updates the customer's account balance, and, when it needs to persist the information, uses our journal component to ensure replication and durability. When that's done, it sends the response to an Amazon SQS queue. Using SQS helps us decouple the synchronous authorization flow from the dispatch process. When the authorizer application sends the response to SQS, it also sends an OK back to the router layer, and then the router layer commits the Kafka offset for that message. At the same time, the dispatcher application receives the responses and writes them to DynamoDB, always using our journal component to ensure replication and durability, and then it sends the response to the requester as well as to the other platform services, such as the account statement. There's a big change here: we no longer depend on the mainframe to process our customers' transactions. This makes the system faster, simpler, and much easier to evolve.
Okay, so let's do a quick recap about how the cell-based architecture helps us to meet our availability requirements. If one replica fails, another synchronized one quickly takes over because the data is already replicated. But if two or more replicas fail, then that cell becomes unavailable. However, it impacts only a portion of our customers because the other cells keep running. If you need to scale, it's simple. We just add more cells rather than increasing the size of existing ones. But you might need to migrate some data depending on how your system handles the partitioning.
But the cell-based architecture is not perfect, and it's not for everyone. It brings huge benefits for critical systems, but it also adds complexity. You need to invest in a router layer, since your customers' data is isolated in a cell. You should also avoid migrating customers, because it's an expensive and data-heavy process. If you want to go deeper into this topic, I recommend a great white paper by Hobbs Oliveira that was even mentioned in last year's re:Invent keynote.
Why DynamoDB: ACID Transactions, Predictable Performance, and Avoiding Hot Partitions
And do you remember our non-negotiable requirement, account balance accuracy? Let's take a deep dive into the AWS services that make this happen. Eduardo, can you tell us more about this? Sure, thank you, Edinei. Now that you know the architecture of the platform, let's talk about which database to choose and, more importantly, why.
Remember, Itaú's main requirement is to keep the account balance safe, accurate, and consistent at all times. On top of that, they need to authorize transactions in under 100 milliseconds, authorize 6,000 transactions per second at peak times, and allow some big accounts to authorize 1,000 transactions per second. The database you pick will determine whether you can hit these numbers or not. Together with GFT, an AWS Professional Services partner, we tested SQL Server on Amazon Relational Database Service using an in-memory database, Amazon Quantum Ledger Database, Amazon Aurora, and Amazon DynamoDB. And as you have already seen, we chose DynamoDB: it gives the high availability Itaú needs, meeting the 99.99% requirement, it gives predictable performance regardless of the size of the table, and it also means less operational work.
Before we go technical, let's talk about two critical principles. The first one is idempotency: if you receive the same request multiple times, you should process only the first one and return the same result for the following ones. The other principle is isolation: when you have two processes trying to update the same record at the same time, one process's change must not override the other's. These two principles are essential for financial data that we must keep accurate at all times. So we need operations that are atomic, consistent, isolated, and durable.
Who knows that DynamoDB provides ACID operations? Yeah, in other venues where I've presented the same thing, most people didn't know that.
I will tell you how DynamoDB helped us achieve this. Let's talk about the table schema we chose; it must match what we are trying to do. The design needs to provide the idempotency of the transactions and the isolation of the balance record changes. You can see here that we can use the account ID as the partition key and the transaction ID as the sort key for the transactions table. But I think some of you have already spotted a problem: for that big account that needs 1,000 transactions per second, this creates a hot key.
But with a small change, we can keep the idempotency that we need. If we receive the same transaction multiple times, we still detect the duplicates, and with this change we spread the transactions across the partitions, avoiding the hot key problem. Now you might be thinking, okay, let's create a global secondary index so we can list all the transactions of an account. But that's not a good way to go, because you will create a hot partition on your global secondary index, which will throttle and create back pressure on the main table, which will then also throttle.
The better approach here is to offload your balance change records to an S3 bucket using DynamoDB Streams and use Athena to query them. You might also notice that on the balances table we add a version property. This is for the optimistic locking that I will talk about in a bit, and it provides the isolation. Before we talk about transactions, I will do a quick quiz. What's the size limit for a single partition key value? How much data can you put under the same partition key?
So raise your hands if you think it's 10 gigabytes. Okay, that was a bit of a trick question. I think most of you would raise your hands for unlimited data, and I'll tell you why it's unlimited. The 10 gigabytes that some of you raised your hands for applies when you use local secondary indexes, because it's based on the item collection. In DynamoDB, an item collection is all the items that share the same partition key value, and if you use local secondary indexes, that 10 gigabyte limit kicks in.
If you're not using local secondary indexes, it's unlimited, because the item collection is spread across multiple physical partitions. And for us, it was really eye-opening when we were testing and discovered that we don't have any limit on a single partition key. So now let's go back to transactions and how we get the ACID operations that we need. When you deal with transactions in relational databases, you begin the transaction and you commit or roll back; it's straightforward. In DynamoDB, you use the TransactWriteItems API call, and it provides optimistic locking using conditions.
For the transaction record, we use an attribute_not_exists condition, so it checks that the transaction ID doesn't already exist in the table. And for the balance update, we use a condition comparing whether the version stored in the table is the same one we used when authorizing the transactions. If it isn't, that means some other process changed the balance, and we need to go back and try again. You also see that there is a ClientRequestToken property at the beginning of the API call; that makes the TransactWriteItems call itself idempotent.
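As a concrete reference, here is what such a call can look like with boto3. The table names, attribute names, and key shapes are illustrative assumptions; the conditions and the ClientRequestToken follow the pattern described in the talk.

```python
# Sketch of a TransactWriteItems call following the pattern above.
# "transactions", "balances", and the attribute names are hypothetical.
import uuid
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

def authorize(account_id: str, transaction_id: str, amount: str, expected_version: str):
    return dynamodb.transact_write_items(
        ClientRequestToken=transaction_id,   # idempotency token for the whole call (a UUID works well)
        TransactItems=[
            {   # insert the transaction record only if it was never seen before (idempotency)
                "Put": {
                    "TableName": "transactions",
                    "Item": {
                        "transaction_id": {"S": transaction_id},   # partition key spreads the load
                        "account_id": {"S": account_id},
                        "amount": {"N": amount},
                    },
                    "ConditionExpression": "attribute_not_exists(transaction_id)",
                }
            },
            {   # update the balance only if nobody else changed it (optimistic locking)
                "Update": {
                    "TableName": "balances",
                    "Key": {"account_id": {"S": account_id}},
                    "UpdateExpression": "SET balance = balance - :amt, version = :new_version",
                    "ConditionExpression": "version = :expected_version",
                    "ExpressionAttributeValues": {
                        ":amt": {"N": amount},
                        ":expected_version": {"S": expected_version},
                        ":new_version": {"S": str(uuid.uuid4())},
                    },
                }
            },
        ],
    )
```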
So if we send the same call to DynamoDB again for any reason, such as a retry, DynamoDB will return the same response. And what happens if a condition fails? DynamoDB returns information saying which item in the API call didn't match the condition we asked for.
In Itaú's case, because of the active-standby cell architecture we just saw, a failed condition is rare, but it can happen when we are failing over to another replica, for example. In these rare situations, we accept the slowdown and handle it properly.
If the problem was that the balance version wasn't the one stored in the table, we invalidate any cache we have in memory, read the balance again from the table, and reauthorize the transactions. But if the problem was a duplicated transaction, we read the stored response for that transaction from the table, send that response right away, and authorize the remaining ones. This adds latency, but it keeps the balance safe, and that is the non-negotiable requirement.
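Building on the previous sketch, a hedged example of that recovery logic might look like this. The order of CancellationReasons matches the order of the TransactItems, and the three helper functions are hypothetical placeholders, not Itaú's actual code.

```python
# Sketch of handling a cancelled transaction, reusing the `authorize` function
# and `dynamodb` client from the previous snippet.
def read_stored_response(transaction_id):    # hypothetical: look up the result already stored
    ...
def invalidate_balance_cache(account_id):    # hypothetical: drop the in-memory balance
    ...
def read_balance_version(account_id):        # hypothetical: re-read the version from the table
    ...

def authorize_with_recovery(account_id, transaction_id, amount, expected_version):
    try:
        return authorize(account_id, transaction_id, amount, expected_version)
    except dynamodb.exceptions.TransactionCanceledException as err:
        reasons = err.response["CancellationReasons"]   # one entry per TransactItems element
        if reasons[0].get("Code") == "ConditionalCheckFailed":
            # duplicate transaction: return the response we already produced for it
            return read_stored_response(transaction_id)
        if reasons[1].get("Code") == "ConditionalCheckFailed":
            # stale balance version: drop caches, re-read, and reauthorize
            invalidate_balance_cache(account_id)
            fresh_version = read_balance_version(account_id)
            return authorize(account_id, transaction_id, amount, fresh_version)
        raise
```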
And now let's talk about performance. We know the data is safe, and one thing to note is that DynamoDB wasn't the fastest database we tried, but it gives us something more valuable: consistency. When you use relational databases, your statistics change over time, which affects your execution plans and slows down your queries, and you might need a DBA tweaking things all the time. With DynamoDB, regardless of the size of the table, the speed and the performance are consistent.
For the most demanding scenario, those big accounts that need 1,000 transactions per second, we could reach 1,200 transactions per second on average with 79 milliseconds of latency. This is really good for Itaú, and it shows that they can achieve the performance they need. And to reach the 6,000 transactions per second, we only scale the number of cells, so it's pretty straightforward.
And how can you get the same performance we got here? First of all, avoid the hot partitions we already talked about in the schema slide: choose the right schema for what you need and spread the transactions across the partitions. The second thing is to use single-threaded processing. At Itaú, when we receive a batch of transactions from Kafka to process, we group them by account ID in memory and send the transactions of the same account ID to the same thread.
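A minimal sketch of that grouping, under the assumption of a fixed pool of single-threaded workers, could look like the following; the worker count and the process_account placeholder are illustrative, not Itaú's actual values.

```python
# Illustrative sketch: group a Kafka batch by account ID so that one account
# is always handled by the same single-threaded worker within this process.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

NUM_WORKERS = 8
executors = [ThreadPoolExecutor(max_workers=1) for _ in range(NUM_WORKERS)]  # one thread each

def process_account(account_id, transactions):
    # placeholder: authorize the account's transactions sequentially,
    # e.g. packing them into TransactWriteItems calls
    print(account_id, len(transactions))

def process_batch(batch):
    grouped = defaultdict(list)
    for txn in batch:                                  # group the batch by account in memory
        grouped[txn["account_id"]].append(txn)
    futures = []
    for account_id, txns in grouped.items():
        worker = executors[hash(account_id) % NUM_WORKERS]   # same account -> same worker
        futures.append(worker.submit(process_account, account_id, txns))
    for f in futures:
        f.result()                                     # finish the batch before committing offsets

process_batch([{"account_id": "A", "amount": 10}, {"account_id": "B", "amount": 5},
               {"account_id": "A", "amount": 7}])
```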
What we need here is to never have multiple processes or multiple threads processing the transactions of the same account. This way we avoid race conditions and the resulting condition failures on the transactional items, and we keep the performance at its peak. Another thing you can use is the best feature of the TransactWriteItems API call: it can pack up to 100 items in the same call.
For Itaú, that means we can have one financial transaction and one balance update for 50 accounts, or 100 financial transactions and one balance update for a single account. And what happens if you don't deal with the race condition? This is the number we got with only one thread processing the transactions. If you have two concurrent threads processing, the throughput goes down 15% and the latency more than doubles.
And if you have four threads processing the same account, the throughput goes down another 15%, and the latency is 5.6 times higher, which is unacceptable for our needs. And you might be thinking it was easy to get these numbers. No, it wasn't.
In the AWS Professional Services accounts, we reached these numbers right away, but when we tested the same application at Itaú, we only got 680 transactions per second and 130 milliseconds of latency. So we started talking with AWS Enterprise Support, and guess what we found? The number and the size of the IAM policies were affecting DynamoDB performance. We had double the policies compared to the Professional Services accounts. So we dealt with it, fixed the roles and policies of the account, and then we got the performance we needed. One important thing here: I ran another test three weeks ago, and the number of policies is no longer affecting DynamoDB performance. The DynamoDB service team, listening to the customer and to the case we opened, helped us figure out what was happening, and they improved the service for all of you.
So let's have a quick summary of why DynamoDB was Itaú's choice of database. First of all, high availability: there's no downtime to upgrade versions, and it meets the 99.99% availability requirement that Itaú needs. It has the ACID transactions they need for the balance updates. It gives predictable performance for any table size, and it means less operational overhead. If you are using DynamoDB in a critical system that needs high performance, be aware of DynamoDB's limits. Be careful with hot partitions; this is really important. And if you have two processes trying to change the same record at the same time, you need to avoid the race conditions. Here is a QR code with more best practices for using DynamoDB. And now I will hand over to Edinei again to talk about Itaú's launch strategy.
Dark Launch Strategy: Safely Migrating from Mainframe to Cloud with Shadow Traffic
Thanks, Eduardo. Okay, you have your architecture defined, as well as your database, you've tested it a lot, and you know it will work, but you still need a safe method to replace your current system with the new one. So we're going to show how we relied on the dark launch strategy to switch from our current checking account platform to the new one. But first, let's see our target architecture. Remember all the products that Wilson mentioned in the beginning? Instant payments with Pix, debit cards used at the POS, online investments, and everything else that needs to update an account balance. They send transaction requests into our Kafka command topic. Then we apply the full table mapping in the router layer. When the authorizer receives the transactions, it applies all the business rules. We use our journal component to ensure replication and durability, and then we send the response to an SQS queue to decouple the synchronous authorization process from the dispatch of the response. And then we send the response to the requester as well as to the other platform services using Kafka topics.
So, okay, building this architecture is just half the job. The other half is making sure that it behaves exactly as expected. That's where the dark launch comes in. We're using the dark launch in shadow traffic mode. In this mode, both architectures consume the Kafka command requests. Our current architecture consumes the requests, processes them, makes sure that all the business rules are applied and the customers' account balances are updated correctly, and generates the official response for the products requesting the balance update. At the same time, the new architecture also processes every transaction, one by one, and this allows us to validate, transaction by transaction, whether it behaves exactly as we expected.
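A simple, hypothetical way to express that per-transaction comparison is sketched below; the response fields and the mismatch handling are assumptions for illustration, not the actual validation tooling.

```python
# Illustrative sketch of shadow-traffic validation: compare the current platform's
# official response with the new platform's shadow response for each transaction.
def compare_responses(current: dict, shadow: dict) -> list:
    """Return the fields where the new platform disagrees with the current one."""
    fields = ("status", "balance_after", "reason_code")
    return [f for f in fields if current.get(f) != shadow.get(f)]

def validate(transaction_id: str, current: dict, shadow: dict) -> None:
    mismatches = compare_responses(current, shadow)
    if mismatches:
        # in practice this would feed a mismatch report or observability pipeline
        print(f"{transaction_id}: divergence in {mismatches}")

validate("txn-42",
         {"status": "APPROVED", "balance_after": 900, "reason_code": None},
         {"status": "APPROVED", "balance_after": 905, "reason_code": None})
```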
While the two systems do not yet align perfectly, we keep doing this, fixing bugs and implementing missing business rules. When both systems align perfectly, we start migrating our customers from the current architecture to the new one. And when we reach 100% of our customers being processed on the new architecture, we will be ready to fully retire the current one.
Wilson, can you share our next steps? Sure. Well, we're entering the final stage of our transformation, and in my opinion the most challenging one. We will complete the adoption of all products onto our new platform, a step that will add around 500 million new transactions every month, with some peak days processing more than 120 million transactions. We will also operate at the cell level, bringing great resilience, availability, and deployment safety as we fully transition to a cell-based architecture. And perhaps the most challenging part: we'll finish the development phase and begin to evaluate our business rules in parallel, leveraging generative AI to accelerate the process.
Key Lessons and Next Steps: Collaboration, Testing, and the Road to 2028
After more than 50 years of accumulated rules, many are outdated or no longer relevant to us, so we expect generative AI to support us in identifying, analyzing, and modernizing them faster than ever before. And who knows, we hope to return to re:Invent next year, in 2026, to share how our journey has progressed and the new findings we've discovered along the way. So as we look ahead to what's next, it's also a good moment to reflect on what we've learned along the way, because every step in this transformation, every challenge, every experiment, tells us something valuable that others can also apply in their own journeys.
If there's one thing we've learned, it's that transformation doesn't happen in isolation. It takes trust, collaboration, and keeping the requirements at the center of your decisions. Building a strong foundation was essential to make speed and scale possible. And never stop testing your ideas, understanding how each AWS service best fits and performs in your specific use case. Remember, complex solutions aren't always the best solutions.
Always think about the trade-offs in operation, observability, and cost. Practice first, simulate, measure, and validate the outputs before going to production. That's the best way to move fast and stay safe. And none of this would be possible without great partners. Great connections make all the difference. These have been our key takeaways: collaboration, experimentation, and shared purpose. That's what drives a real transformation.
Well, when companies share common cultural principles, it leads to an enduring partnership, as it has between GFT, AWS, and Itaú. The customer always comes first. Yeah, you know, customer obsession is Amazon's first leadership principle, and you saw that happening when the DynamoDB team updated the service during the support case that we opened, and all of you will benefit from that.
Yeah, so speaking of continuous improvement, AWS offers more than 1,000 free digital courses, lab simulations, and training developed with the service teams, now also including some select re:Invent launches that you are seeing these days. Scan this QR code and give it a try. And now, to wrap up, I hope we've given you good ideas for your own projects. I know that your time is valuable, and we are really honored to have you here in this session.
Please answer the survey for us; it's really important to help us check how we are doing. We will be available outside if you want to talk more about the architecture, ask questions, or dive deeper into anything. I hope you have a great re:Invent, make connections, and learn a lot. Thank you, everyone.
; This article is entirely auto-generated using Amazon Bedrock.