<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ilyes Houdjedje</title>
    <description>The latest articles on DEV Community by Ilyes Houdjedje (@ihoudjedje).</description>
    <link>https://dev.to/ihoudjedje</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F63896%2F0edd8352-b31f-4051-b87a-85263648e464.png</url>
      <title>DEV Community: Ilyes Houdjedje</title>
      <link>https://dev.to/ihoudjedje</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ihoudjedje"/>
    <language>en</language>
    <item>
      <title>Scaling Up To 1M Users In GCP &amp; AWS (2/2)</title>
      <dc:creator>Ilyes Houdjedje</dc:creator>
      <pubDate>Wed, 11 Nov 2020 18:10:59 +0000</pubDate>
      <link>https://dev.to/ihoudjedje/scaling-up-to-1m-users-in-gcp-aws-2-3-563c</link>
      <guid>https://dev.to/ihoudjedje/scaling-up-to-1m-users-in-gcp-aws-2-3-563c</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As we've seen in &lt;a href="https://dev.to/ihoudjedje/theory-of-scaling-up-to-10m-users-in-gcp-aws-1-4-4oh2"&gt;the first part&lt;/a&gt; of this series, we managed to adapt our infrastructure and go from 1 user up to 100k.&lt;/p&gt;

&lt;p&gt;However, once we cross that number, shortages and new issues are expected to appear, and we will need to adjust accordingly. Let's have a look at these failure points and how we are going to address them.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage III: &amp;gt;100k Users&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issues of "Stage II"
&lt;/h3&gt;

&lt;p&gt;Looking back at our current design, we have the common three-tier application in which everything is scalable and highly resilient. However, growing past 100K users has created a new issue in our architecture, specifically in the data layer.&lt;/p&gt;

&lt;p&gt;Since we are manually managing a PostgreSQL database installed on VMs in Compute Engine, the solution doesn't scale well. As the total number of users grows, so does the number of concurrent users, and with them the number of concurrent reads and writes hitting our VMs.&lt;/p&gt;

&lt;p&gt;There are a couple of ways to improve this. One, we can increase the disk size on our current VM, which also increases its &lt;a href="https://en.wikipedia.org/wiki/IOPS"&gt;IOPS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Two, we can add the read replica that we mentioned in &lt;a href="https://dev.to/ihoudjedje/theory-of-scaling-up-to-10m-users-in-gcp-aws-1-4-4oh2"&gt;Part 1&lt;/a&gt;, but this is a lot of overhead: we need to provision the VMs, manage them, and manage the PostgreSQL databases on them. We want to take as cloud-native an approach as possible, so we can instead offload our database load using a Redis cache, for example.&lt;/p&gt;
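
&lt;p&gt;Option one can even be done without recreating the VM. Here is a minimal sketch with hypothetical disk, volume, and device names; persistent disks on GCP and EBS volumes on AWS can both be resized online:&lt;/p&gt;

```shell
# GCP: grow the PostgreSQL VM's persistent disk in place.
# Larger persistent disks also get higher IOPS/throughput caps.
gcloud compute disks resize pg-data-disk --size=500GB --zone=europe-west2-a

# AWS equivalent: grow the EBS volume backing the EC2 instance.
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 500

# Inside the VM, the filesystem still has to be grown to use the new space
# (resize2fs for ext4; the device name depends on how the disk is attached).
sudo resize2fs /dev/sdb
```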

&lt;p&gt;Now, let's get back to our database and storage issue. As you may know, there is a variety of database and storage services in GCP and AWS that you can adopt based on your use cases and scenarios. Let's have a look at the following diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffdlwq6o8x59hpe7koumd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Ffdlwq6o8x59hpe7koumd.png" alt="Alt Text" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For database options, for instance, we have a choice between relational databases, NoSQL databases, and data warehouses. If we have typical transactions to process, a relational database is a good choice. In our case, we will focus on the in-memory database and the relational database, as those are what we need at this stage.&lt;/p&gt;

&lt;p&gt;Let's take another look at the data layer of our current architecture, using AWS as an example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw8wkgfgdb5w7r1a767o1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw8wkgfgdb5w7r1a767o1.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the current design, we run PostgreSQL on EC2 ourselves. Database engines are a fast-evolving technology with new updates released frequently, so we need to keep the software up to date ourselves. Meanwhile, as our user base grows, the database disk grows as well, and there is no easy way for us to extend its disk space. There is also no managed read replica to separate read and write operations, which would help reduce the load on the master database.&lt;/p&gt;

&lt;h3&gt;
  
  
  GCP Cloud SQL &amp;amp; AWS RDS
&lt;/h3&gt;

&lt;p&gt;With a managed service like Cloud SQL or RDS, we can smoothly upgrade our database during a maintenance window that we specify. When the database disk is running out of space, the managed service can extend it, and we can even define the threshold that triggers the automatic increase. This disk resize doesn't affect our data at all, and we don't even need to stop the service, as it can be done on the fly.&lt;/p&gt;

&lt;p&gt;In addition, a managed service can also periodically create snapshots of the database so that we can restore from these managed backups, and it provides the failover replica and the read replica to help scale the database easily. The failover replica gives us both reliability and high availability.&lt;/p&gt;
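
&lt;p&gt;As a rough sketch, here is how that provisioning could look from the CLI; all instance names, tiers, and sizes below are hypothetical:&lt;/p&gt;

```shell
# GCP: a managed PostgreSQL instance with HA (REGIONAL = failover replica)
# and automatic storage growth.
gcloud sql instances create scaleup-db \
  --database-version=POSTGRES_12 \
  --tier=db-custom-2-7680 \
  --region=europe-west2 \
  --availability-type=REGIONAL \
  --storage-auto-increase

# Add a read replica to offload read traffic from the primary.
gcloud sql instances create scaleup-db-replica --master-instance-name=scaleup-db

# AWS equivalents: Multi-AZ RDS with storage autoscaling, plus a read replica.
aws rds create-db-instance --db-instance-identifier scaleup-db \
  --engine postgres --db-instance-class db.m5.large \
  --allocated-storage 100 --max-allocated-storage 500 \
  --master-username postgres --manage-master-user-password \
  --multi-az
aws rds create-db-instance-read-replica \
  --db-instance-identifier scaleup-db-replica \
  --source-db-instance-identifier scaleup-db
```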

&lt;p&gt;To sum this up, we can list it as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Current Solution&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unmanaged database&lt;/li&gt;
&lt;li&gt;Not easy to extend disk space&lt;/li&gt;
&lt;li&gt;Not scalable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Using Cloud SQL or RDS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Auto software upgrade&lt;/li&gt;
&lt;li&gt;Easy to extend disk space (vertically scalable)&lt;/li&gt;
&lt;li&gt;Automatic storage increase&lt;/li&gt;
&lt;li&gt;Managed backups&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Memorystore for Redis
&lt;/h3&gt;

&lt;p&gt;Memorystore is also an important part of the solution, routing read and write traffic away from the database and into the cache. Memorystore for Redis is used to handle fast read and write operations. We can keep frequently accessed data in the cache, like users' session information, game leaderboard metadata, or shopping cart items, for instance. With it, we gain both low-latency access to data and a significant reduction in database load.&lt;/p&gt;

&lt;p&gt;Furthermore, we have the ability to increase memory on the fly without stopping the cache service, and we can also add a standby replica that provides high availability when the primary instance is down or in case of a zone failure.&lt;/p&gt;
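
&lt;p&gt;A minimal sketch of both the provisioning and the cache-aside usage, with hypothetical names, sizes, and keys:&lt;/p&gt;

```shell
# GCP: a Standard Tier Memorystore instance includes a replica for HA.
gcloud redis instances create scaleup-cache \
  --size=1 --region=europe-west2 --tier=standard

# AWS equivalent: ElastiCache for Redis with automatic failover.
aws elasticache create-replication-group \
  --replication-group-id scaleup-cache \
  --replication-group-description "session cache" \
  --engine redis --cache-node-type cache.t3.small \
  --num-cache-clusters 2 --automatic-failover-enabled

# Cache-aside in practice: the app reads from Redis first, falls back to
# the database on a miss, then stores the result with a TTL so stale
# entries expire on their own.
redis-cli SET session:42 '{"user":"alice"}' EX 300
redis-cli GET session:42
```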

&lt;p&gt;Now let's see how we can improve the architecture and achieve that on GCP and AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn8ouwtq2r55mpqsgawvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fn8ouwtq2r55mpqsgawvs.png" alt="Alt Text" width="800" height="490"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyzgfdz5ak2k603gyjuzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyzgfdz5ak2k603gyjuzq.png" alt="Alt Text" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, as you can see, we have added two types of databases to the solution. We adopt the fully managed relational database service to easily create the failover replica and read replica. Memorystore for Redis is also added between the backend and the data layer. At this stage, the design focuses more on &lt;strong&gt;services rather than servers&lt;/strong&gt;. In doing so, we address the database issues, scalability, and high availability with a managed relational database service and Memorystore.&lt;/p&gt;

&lt;p&gt;That's it for Part 2! In the final part, we're going to look at the drawbacks of this design and how we can improve it to scale up to 1M happy users!&lt;/p&gt;

&lt;p&gt;Thanks for reading all the way here! Please feel free to leave a comment, reach out to me, or even suggest modifications. I highly appreciate it!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>architecture</category>
      <category>gcp</category>
      <category>aws</category>
    </item>
    <item>
      <title>Scaling Up To 1M Users In GCP &amp; AWS (1/2)</title>
      <dc:creator>Ilyes Houdjedje</dc:creator>
      <pubDate>Thu, 29 Oct 2020 12:55:43 +0000</pubDate>
      <link>https://dev.to/ihoudjedje/theory-of-scaling-up-to-10m-users-in-gcp-aws-1-4-4oh2</link>
      <guid>https://dev.to/ihoudjedje/theory-of-scaling-up-to-10m-users-in-gcp-aws-1-4-4oh2</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Today, the cloud has opened up countless solutions and enabled us to think beyond what we once thought impossible. Looking back 20 years to how PayPal launched, hosting their website on a local development machine by day while working on it by night, is a fascinating example of how far we've come today thanks to the cloud.&lt;/p&gt;

&lt;p&gt;So let's see how we can scale an application from 1 user, which is probably you, the developer, all the way up to 1 million users, and compare how that looks on the two most popular cloud providers today: Google Cloud Platform and Amazon Web Services.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage I: "You" to 1k Users&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's imagine that you have just finished developing an amazing app called "ScaleUp" on your local machine. You have had some friends and family testing it for a while, and it now seems stable enough to host somewhere in the cloud; think of that as making your own computer accessible to everyone on the globe!&lt;/p&gt;

&lt;p&gt;Google Cloud infrastructure is available in 24 regions, the same number as Amazon's as of Q1 2020. You will want to deploy your code and services to the region nearest to your users. Let's assume that region is in Europe, specifically "London".&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution Design
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvkrkk28tt44w1h675ela.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fvkrkk28tt44w1h675ela.png" alt="Alt Text" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After hosting our beautiful web app in our chosen region, on a VM instance that runs a web server and a PostgreSQL database, we can then assign a domain name that resolves to our VM's external IP address and direct traffic to it. The full setup looks as follows:&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution Setup
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;1 Web Server

&lt;ul&gt;
&lt;li&gt;[GCP] Compute Engine&lt;/li&gt;
&lt;li&gt;[AWS] Elastic Compute Cloud (EC2)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Running Web App&lt;/li&gt;
&lt;li&gt;Running PostgreSQL DB&lt;/li&gt;
&lt;li&gt;DNS for VM's External IP

&lt;ul&gt;
&lt;li&gt;[GCP] Cloud DNS&lt;/li&gt;
&lt;li&gt;[AWS] Route 53&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
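
&lt;p&gt;As a sketch, the GCP side of this setup could be provisioned from the CLI roughly as follows; the instance name, DNS zone, domain, and IP address are all placeholders:&lt;/p&gt;

```shell
# One VM in the London region to run both the web app and PostgreSQL.
gcloud compute instances create scaleup-web \
  --zone=europe-west2-a --machine-type=e2-medium \
  --image-family=debian-11 --image-project=debian-cloud

# Cloud DNS: point the domain at the VM's external IP.
gcloud dns record-sets create www.scaleup.example. \
  --zone=scaleup-zone --type=A --ttl=300 --rrdatas=203.0.113.10

# AWS equivalents: an EC2 instance, plus a Route 53 A record
# (the Route 53 record change is submitted as a JSON change batch).
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium --count 1
```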

&lt;h2&gt;
  
  
  &lt;strong&gt;Stage II: &amp;gt;10k Users&lt;/strong&gt;
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Issues of "Stage I"
&lt;/h4&gt;

&lt;p&gt;Now that we have moved into stage two and I have around 10K users, I will start to experience some odd behavior and performance issues. I can summarize these into three types:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Single Instance Failure&lt;/strong&gt;&lt;br&gt;
I only have a single VM, and different types of errors can occur on that single instance: VM issues, web service issues, or database issues.&lt;/p&gt;

&lt;p&gt;Whenever that happens, I need to either restart my services or reboot my VM, which really means downtime for my whole application. That is the cost of a single instance failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Single Zone Failure&lt;/strong&gt;&lt;br&gt;
Imagine that my VM is hosted in zone "A" of the "London" region, and for some unlikely reason something happens to that zone. Since I don't have any backup instances in another zone, this will, again, cause downtime for my whole application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Scalability Issues&lt;/strong&gt;&lt;br&gt;
As my application grows, more users access it concurrently, which might cause some inconsistent behavior.&lt;br&gt;
There is no way to mitigate that right now, since my single VM has no monitoring, auto-scaling, or auto-healing.&lt;/p&gt;

&lt;p&gt;Based on these issues, we can see that my current architecture is flawed by a single point of failure: one failure will cause my whole system to go down.&lt;/p&gt;

&lt;p&gt;Based on all of that, what we really need is a scalable and resilient solution. &lt;em&gt;Scalable&lt;/em&gt; simply means that my application needs to work for one user all the way up to millions of users. &lt;em&gt;Resilient&lt;/em&gt;, on the other hand, means that my application needs to be highly available: it must continue to function when an unexpected failure happens.&lt;/p&gt;

&lt;p&gt;Now let's see how we can do that on GCP and AWS.&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution Design
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmex02orqpmyev2jlx9d7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmex02orqpmyev2jlx9d7.png" alt="Alt Text" width="800" height="442"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw8wkgfgdb5w7r1a767o1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fw8wkgfgdb5w7r1a767o1.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution Setup
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Frontend &amp;amp; Backend

&lt;ul&gt;
&lt;li&gt;[GCP] Managed Instance Group&lt;/li&gt;
&lt;li&gt;[AWS] Auto Scaling Group&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Multi-zone deployment&lt;/li&gt;
&lt;li&gt;External IP on Network LB

&lt;ul&gt;
&lt;li&gt;[GCP] Cloud Load Balancer&lt;/li&gt;
&lt;li&gt;[AWS] Elastic Load Balancer&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;PostgreSQL server in the Data layer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As the application grows, the number of VM instances grows as well, and we can clearly see that managing them is getting difficult, though still feasible.&lt;/p&gt;

&lt;p&gt;Now, what if you need to scale to tens of VM instances? And when one of them goes down, you have to take action, right?&lt;br&gt;
This is where a Managed Instance Group (GCP) or an Auto Scaling Group (AWS) comes into the solution, assisting with auto-scaling and auto-healing.&lt;/p&gt;

&lt;p&gt;A MIG/ASG is built around a single base instance template. When the service needs to scale out with a new instance, it creates one from this base template.&lt;/p&gt;

&lt;p&gt;In addition, the MIG/ASG has three major features:&lt;br&gt;
&lt;strong&gt;1. Auto-Scaling&lt;/strong&gt;&lt;br&gt;
The autoscaler can create and destroy VM instances based on load metrics that you define yourself.&lt;br&gt;
For example, you can use network throughput, CPU usage, or memory usage as the scaling metric.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Auto-Healing&lt;/strong&gt;&lt;br&gt;
The auto-healer recreates any instance that becomes unhealthy, and "unhealthy" is also something you define yourself, via a custom health check.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Regional Distribution&lt;/strong&gt;&lt;br&gt;
You can distribute the VM instances across multiple zones, which makes your solution more highly available.&lt;/p&gt;
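
&lt;p&gt;On the GCP side, these three features could be wired up from the CLI roughly as follows; names and thresholds are hypothetical, and the AWS equivalent would be an EC2 launch template plus an Auto Scaling Group:&lt;/p&gt;

```shell
# Base instance template that new instances are created from.
gcloud compute instance-templates create scaleup-template \
  --machine-type=e2-medium --image-family=debian-11 --image-project=debian-cloud

# Regional (multi-zone) managed instance group built from that template.
gcloud compute instance-groups managed create scaleup-mig \
  --region=europe-west2 --template=scaleup-template --size=3

# Auto-scaling: grow/shrink between 3 and 10 instances on CPU load.
gcloud compute instance-groups managed set-autoscaling scaleup-mig \
  --region=europe-west2 --min-num-replicas=3 --max-num-replicas=10 \
  --target-cpu-utilization=0.6

# Auto-healing: recreate instances that fail an HTTP health check.
gcloud compute health-checks create http scaleup-hc --port=80
gcloud compute instance-groups managed update scaleup-mig \
  --region=europe-west2 --health-check=scaleup-hc --initial-delay=300
```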

&lt;p&gt;Taken together, these three features solve the issues we identified earlier: &lt;em&gt;Scalability&lt;/em&gt; and &lt;em&gt;High Availability&lt;/em&gt;. With &lt;em&gt;Auto-Scaling&lt;/em&gt;, we can make sure the application is always scaled to the current load.&lt;/p&gt;

&lt;p&gt;With &lt;em&gt;Auto-Healing&lt;/em&gt; and &lt;em&gt;Regional Distribution&lt;/em&gt;, the application can recreate unhealthy instances and survive a single zone failure.&lt;/p&gt;

&lt;p&gt;Now that we have the MIG/ASG configured, we need to distribute traffic across all the VM instances within the group. This is where the load balancer comes into play. Once the instances are deployed across multiple zones, the load balancer spreads the traffic across all of them, and a single IP address is attached to this network load balancer.&lt;/p&gt;

&lt;p&gt;In the previous architecture, you can see that we ran all of the services on the same VM instance. This is limiting from a scaling point of view, and also harmful to service availability.&lt;/p&gt;

&lt;p&gt;Imagine that one instance goes down, which causes every service running on that same instance to go down as well (think of the DB, for instance). This is why the architecture evolves: we decouple the application into the typical three-tier web application of frontend, backend, and data layer.&lt;/p&gt;

&lt;p&gt;A MIG/ASG is configured for both the frontend and backend services to provide scalability and high availability. At each layer, we deploy VM instances across multiple zones to cope with a single zone failure.&lt;/p&gt;

&lt;p&gt;At the frontend, the Cloud load balancer distributes the traffic to the frontend instances, and we use an internal load balancer to spread the traffic to the backend service, since the backend is deployed in a private network.&lt;/p&gt;

&lt;p&gt;In the data layer, PostgreSQL runs on Compute Engine/EC2 in HA (High Availability) mode, which provides master-to-replica replication for high availability purposes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Troubleshooting
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuqfw1l8732yy5bt9xt3d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fuqfw1l8732yy5bt9xt3d.png" alt="Alt Text" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Monitoring and logging easily help you troubleshoot your system when something goes wrong.&lt;/p&gt;

&lt;p&gt;"Stackdriver" for GCP and "CloudWatch" for AWS, is a comprehensive service that helps you monitor and make alert policies in abnormal situations.&lt;/p&gt;

&lt;p&gt;In addition, "Stackdriver" and "CloudWatch" debugger can help you to debug your service online without interrupting your service.&lt;/p&gt;

&lt;p&gt;So at this stage, we address scalability and resilience through the MIG/ASG and the load balancer. The application is also decoupled into three tiers so that a failure in one service doesn't interrupt the others.&lt;/p&gt;

&lt;p&gt;That's it for today! In the next part, we are going to look together at the limits of this solution and how we can adjust our architecture to scale up to &lt;strong&gt;100K users&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Thanks for reading all the way here! Please feel free to comment with your thoughts, reach out to me, or even suggest modifications. I really appreciate it!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>architecture</category>
      <category>gcp</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
