<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: nadermedhet148</title>
    <description>The latest articles on DEV Community by nadermedhet148 (@nadermedhet148).</description>
    <link>https://dev.to/nadermedhet148</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F896188%2Fd9088e1d-fec2-436d-a3e2-aa8186dc40d5.jpeg</url>
      <title>DEV Community: nadermedhet148</title>
      <link>https://dev.to/nadermedhet148</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nadermedhet148"/>
    <language>en</language>
    <item>
      <title>What Actually Is Consistent Hashing?</title>
      <dc:creator>nadermedhet148</dc:creator>
      <pubDate>Sat, 03 Sep 2022 13:06:19 +0000</pubDate>
      <link>https://dev.to/nadermedhet148/what-actually-is-consistent-hashing--54no</link>
      <guid>https://dev.to/nadermedhet148/what-actually-is-consistent-hashing--54no</guid>
      <description>&lt;h1&gt;
  
  
  First, What Is Hashing?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BtwoeeSi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tudgth8izleokz6gls8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BtwoeeSi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tudgth8izleokz6gls8h.png" alt="Image description" width="560" height="258"&gt;&lt;/a&gt;&lt;br&gt;
Hashing is a technique for mapping keys and values into a hash table using a hash function. It is done for faster access to elements; the efficiency of the mapping depends on the efficiency of the hash function used.&lt;/p&gt;

&lt;p&gt;Let a hash function H(x) map the value x to index x % 10 in an array. For example, the values [11, 12, 13, 14, 15] will be stored at positions {1, 2, 3, 4, 5} in the array (or hash table), respectively.&lt;/p&gt;
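
&lt;p&gt;A quick sketch of this in Python (the values and the table size are just the example above):&lt;/p&gt;

```python
# Toy hash table: H(x) = x % 10 decides the slot for each value.
def h(x):
    return x % 10

values = [11, 12, 13, 14, 15]
table = [None] * 10
for v in values:
    table[h(v)] = v

print(table)  # slots 1..5 hold 11..15, the rest stay empty
```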




&lt;h1&gt;
  
  
  Scaling Out: Distributed Hashing
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kh3tWw6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fmbw0qr3hye6vjliie3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kh3tWw6W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fmbw0qr3hye6vjliie3.png" alt="Image description" width="880" height="383"&gt;&lt;/a&gt;&lt;br&gt;
Now that we have discussed hashing, we’re ready to look into distributed hashing.&lt;/p&gt;

&lt;p&gt;In some situations, it may be necessary or desirable to split a hash table into several parts, hosted by different servers. One of the main motivations for this is to bypass the memory limitations of using a single computer, allowing for the construction of arbitrarily large hash tables (given enough servers).&lt;/p&gt;

&lt;p&gt;In such a scenario, the objects (and their keys) are distributed among several servers, hence names like sharded database or distributed cache.&lt;/p&gt;

&lt;p&gt;Such setups consist of a pool of caching servers that host many key/value pairs and are used to provide fast access to data originally stored (or computed) elsewhere. For example, to reduce the load on a database server and at the same time improve performance, an application can be designed to first fetch data from the cache servers, and only if it’s not present there—a situation known as cache miss—resort to the database, running the relevant query and caching the results with an appropriate key, so that it can be found next time it’s needed.&lt;/p&gt;

&lt;p&gt;But how does distribution take place? What criteria are used to determine which keys live on which servers?&lt;/p&gt;

&lt;p&gt;The simplest way is to take the hash modulo of the number of servers. That is, server = hash(key) mod N, where N is the size of the pool. To store or retrieve a key, the client first computes the hash, applies a modulo N operation, and uses the resulting index to contact the appropriate server (probably by using a lookup table of IP addresses). Note that the hash function used for key distribution must be the same one across all clients, but it need not be the same one used internally by the caching servers.&lt;/p&gt;
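
&lt;p&gt;A minimal sketch of this scheme; the server addresses and the choice of SHA-1 are illustrative assumptions, not part of any particular system:&lt;/p&gt;

```python
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical pool of size N = 3

def server_for(key: str, pool=SERVERS) -> str:
    # All clients must use the same hash function for key distribution.
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]  # server = hash(key) mod N

print(server_for("user:42"))  # every client computes the same server
```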




&lt;h1&gt;
  
  
  The Rehashing Problem
&lt;/h1&gt;

&lt;p&gt;This distribution scheme is simple, intuitive, and works fine. That is, until the number of servers changes. What happens if one of the servers crashes or becomes unavailable? Keys need to be redistributed to account for the missing server, of course. The same applies if one or more new servers are added to the pool; keys need to be redistributed to include the new servers. This is true for any distribution scheme, but the problem with our simple modulo distribution is that when the number of servers changes, most hashes modulo N will change, so most keys will need to be moved to a different server. So, even if a single server is removed or added, all keys will likely need to be rehashed into a different server.&lt;/p&gt;

&lt;p&gt;For example, with three servers A, B, and C, if we removed server C, we’d have to rehash all the keys using hash modulo 2 instead of hash modulo 3.&lt;/p&gt;

&lt;p&gt;Note that all key locations changed, not only the ones from server C.&lt;/p&gt;

&lt;p&gt;In the typical use case we mentioned before (caching), this would mean that, all of a sudden, the keys won’t be found because they won’t yet be present at their new location.&lt;/p&gt;

&lt;p&gt;So, most queries will result in misses, and the original data will likely need to be retrieved again from the source and rehashed, placing a heavy load on the origin server(s) (typically a database). This may very well degrade performance severely and possibly crash the origin servers.&lt;/p&gt;
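
&lt;p&gt;The churn described above is easy to measure. This sketch counts how many of 10,000 hypothetical keys change servers when the pool shrinks from 3 to 2; with a uniform hash, roughly two-thirds of the keys move:&lt;/p&gt;

```python
import hashlib

def bucket(key: str, n: int) -> int:
    # server index = hash(key) mod n
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % n

keys = [f"key-{i}" for i in range(10_000)]
moved = sum(bucket(k, 3) != bucket(k, 2) for k in keys)
print(f"{moved / len(keys):.0%} of keys changed server")
```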




&lt;h1&gt;
  
  
  Consistent Hashing
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qQCDZP4U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bm2zpk45rsf98b39cvra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qQCDZP4U--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bm2zpk45rsf98b39cvra.png" alt="Image description" width="560" height="420"&gt;&lt;/a&gt;&lt;br&gt;
The solution is &lt;strong&gt;consistent hashing&lt;/strong&gt;: a distribution scheme that does not depend directly on the number of servers, so that when servers are added or removed, the number of keys that need to be relocated is minimized. It was first described in an &lt;a href="https://www.cs.princeton.edu/courses/archive/fall09/cos518/papers/chash.pdf"&gt;academic paper&lt;/a&gt; from 1997.&lt;/p&gt;

&lt;p&gt;Consistent Hashing is a distributed hashing scheme that operates independently of the number of servers or objects in a distributed hash table by assigning them a position on an abstract circle, or hash ring. This allows servers and objects to scale without affecting the overall system.&lt;/p&gt;

&lt;p&gt;Imagine we mapped the hash output range onto the points of a circle. That means that the minimum possible hash value, zero, would correspond to an angle of zero; the maximum possible value (some big integer we’ll call INT_MAX) would correspond to an angle of 2𝝅 radians (or 360 degrees); and all other hash values would fit linearly somewhere in between. So, we could take a key, compute its hash, and find out where it lies on the circle’s edge. To find the server for a key, we compute the angle corresponding to its hash value and find which server owns the range containing that angle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operations on the Hash Ring
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Adding a node&lt;/strong&gt;: Suppose we have nodes A, B, and C, and we add a new node D to the ring by computing its hash. Only the keys whose values lie between C and D will be redistributed: they will no longer point to A but to D, avoiding a rearrangement of all nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Removing a node&lt;/strong&gt;: Suppose we remove node C from the ring. Only the keys whose values lie between B and C will be redistributed: they will no longer point to C but to A.&lt;/p&gt;




&lt;h1&gt;
  
  
  What happens if a machine leaves?
&lt;/h1&gt;

&lt;p&gt;With consistent hashing we're assuming that machines can leave and join over time. If a machine leaves, won't we lose data?&lt;br&gt;
To avoid this, we'll usually have machines act as backups for each other. One strategy is to have each machine replicate the data stored on the machine behind it on the ring, giving us a backup copy.&lt;br&gt;
Again, this is the sort of implementation detail you'll want to mention to your interviewer, even if you won't draw out the entire implementation.&lt;/p&gt;
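
&lt;p&gt;One way to sketch that backup strategy, assuming the nodes are listed in ring order (the node names here are hypothetical):&lt;/p&gt;

```python
# Each node's data is also replicated to its successor (the next node
# clockwise), so losing one machine doesn't lose the data it owned.
nodes = ["A", "B", "C", "D"]  # assumed already sorted by ring position

def replica_holder(node: str) -> str:
    i = nodes.index(node)
    return nodes[(i + 1) % len(nodes)]  # wraps around the ring

print(replica_holder("D"))  # "A" -- the ring wraps around
```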

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Consistent hashing is one of the most important algorithms for horizontally scaling and managing any distributed system. It does not only apply to sharded systems but also finds application in load balancing, data partitioning, server-based sticky sessions, routing algorithms, and many more. A lot of databases owe their scale, performance, and ability to handle humongous load to consistent hashing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Caching and Cache Invalidation</title>
      <dc:creator>nadermedhet148</dc:creator>
      <pubDate>Sat, 23 Jul 2022 13:09:13 +0000</pubDate>
      <link>https://dev.to/nadermedhet148/caching-and-cache-invalidation-17fp</link>
      <guid>https://dev.to/nadermedhet148/caching-and-cache-invalidation-17fp</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3yk5a2siubjrtohxcid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3yk5a2siubjrtohxcid.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is caching ?
&lt;/h2&gt;

&lt;p&gt;Caching is a process that stores copies of data or files in a temporary storage location (a cache) so they can be accessed faster. It serves data for software applications, servers, and web browsers, ensuring the system need not repeat the same work every time a website or application is accessed, which speeds up loading.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Caching is Important
&lt;/h2&gt;

&lt;p&gt;Caching is extremely important because it allows developers to achieve performance improvements, sometimes considerable ones. As mentioned earlier, this is vital.&lt;br&gt;
In particular, neither users nor developers want applications to take a long time to process requests. As developers, we would like to deploy the best-performing version of our applications. And as users, we are willing to wait only a few seconds, sometimes only milliseconds. The truth is that no one loves wasting time looking at loading messages.&lt;br&gt;
Plus, offering high performance is so critical that caching has rapidly become an unavoidable concept in computer technology. More and more services use it, making it practically omnipresent. As a consequence, if we want to compete with the multitude of applications on the market, we are required to properly implement caching systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Challenges
&lt;/h2&gt;

&lt;p&gt;Caching is by no means a simple practice, and there are inevitable challenges inherent in the subject. Let’s explore the most insidious ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  Coherence Problem
&lt;/h3&gt;

&lt;p&gt;Since whenever data is cached, a copy is created, there are now two copies of the same data. This means that they can diverge over time. In a few words, this is the coherence problem, which represents the most important and complex issue related to caching. There is not a particular solution that is preferred over another, and the best approach depends on the requirements. Identifying the best cache update or invalidation mechanism is one of the biggest challenges related to caching and perhaps one of the hardest challenges in computer science.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing Data to Be Cached
&lt;/h3&gt;

&lt;p&gt;Virtually any kind of data can be cached. This means that choosing what should reside in our cache and what to exclude is open to endless possibilities, so it can become a very complex decision. When tackling this problem, there are some aspects to take into account. First, if we expect data to change often, we should not cache it for too long; otherwise, we may offer users inaccurate data. How long is acceptable depends on how much staleness we can tolerate. Second, our cache should always be ready to store frequently requested data that takes a large amount of time to generate or retrieve.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dealing with Cache-misses
&lt;/h3&gt;

&lt;p&gt;Cache misses represent the time-based cost of having a cache. In fact, cache misses introduce latencies that would not have been incurred in a system not using caching. So, to benefit from the speed boost of having a cache, cache misses must be kept relatively low; in particular, they should be low compared to cache hits. Reaching this result is not easy, and if it is not achieved, our caching system can turn into nothing more than overhead.&lt;/p&gt;




&lt;h2&gt;
  
  
  Types of Caching
&lt;/h2&gt;

&lt;p&gt;Although caching is a general concept, a few types stand out from the rest. They represent key concepts for any developer interested in understanding the most common approaches to caching, and they cannot be omitted. Let’s see them all.&lt;/p&gt;

&lt;h3&gt;
  
  
  In-memory Caching
&lt;/h3&gt;

&lt;p&gt;In this approach, cached data is stored directly in RAM, which is assumed to be faster than the typical storage system where the original data is located. The most common implementation of this type of caching is based on key-value databases, which can be seen as sets of key-value pairs: the key is a unique identifier, while the value holds the cached data.&lt;/p&gt;
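
&lt;p&gt;A toy in-memory key-value cache with a per-entry time-to-live might look like this (the class and parameter names are made up for illustration; real systems would use a store like Redis or Memcached):&lt;/p&gt;

```python
import time

class InMemoryCache:
    """Tiny key-value cache with a per-entry time-to-live."""

    def __init__(self, ttl_seconds: float = 60.0):
        self._ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self._ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default  # cache miss
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # stale entry: evict and report a miss
            return default
        return value

cache = InMemoryCache(ttl_seconds=0.05)
cache.set("user:1", {"name": "Ada"})
print(cache.get("user:1"))  # hit: {'name': 'Ada'}
time.sleep(0.1)
print(cache.get("user:1"))  # None: the entry expired
```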

&lt;h3&gt;
  
  
  Database Caching
&lt;/h3&gt;

&lt;p&gt;Each database usually comes with some level of caching. Specifically, an internal cache is generally used to avoid querying the database excessively. By caching the results of the last queries executed, the database can immediately return data it has already produced. This way, for as long as the cached data is valid, the database can avoid re-executing queries. Although each database can implement this differently, the most popular approach is based on a hash table storing key-value pairs, where, just as before, the key is used to look up the value. Note that this type of cache is also generally provided by default by ORM (Object-Relational Mapping) technologies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Web Caching
&lt;/h3&gt;

&lt;p&gt;This can be divided into two further subcategories:&lt;/p&gt;

&lt;h4&gt;
  
  
  Web Client Caching
&lt;/h4&gt;

&lt;p&gt;This type of cache is familiar to most Internet users, and it is stored on clients. Since it is usually part of browsers, it is also called Web Browser Caching. It works in a very intuitive way: the first time a browser loads a web page, it stores the page resources, such as text, images, stylesheets, scripts, and media files. The next time the same page is hit, the browser can look in the cache for resources that were previously cached and retrieve them from the user’s machine. This is generally far faster than downloading them from the network.&lt;/p&gt;

&lt;h4&gt;
  
  
  Web Server Caching
&lt;/h4&gt;

&lt;p&gt;This is a mechanism aimed at storing resources server-side for reuse. Specifically, such an approach is helpful when dealing with dynamically generated content, which takes time to create. Conversely, it is not useful for static content. Web server caching prevents servers from becoming overloaded, reduces the work to be done, and improves page delivery speed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Advantages of cache
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;speed&lt;/li&gt;
&lt;li&gt;fewer resources used&lt;/li&gt;
&lt;li&gt;reuse of results&lt;/li&gt;
&lt;li&gt;being smart&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Disadvantages of cache
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;stale data&lt;/li&gt;
&lt;li&gt;overhead&lt;/li&gt;
&lt;li&gt;complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Stale Data
&lt;/h3&gt;

&lt;p&gt;Using cached content means you risk presenting old data that is no longer relevant. If you've cached a query of products, but in the meantime the product manager has deleted four products, users will get listings for products that don't exist. There's a great deal of complexity in figuring out how to deal with this, but mostly it's about creating hashes/identifiers for caches that reflect the state of the data, or business logic that resets (or updates, or appends to) the cache with the new data. This is a complicated field and depends very much on your requirements.&lt;/p&gt;

&lt;p&gt;The overhead is all the business logic you use to make sure your data is somewhere between being fast and being stale, which leads to complexity, and complexity leads to more code that you need to maintain and understand. You'll easily lose oversight of where data exists in the caching complex, at what level, and how to fix stale data when you encounter it.&lt;/p&gt;

&lt;p&gt;It can easily get out of hand, so instead of caching on complex logic you may revert to simple timestamps, say that a query is cached for a minute or so, and hope for the best (which, admittedly, can be quite effective and not too crazy). You could give your cache entries lifetimes (an entry lives X minutes in the cache) vs. access counts (it lives for 10 requests) vs. fixed times (it lives until 10pm), and variations thereof. The more variation, the more complexity, of course.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is cache invalidation ?
&lt;/h2&gt;

&lt;p&gt;By definition, a cache doesn’t hold the source of truth of your data (e.g., a database). Cache invalidation describes the process of actively invalidating stale cache entries when data in the source of truth mutates. If cache invalidation is mishandled, it can indefinitely leave inconsistent values in the cache that differ from what’s in the source of truth. Cache invalidation involves an action carried out by something other than the cache itself: something (e.g., a client or a pub/sub system) needs to tell the cache that a mutation happened. A cache that relies solely on time to live (TTL) to maintain its freshness performs no cache invalidation and, as such, lies outside the scope of this discussion. For the rest of this post, we’ll assume the presence of cache invalidation.&lt;/p&gt;




&lt;h1&gt;
  
  
  Caching Strategies
&lt;/h1&gt;

&lt;p&gt;There are two common caching strategies: Write-Through and Cache Aside. I’ll explain how they both work and how a key aspect of cache invalidation is defining boundaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Write-Through the cache
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yqtxejk9uenih5bpu2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yqtxejk9uenih5bpu2h.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Write-Through caching strategy is about writing to the cache immediately when you make a state change to your primary database.&lt;br&gt;
For example, a client makes an HTTP request to your App Service, which is an HTTP API. Our application calls our primary database and makes some type of state change. In a relational database, this could be an UPDATE/INSERT/DELETE statement; in a document database, this could be adding or updating an item in a collection.&lt;br&gt;
Immediately after, within the same process, we update our cache with the latest version reflecting the state change we just made. Again, this is all done within the same process as the initial HTTP request to our App Service.&lt;br&gt;
The benefit of this strategy is that you’re always keeping your cache up to date as soon as you make any changes to your primary database. The drawback is that, since your cache is constantly being updated, you may be caching data that is not read or accessed very often.&lt;br&gt;
A pitfall with this strategy (and others) is that you must run all state changes through your application or service, because it is the one that updates your cache. You cannot bypass your App/Service and write data directly to the database; otherwise, your cache will be out of sync.&lt;br&gt;
This means you cannot have another application or service make any state changes to your database without going through your API. Most developers are used to connecting to a database with a client tool and making manual data changes. Again, this cannot happen, as you would not be updating the cache.&lt;/p&gt;
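
&lt;p&gt;The strategy can be sketched in a few lines; here two plain dicts stand in for the primary database and the cache server, and the function names are made up for illustration:&lt;/p&gt;

```python
# Hypothetical write-through sketch: db and cache are plain dicts standing
# in for the primary database and the cache server.
db, cache = {}, {}

def update_product(product_id, data):
    # 1) Make the state change in the primary database ...
    db[product_id] = data
    # 2) ... then, in the same request, write the same value to the cache.
    cache[product_id] = data

def get_product(product_id):
    # Reads can be served straight from the cache, which is always fresh.
    return cache.get(product_id)

update_product("p1", {"price": 10})
print(get_product("p1"))  # {'price': 10} -- cache already up to date
```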

&lt;h2&gt;
  
  
  Cache Aside (Lazy Loading)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpn30ff9bxhtbged59qmn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpn30ff9bxhtbged59qmn.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Another strategy for caching is called Cache Aside, or Lazy Loading. The way this works is that you query the cache for the data you’re looking for. If there is a cache miss, meaning the data isn’t in the cache, you then query the primary database. After you get the data from the primary database, you write it to your cache, often with an expiry or time-to-live.&lt;br&gt;
To illustrate this, we have a request from our client to the application/service. Our App/Service makes a call to our cache.&lt;br&gt;
If the data is in the cache, we use it. If not, we query our primary database to get the data we need.&lt;br&gt;
Now that we have the data, we write it to our cache so the subsequent request will get the cached value and not have to hit the primary database.&lt;br&gt;
As mentioned, we might set an expiry on the cache so we only cache the data for a period of time. Once it expires and is automatically purged from the cache, the next client that requests that data will go through this cycle again.&lt;br&gt;
Now, the issue with cache invalidation here is that we must update or remove the item from the cache when any write or state change happens to our primary database. To do this, we can leverage an event-driven architecture: publish an event when a state change occurs, then subscribe to (consume) that event to do the invalidation asynchronously.&lt;br&gt;
To illustrate this, when the client makes a call to our application or service and we make a state change to our database, within the same process of the request we also publish an event to a message broker.&lt;br&gt;
Once the request has completed for the client, asynchronously, in another thread or another process entirely, we consume that message from the broker.&lt;br&gt;
When we consume that message, we can call our cache to remove the data. Or we could call the primary database and update the cache.&lt;br&gt;
Since we’re using the cache-aside method, you could simply remove the item from the cache, and the next call that misses will get it from the database and write it to the cache.&lt;br&gt;
Just as with write-through, you cannot bypass your application/service, since it is the one publishing the message that causes the invalidation. Depending on your requirements, if you have defined a short expiry for the cache item, bypassing may be acceptable, but this is entirely dependent on your context.&lt;/p&gt;
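
&lt;p&gt;A compact sketch of cache-aside with event-driven invalidation; here a plain list of callbacks stands in for a real message broker, and all the names are made up for illustration:&lt;/p&gt;

```python
# Hypothetical cache-aside sketch: dicts stand in for the database and the
# cache, and a list of callbacks stands in for a message broker.
db, cache, subscribers = {}, {}, []

def get_product(product_id):
    if product_id in cache:        # cache hit
        return cache[product_id]
    value = db.get(product_id)     # cache miss: fall back to the database
    if value is not None:
        cache[product_id] = value  # populate the cache for the next request
    return value

def update_product(product_id, data):
    db[product_id] = data
    for handle in subscribers:     # publish the state-change event
        handle(product_id)

# The consumer simply evicts; the next read repopulates the cache lazily.
subscribers.append(lambda pid: cache.pop(pid, None))

db["p1"] = {"price": 10}
get_product("p1")                    # warms the cache
update_product("p1", {"price": 12})  # event published, consumer invalidates
print(get_product("p1"))             # {'price': 12} -- fresh from the db
```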

&lt;h1&gt;
  
  
  Related Links
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.cloudflare.com/learning/cdn/what-is-caching/" rel="noopener noreferrer"&gt;caching at cdn&lt;/a&gt;&lt;br&gt;
&lt;a href="https://bluzelle.com/blog/things-you-should-know-about-database-caching" rel="noopener noreferrer"&gt;database caching&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.geeksforgeeks.org/caching-system-design-concept-for-beginners/" rel="noopener noreferrer"&gt;Caching – System Design Concept For Beginners&lt;/a&gt;&lt;/p&gt;

</description>
      <category>systems</category>
      <category>backend</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
