<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karthik Jasthi</title>
    <description>The latest articles on DEV Community by Karthik Jasthi (@karthikjasthi).</description>
    <link>https://dev.to/karthikjasthi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F328153%2F59fe08b6-4a3a-4bbb-affe-5eaab26f5977.jpg</url>
      <title>DEV Community: Karthik Jasthi</title>
      <link>https://dev.to/karthikjasthi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karthikjasthi"/>
    <language>en</language>
    <item>
      <title>Why In-Memory Storage Is Faster than Disk Storage</title>
      <dc:creator>Karthik Jasthi</dc:creator>
      <pubDate>Sun, 08 May 2022 18:48:19 +0000</pubDate>
      <link>https://dev.to/karthikjasthi/why-in-memory-storage-faster-than-disk-storage-4m78</link>
      <guid>https://dev.to/karthikjasthi/why-in-memory-storage-faster-than-disk-storage-4m78</guid>
      <description>&lt;p&gt;&lt;strong&gt;The easy version:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An HDD sits relatively far from the CPU, connected to the system logic board via a comparatively slow SATA link; SATA III at 6 Gbit/s is the current standard. An HDD can read or write, but not both at the same time.&lt;/p&gt;

&lt;p&gt;RAM sits very close to the CPU and has a very high-bandwidth connection. DDR4 offers a throughput of roughly 40 GB/s. In a dual-channel configuration, RAM can service two operations at once (read/read, read/write or write/write), and DDR (double data rate) memory transfers data on both the rising and falling edges of the clock, moving more data per cycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The technical version:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;HDDs have spinning platters, and a magnetic read/write head needs to reposition every time you want to read or write something; it needs to find the correct place on each platter. HDD platters spin at 5400 rpm, 7200 rpm or 10000 rpm. A read/write head sits on each side of a platter, so a typical HDD with three platters has six heads. But all of those heads are fixed to a single actuator, so they move together. Depending on the filesystem layout, the heads rarely get to read from nearby positions and must constantly seek back and forth.&lt;/p&gt;

&lt;p&gt;A system's RAM typically consists of 4 or 8 memory modules. Modules are connected in dual-channel pairs and can serve read/read, read/write or write/write operations at the same time. At best, a system with eight modules can perform 16 reads, or 16 writes, simultaneously. Each memory module has its own addressing logic that knows exactly where the data is, so read latency is dominated by something called CAS latency. For example, a DDR4 module clocked at 1800 MHz (1.8 billion clock cycles per second) with a CAS latency of 34 needs just 34 clock cycles before the requested data is ready.&lt;/p&gt;
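&lt;p&gt;To put those figures in perspective, here is a small back-of-the-envelope calculation; the clock speed and CAS latency are the illustrative values from the paragraph above, not a benchmark of any particular module:&lt;/p&gt;

```python
# Illustrative latency arithmetic for the DDR4 and HDD examples above.
# All figures are the article's example values, not measurements.

clock_hz = 1.8e9   # 1800 MHz, i.e. 1.8 billion clock cycles per second
cas_cycles = 34    # CAS latency: cycles between a read command and data

cas_latency_s = cas_cycles / clock_hz
print(f"CAS latency: {cas_latency_s * 1e9:.1f} ns")  # roughly 18.9 ns

# Compare with a 7200 rpm HDD: on average the platter must rotate half
# a turn before the requested sector passes under the head.
rotations_per_s = 7200 / 60
avg_rotational_delay_s = 0.5 / rotations_per_s
print(f"Average rotational delay: {avg_rotational_delay_s * 1e3:.2f} ms")  # about 4.17 ms

print(f"Ratio: {avg_rotational_delay_s / cas_latency_s:,.0f}x")
```

&lt;p&gt;Even ignoring seek time entirely, the rotational delay alone is on the order of milliseconds while CAS latency is tens of nanoseconds, a gap of roughly five orders of magnitude.&lt;/p&gt;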

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wmAcOhSt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cuk87kp3359yq8a9c1in.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wmAcOhSt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cuk87kp3359yq8a9c1in.png" alt="Image description" width="648" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cache</category>
      <category>ram</category>
      <category>redis</category>
    </item>
    <item>
      <title>What is Consistent Hashing?</title>
      <dc:creator>Karthik Jasthi</dc:creator>
      <pubDate>Sun, 23 Feb 2020 17:29:11 +0000</pubDate>
      <link>https://dev.to/karthikjasthi/what-is-consistent-hashing-1i2m</link>
      <guid>https://dev.to/karthikjasthi/what-is-consistent-hashing-1i2m</guid>
      <description>&lt;p&gt;In recent years, with the advent of concepts such as cloud computing and big data, distributed systems have gained popularity and relevance.&lt;/p&gt;

&lt;p&gt;One such type of system, distributed caches that power many high-traffic dynamic websites and web applications, typically consist of a particular case of distributed hashing. These take advantage of an algorithm known as consistent hashing.&lt;/p&gt;

&lt;p&gt;What is consistent hashing? What’s the motivation behind it, and why should you care?&lt;/p&gt;

&lt;p&gt;In this article, I’ll first review the general concept of hashing and its purpose, followed by a description of distributed hashing and the problems it entails. In turn, that will lead us to our title subject.&lt;/p&gt;

&lt;p&gt;What Is Hashing?&lt;br&gt;
What is “hashing” all about? Merriam-Webster defines the noun hash as “chopped meat mixed with potatoes and browned,” and the verb as “to chop (as meat and potatoes) into small pieces.” So, culinary details aside, hash roughly means “chop and mix”—and that’s precisely where the technical term comes from.&lt;/p&gt;

&lt;p&gt;A hash function is a function that maps one piece of data—typically describing some kind of object, often of arbitrary size—to another piece of data, typically an integer, known as hash code, or simply hash.&lt;/p&gt;

&lt;p&gt;For instance, some hash function designed to hash strings, with an output range of 0 .. 100, may map the string Hello to, say, the number 57, Hasta la vista, baby to the number 33, and any other possible string to some number within that range. Since there are way more possible inputs than outputs, any given number will have many different strings mapped to it, a phenomenon known as collision. Good hash functions should somehow “chop and mix” (hence the term) the input data in such a way that the outputs for different input values are spread as evenly as possible over the output range.&lt;/p&gt;
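&lt;p&gt;A toy sketch of such a string hash (the function below is hypothetical and will not reproduce the example values 57 and 33 from the text):&lt;/p&gt;

```python
# A toy (non-cryptographic) string hash with an output range of 0..100,
# purely to illustrate mapping and collisions.

def toy_hash(s, buckets=101):
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) % buckets  # "chop and mix" each character
    return h

print(toy_hash("Hello"))
print(toy_hash("Hasta la vista, baby"))

# Pigeonhole principle: with far more possible inputs than outputs,
# some distinct strings must share a hash code (a collision).
words = ["apple", "grape", "melon", "peach", "mango", "lemon"]
codes = [toy_hash(w) for w in words]
print(len(codes), "inputs,", len(set(codes)), "distinct hashes")
```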

&lt;p&gt;Hash functions have many uses and for each one, different properties may be desired. There is a type of hash function known as cryptographic hash functions, which must meet a restrictive set of properties and are used for security purposes—including applications such as password protection, integrity checking and fingerprinting of messages, and data corruption detection, among others, but those are outside the scope of this article.&lt;/p&gt;

&lt;p&gt;Non-cryptographic hash functions have several uses as well, the most common being their use in hash tables, which is the one that concerns us and which we’ll explore in more detail.&lt;/p&gt;

&lt;p&gt;Introducing Hash Tables (Hash Maps)&lt;br&gt;
Imagine we needed to keep a list of all the members of some club while being able to search for any specific member. We could handle it by keeping the list in an array (or linked list) and, to perform a search, iterate the elements until we find the desired one (we might be searching based on their name, for instance). In the worst case, that would mean checking all members (if the one we’re searching for is last, or not present at all), or half of them on average. In complexity theory terms, the search would then have complexity O(n), and it would be reasonably fast for a small list, but it would get slower and slower in direct proportion to the number of members.&lt;/p&gt;

&lt;p&gt;How could that be improved? Let’s suppose all these club members had a member ID, which happened to be a sequential number reflecting the order in which they joined the club.&lt;/p&gt;

&lt;p&gt;Assuming that searching by ID were acceptable, we could place all members in an array, with their indexes matching their IDs (for example, a member with ID=10 would be at the index 10 in the array). This would allow us to access each member directly, with no search at all. That would be very efficient, in fact, as efficient as it can possibly be, corresponding to the lowest complexity possible, O(1), also known as constant time.&lt;/p&gt;

&lt;p&gt;But, admittedly, our club member ID scenario is somewhat contrived. What if IDs were big, non-sequential or random numbers? Or, if searching by ID were not acceptable, and we needed to search by name (or some other field) instead? It would certainly be useful to keep our fast direct access (or something close) while at the same time being able to handle arbitrary datasets and less restrictive search criteria.&lt;/p&gt;

&lt;p&gt;Here’s where hash functions come to the rescue. A suitable hash function can be used to map an arbitrary piece of data to an integer, which will play a similar role to that of our club member ID, albeit with a few important differences.&lt;/p&gt;

&lt;p&gt;First, a good hash function generally has a wide output range (typically, the whole range of a 32 or 64-bit integer), so building an array for all possible indices would be either impractical or plain impossible, and a colossal waste of memory. To overcome that, we can have a reasonably sized array (say, just twice the number of elements we expect to store) and perform a modulo operation on the hash to get the array index. So, the index would be index = hash(object) mod N, where N is the size of the array.&lt;/p&gt;

&lt;p&gt;Second, object hashes will not be unique (unless we’re working with a fixed dataset and a custom-built perfect hash function, but we won’t discuss that here). There will be collisions (further increased by the modulo operation), and therefore a simple direct index access won’t work. There are several ways to handle this, but a typical one is to attach a list, commonly known as a bucket, to each array index to hold all the objects sharing a given index.&lt;/p&gt;

&lt;p&gt;So, we have an array of size N, with each entry pointing to an object bucket. To add a new object, we need to calculate its hash modulo N, and check the bucket at the resulting index, adding the object if it’s not already there. To search for an object, we do the same, just looking into the bucket to check if the object is there. Such a structure is called a hash table, and although the searches within buckets are linear, a properly sized hash table should have a reasonably small number of objects per bucket, resulting in almost constant time access (on average, O(n/k), where n is the number of stored objects and k is the number of buckets).&lt;/p&gt;
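&lt;p&gt;A minimal sketch of such a bucketed hash table (illustrative, not production code):&lt;/p&gt;

```python
# Minimal bucketed hash table, as described above: an array of N
# buckets, index = hash(key) mod N, linear search within a bucket.

class HashTable:
    def __init__(self, size=8):
        self.size = size
        self.buckets = [[] for _ in range(size)]  # one list per index

    def _index(self, key):
        return hash(key) % self.size

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:               # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))    # new key: append to its bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

members = HashTable()
members.put("jonny@example.com", "Jonny Appleseed")
print(members.get("jonny@example.com"))  # Jonny Appleseed
```

&lt;p&gt;Note that the key here is the email, as in the club-member example that follows: the hash is applied to the key only, not to the whole record.&lt;/p&gt;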

&lt;p&gt;With complex objects, the hash function is typically not applied to the whole object, but to a key instead. In our club member example, each object might contain several fields (like name, age, address, email, phone), but we could pick, say, the email to act as the key so that the hash function would be applied to the email only. In fact, the key need not be part of the object; it is common to store key/value pairs, where the key is usually a relatively short string, and the value can be an arbitrary piece of data. In such cases, the hash table or hash map is used as a dictionary, and that’s the way some high-level languages implement objects or associative arrays.&lt;/p&gt;

&lt;p&gt;Scaling Out: Distributed Hashing&lt;br&gt;
Now that we have discussed hashing, we’re ready to look into distributed hashing.&lt;/p&gt;

&lt;p&gt;In some situations, it may be necessary or desirable to split a hash table into several parts, hosted by different servers. One of the main motivations for this is to bypass the memory limitations of using a single computer, allowing for the construction of arbitrarily large hash tables (given enough servers).&lt;/p&gt;

&lt;p&gt;In such a scenario, the objects (and their keys) are distributed among several servers, hence the name.&lt;/p&gt;

&lt;p&gt;A typical use case for this is the implementation of in-memory caches, such as Memcached.&lt;/p&gt;

&lt;p&gt;Such setups consist of a pool of caching servers that host many key/value pairs and are used to provide fast access to data originally stored (or computed) elsewhere. For example, to reduce the load on a database server and at the same time improve performance, an application can be designed to first fetch data from the cache servers, and only if it’s not present there—a situation known as cache miss—resort to the database, running the relevant query and caching the results with an appropriate key, so that it can be found next time it’s needed.&lt;/p&gt;
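&lt;p&gt;That fetch-or-fall-back logic is often called the cache-aside pattern; here is a sketch with a plain dict standing in for the cache pool and a stub function standing in for the database query:&lt;/p&gt;

```python
# Cache-aside (read-through) pattern from the paragraph above, with a
# dict standing in for the cache servers and a stub for the database.

cache = {}

def query_database(key):
    # stand-in for the expensive origin lookup
    return f"row-for-{key}"

def fetch(key):
    try:
        return cache[key]            # cache hit: fast path
    except KeyError:
        value = query_database(key)  # cache miss: go to the source
        cache[key] = value           # store it under the same key
        return value

print(fetch("user:42"))  # miss: goes to the database
print(fetch("user:42"))  # hit: served from the cache
```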

&lt;p&gt;Now, how does distribution take place? What criteria are used to determine which keys to host in which servers?&lt;/p&gt;

&lt;p&gt;The simplest way is to take the hash modulo of the number of servers. That is, server = hash(key) mod N, where N is the size of the pool. To store or retrieve a key, the client first computes the hash, applies a modulo N operation, and uses the resulting index to contact the appropriate server (probably by using a lookup table of IP addresses). Note that the hash function used for key distribution must be the same one across all clients, but it need not be the same one used internally by the caching servers.&lt;/p&gt;
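&lt;p&gt;A sketch of that client-side routing rule (the server addresses are hypothetical, and MD5 stands in for whatever stable hash function all clients agree on):&lt;/p&gt;

```python
# Client-side routing: server = hash(key) mod N.

import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # pool of N cache servers

def server_for(key):
    # Use a hash that is stable across processes, so every client agrees
    # on the placement. (Python's built-in hash() is salted per process,
    # which would break that agreement.)
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SERVERS[h % len(SERVERS)]

for key in ["john", "bill", "jane", "steve", "kate"]:
    print(key, "is stored on", server_for(key))
```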

&lt;p&gt;Let’s see an example. Say we have three servers, A, B and C, and we have some string keys with their hashes:&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         HASH mod 3
"john"   1633428562   2
"bill"   7594634739   0
"jane"   5000799124   1
"steve"  9787173343   0
"kate"   3421657995   2
&lt;/pre&gt;

&lt;p&gt;A client wants to retrieve the value for key john. Its hash modulo 3 is 2, so it must contact server C. The key is not found there, so the client fetches the data from the source and adds it. The pool looks like this:&lt;/p&gt;

&lt;pre&gt;
A        B        C
                  "john"
&lt;/pre&gt;

&lt;p&gt;Next, another client (or the same one) wants to retrieve the value for key bill. Its hash modulo 3 is 0, so it must contact server A. The key is not found there, so the client fetches the data from the source and adds it. The pool now looks like this:&lt;/p&gt;

&lt;pre&gt;
A        B        C
"bill"            "john"
&lt;/pre&gt;

&lt;p&gt;After the remaining keys are added, the pool looks like this:&lt;/p&gt;

&lt;pre&gt;
A        B        C
"bill"   "jane"   "john"
"steve"           "kate"
&lt;/pre&gt;

&lt;p&gt;The Rehashing Problem&lt;br&gt;
This distribution scheme is simple, intuitive, and works fine. That is, until the number of servers changes. What happens if one of the servers crashes or becomes unavailable? Keys need to be redistributed to account for the missing server, of course. The same applies if one or more new servers are added to the pool; keys need to be redistributed to include the new servers. This is true for any distribution scheme, but the problem with our simple modulo distribution is that when the number of servers changes, most hashes modulo N will change, so most keys will need to be moved to a different server. So, even if a single server is removed or added, all keys will likely need to be rehashed into a different server.&lt;/p&gt;
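&lt;p&gt;The scale of the problem is easy to check empirically; this sketch counts how many of 10,000 keys change location when the pool shrinks from three servers to two (MD5 again stands in for the shared hash function):&lt;/p&gt;

```python
# Empirical check of the rehashing problem: shrink the pool from
# N=3 to N=2 servers and count how many keys change location under
# the simple modulo scheme.

import hashlib

def placement(key, n_servers):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n_servers

keys = [f"key-{i}" for i in range(10_000)]
moved = sum(1 for k in keys if placement(k, 3) != placement(k, 2))
print(f"{moved / len(keys):.0%} of keys moved")
```

&lt;p&gt;About two-thirds of the keys move: h mod 3 and h mod 2 agree only when h mod 6 is 0 or 1, i.e. for 2 of every 6 residues.&lt;/p&gt;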

&lt;p&gt;From our previous example, if we removed server C, we’d have to rehash all the keys using hash modulo 2 instead of hash modulo 3, and the new locations for the keys would become:&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         HASH mod 2
"john"   1633428562   0
"bill"   7594634739   1
"jane"   5000799124   0
"steve"  9787173343   1
"kate"   3421657995   1
&lt;/pre&gt;

&lt;pre&gt;
A        B
"john"   "bill"
"jane"   "steve"
         "kate"
&lt;/pre&gt;

&lt;p&gt;Note that all key locations changed, not only the ones from server C.&lt;/p&gt;

&lt;p&gt;In the typical use case we mentioned before (caching), this would mean that, all of a sudden, the keys won’t be found because they won’t yet be present at their new location.&lt;/p&gt;

&lt;p&gt;So, most queries will result in misses, and the original data will likely need retrieving again from the source to be rehashed, thus placing a heavy load on the origin server(s) (typically a database). This may very well degrade performance severely and possibly crash the origin servers.&lt;/p&gt;

&lt;p&gt;The Solution: Consistent Hashing&lt;br&gt;
So, how can this problem be solved? We need a distribution scheme that does not depend directly on the number of servers, so that, when adding or removing servers, the number of keys that need to be relocated is minimized. One such scheme—a clever, yet surprisingly simple one—is called consistent hashing, and was first described by Karger et al. at MIT in an academic paper from 1997 (according to Wikipedia).&lt;/p&gt;

&lt;p&gt;Consistent Hashing is a distributed hashing scheme that operates independently of the number of servers or objects in a distributed hash table by assigning them a position on an abstract circle, or hash ring. This allows servers and objects to scale without affecting the overall system.&lt;/p&gt;

&lt;p&gt;Imagine we mapped the hash output range onto the edge of a circle. That means that the minimum possible hash value, zero, would correspond to an angle of zero, the maximum possible value (some big integer we’ll call INT_MAX) would correspond to an angle of 2𝝅 radians (or 360 degrees), and all other hash values would linearly fit somewhere in between. So, we could take a key, compute its hash, and find out where it lies on the circle’s edge. Assuming an INT_MAX of 10&lt;sup&gt;10&lt;/sup&gt; (for example’s sake), the keys from our previous example would look like this:&lt;/p&gt;

&lt;p&gt;Consistent Hashing Example: Keys&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)
"john"   1633428562   58.8
"bill"   7594634739   273.4
"jane"   5000799124   180
"steve"  9787173343   352.3
"kate"   3421657995   123.2
&lt;/pre&gt;

&lt;p&gt;Now imagine we also placed the servers on the edge of the circle, by pseudo-randomly assigning them angles too. This should be done in a repeatable way (or at least in such a way that all clients agree on the servers’ angles). A convenient way of doing this is by hashing the server name (or IP address, or some ID), as we’d do with any other key, to come up with its angle.&lt;/p&gt;

&lt;p&gt;In our example, things might look like this:&lt;/p&gt;

&lt;p&gt;Consistent Hashing Example: Servers&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)
"john"   1633428562   58.8
"bill"   7594634739   273.4
"jane"   5000799124   180
"steve"  9787173343   352.3
"kate"   3421657995   123.2
"A"      5572014558   200.6
"B"      8077113362   290.8
"C"      2269549488   81.7
&lt;/pre&gt;

&lt;p&gt;Since we have the keys for both the objects and the servers on the same circle, we may define a simple rule to associate the former with the latter: each object key will belong to the server whose key is closest, in a counterclockwise direction (or clockwise, depending on the conventions used). In other words, to find out which server to ask for a given key, we need to locate the key on the circle and move in the ascending angle direction until we find a server.&lt;/p&gt;

&lt;p&gt;In our example:&lt;/p&gt;

&lt;p&gt;Consistent Hashing Example: Objects&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)
"john"   1633428562   58.7
"C"      2269549488   81.7
"kate"   3421657995   123.1
"jane"   5000799124   180
"A"      5572014557   200.5
"bill"   7594634739   273.4
"B"      8077113361   290.7
"steve"  9787173343   352.3
&lt;/pre&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)   LABEL   SERVER
"john"   1632929716   58.7          "C"     C
"kate"   3421831276   123.1         "A"     A
"jane"   5000648311   180           "A"     A
"bill"   7594873884   273.4         "B"     B
"steve"  9786437450   352.3         "C"     C
&lt;/pre&gt;

&lt;p&gt;From a programming perspective, what we would do is keep a sorted list of server values (which could be angles or numbers in any real interval), and walk this list (or use a binary search) to find the first server with a value greater than, or equal to, that of the desired key. If no such value is found, we need to wrap around, taking the first one from the list.&lt;/p&gt;
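&lt;p&gt;That lookup can be sketched with a binary search (MD5 stands in for a stable hash; the server names match the example, though the resulting assignments depend on the hash chosen):&lt;/p&gt;

```python
# Consistent-hash lookup as described above: keep server hash values
# sorted, binary-search for the first server value at or after the
# key's hash, and wrap around to the start if none is found.

import bisect
import hashlib

def stable_hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

servers = ["A", "B", "C"]
ring = sorted((stable_hash(name), name) for name in servers)
points = [p for p, _ in ring]  # sorted hash values only, for bisect

def server_for(key):
    idx = bisect.bisect_left(points, stable_hash(key))
    if idx == len(ring):   # past the last server: wrap to the first
        idx = 0
    return ring[idx][1]

for key in ["john", "bill", "jane", "steve", "kate"]:
    print(key, "is served by", server_for(key))
```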

&lt;p&gt;To ensure object keys are evenly distributed among servers, we need to apply a simple trick: assign not one, but many labels (angles) to each server. So instead of having labels A, B and C, we could have, say, A0 .. A9, B0 .. B9 and C0 .. C9, all interspersed along the circle. The factor by which the number of labels (server keys) is increased, known as weight, depends on the situation, and may even differ per server to adjust the probability of keys ending up on each. For example, if server B were twice as powerful as the rest, it could be assigned twice as many labels, and as a result, it would end up holding twice as many objects (on average).&lt;/p&gt;
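&lt;p&gt;A sketch of such a weighted ring, with ten hypothetical labels per server as in the example below (the measured key counts depend on the stand-in hash):&lt;/p&gt;

```python
# Weighted ring: each server gets `weight` labels ("A0".."A9"),
# hashed onto the ring like any key, so load spreads out and a
# stronger server can simply receive more labels.

import bisect
import hashlib

def stable_hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(weights):
    # weights maps server name to its number of labels, e.g. {"A": 10}
    ring = []
    for server, weight in weights.items():
        for i in range(weight):
            ring.append((stable_hash(f"{server}{i}"), server))
    ring.sort()
    return ring

def server_for(ring, key):
    points = [p for p, _ in ring]
    idx = bisect.bisect_left(points, stable_hash(key))
    return ring[idx % len(ring)][1]   # the modulo handles wrap-around

ring = build_ring({"A": 10, "B": 10, "C": 10})
counts = {}
for i in range(10_000):
    s = server_for(ring, f"key-{i}")
    counts[s] = counts.get(s, 0) + 1
print(counts)  # per-server key counts
```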

&lt;p&gt;For our example, we’ll assume all three servers have an equal weight of 10 (this works well for three servers; for 10 to 50 servers, a weight in the range of 100 to 500 would work better, and bigger pools may need even higher weights):&lt;/p&gt;

&lt;p&gt;Consistent Hashing Example 5&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)
"C6"     408965526    14.7
"A1"     473914830    17
"A2"     548798874    19.7
"A3"     1466730567   52.8
"C4"     1493080938   53.7
"john"   1633428562   58.7
"B2"     1808009038   65
"C0"     1982701318   71.3
"B3"     2058758486   74.1
"A7"     2162578920   77.8
"B4"     2660265921   95.7
"C9"     3359725419   120.9
"kate"   3421657995   123.1
"A5"     3434972143   123.6
"C1"     3672205973   132.1
"C8"     3750588567   135
"B0"     4049028775   145.7
"B8"     4755525684   171.1
"A9"     4769549830   171.7
"jane"   5000799124   180
"C7"     5014097839   180.5
"B1"     5444659173   196
"A6"     6210502707   223.5
"A0"     6511384141   234.4
"B9"     7292819872   262.5
"C3"     7330467663   263.8
"C5"     7502566333   270
"bill"   7594634739   273.4
"A4"     8047401090   289.7
"C2"     8605012288   309.7
"A8"     8997397092   323.9
"B7"     9038880553   325.3
"B5"     9368225254   337.2
"B6"     9379713761   337.6
"steve"  9787173343   352.3
&lt;/pre&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)   LABEL   SERVER
"john"   1632929716   58.7          "B2"    B
"kate"   3421831276   123.1         "A5"    A
"jane"   5000648311   180           "C7"    C
"bill"   7594873884   273.4         "A4"    A
"steve"  9786437450   352.3         "C6"    C
&lt;/pre&gt;

&lt;p&gt;So, what’s the benefit of this circle approach? Imagine server C is removed. To account for this, we must remove labels C0 .. C9 from the circle. The object keys that were adjacent to the deleted labels now fall under the next Ax or Bx label along the circle, reassigning them to servers A and B.&lt;/p&gt;

&lt;p&gt;But what happens with the other object keys, the ones that originally belonged in A and B? Nothing! That’s the beauty of it: The absence of Cx labels does not affect those keys in any way. So, removing a server results in its object keys being randomly reassigned to the rest of the servers, leaving all other keys untouched:&lt;/p&gt;

&lt;p&gt;Consistent Hashing Example 6&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)
"A1"     473914830    17
"A2"     548798874    19.7
"A3"     1466730567   52.8
"john"   1633428562   58.7
"B2"     1808009038   65
"B3"     2058758486   74.1
"A7"     2162578920   77.8
"B4"     2660265921   95.7
"kate"   3421657995   123.1
"A5"     3434972143   123.6
"B0"     4049028775   145.7
"B8"     4755525684   171.1
"A9"     4769549830   171.7
"jane"   5000799124   180
"B1"     5444659173   196
"A6"     6210502707   223.5
"A0"     6511384141   234.4
"B9"     7292819872   262.5
"bill"   7594634739   273.4
"A4"     8047401090   289.7
"A8"     8997397092   323.9
"B7"     9038880553   325.3
"B5"     9368225254   337.2
"B6"     9379713761   337.6
"steve"  9787173343   352.3
&lt;/pre&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)   LABEL   SERVER
"john"   1632929716   58.7          "B2"    B
"kate"   3421831276   123.1         "A5"    A
"jane"   5000648311   180           "B1"    B
"bill"   7594873884   273.4         "A4"    A
"steve"  9786437450   352.3         "A1"    A
&lt;/pre&gt;

&lt;p&gt;Something similar happens if, instead of removing a server, we add one. If we wanted to add server D to our example (say, as a replacement for C), we would need to add labels D0 .. D9. The result would be that roughly one-third of the existing keys (all belonging to A or B) would be reassigned to D, and, again, the rest would stay the same:&lt;/p&gt;

&lt;p&gt;Consistent Hashing Example 7&lt;/p&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)
"D2"     439890723    15.8
"A1"     473914830    17
"A2"     548798874    19.7
"D8"     796709216    28.6
"D1"     1008580939   36.3
"A3"     1466730567   52.8
"D5"     1587548309   57.1
"john"   1633428562   58.7
"B2"     1808009038   65
"B3"     2058758486   74.1
"A7"     2162578920   77.8
"B4"     2660265921   95.7
"D4"     2909395217   104.7
"kate"   3421657995   123.1
"A5"     3434972143   123.6
"D7"     3567129743   128.4
"B0"     4049028775   145.7
"B8"     4755525684   171.1
"A9"     4769549830   171.7
"jane"   5000799124   180
"B1"     5444659173   196
"D6"     5703092354   205.3
"A6"     6210502707   223.5
"A0"     6511384141   234.4
"B9"     7292819872   262.5
"bill"   7594634739   273.4
"A4"     8047401090   289.7
"D0"     8272587142   297.8
"A8"     8997397092   323.9
"B7"     9038880553   325.3
"D3"     9048608874   325.7
"D9"     9314459653   335.3
"B5"     9368225254   337.2
"B6"     9379713761   337.6
"steve"  9787173343   352.3
&lt;/pre&gt;

&lt;pre&gt;
KEY      HASH         ANGLE (DEG)   LABEL   SERVER
"john"   1632929716   58.7          "B2"    B
"kate"   3421831276   123.1         "A5"    A
"jane"   5000648311   180           "B1"    B
"bill"   7594873884   273.4         "A4"    A
"steve"  9786437450   352.3         "D2"    D
&lt;/pre&gt;

&lt;p&gt;This is how consistent hashing solves the rehashing problem.&lt;/p&gt;

&lt;p&gt;In general, only k/N keys need to be remapped, where k is the number of keys and N is the number of servers (more specifically, the maximum of the initial and final number of servers).&lt;/p&gt;

&lt;p&gt;What Next?&lt;br&gt;
We observed that when using distributed caching to optimize performance, it may happen that the number of caching servers changes (reasons for this may be a server crashing, or the need to add or remove a server to increase or decrease overall capacity). By using consistent hashing to distribute keys between the servers, we can rest assured that should that happen, the number of keys being rehashed—and therefore, the impact on origin servers—will be minimized, preventing potential downtime or performance issues.&lt;/p&gt;

&lt;p&gt;There are clients for several systems, such as Memcached and Redis, that include support for consistent hashing out of the box.&lt;/p&gt;

&lt;p&gt;Alternatively, you can implement the algorithm yourself, in your language of choice, and that should be relatively easy once the concept is understood.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Integrating Salesforce with external Web Applications</title>
      <dc:creator>Karthik Jasthi</dc:creator>
      <pubDate>Thu, 20 Feb 2020 03:25:52 +0000</pubDate>
      <link>https://dev.to/karthikjasthi/integrating-salesforce-with-external-web-applications-4k8p</link>
      <guid>https://dev.to/karthikjasthi/integrating-salesforce-with-external-web-applications-4k8p</guid>
      <description>&lt;p&gt;Need to configure a connected app on the Salesforce is required for external API calls to work. Please refer to this link - &lt;a href="https://trailhead.salesforce.com/content/learn/modules/connected-app-basics"&gt;https://trailhead.salesforce.com/content/learn/modules/connected-app-basics&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example API calls:&lt;/p&gt;

&lt;p&gt;To make the initial authorization request for a user to grant your app access to their data (this is where your user is initially directed to a Salesforce.com authorization endpoint and logs in) you’d make the following request. The client_id in the below call will be your consumer ID from the connected app. The redirect_uri will be the Callback URL.&lt;/p&gt;

&lt;pre&gt;
curl "https://login.salesforce.com/services/oauth2/authorize?response_type=code&amp;amp;client_id=YOURCONSUMERID&amp;amp;redirect_uri=https://www.yourappname.com/api/callback"
&lt;/pre&gt;

&lt;p&gt;A successful response from this will redirect the page to a Salesforce login page where the user is able to log in and authenticate. After Salesforce confirms that the user has authorized your app to access their data, the end-user’s browser is redirected to the callback URL you specified in the redirect_uri parameter. Salesforce appends an authorization code to the redirect URL; the request will look similar to the below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.yourappname.com/api/callback?code=aWekysIEeqM9PiThEfm0Cnr6MoLIfwWyRJcqOqHdF8f9INokharAS09ia7UNP6RiVScerfhc4w%3D%3D"&gt;https://www.yourappname.com/api/callback?code=aWekysIEeqM9PiThEfm0Cnr6MoLIfwWyRJcqOqHdF8f9INokharAS09ia7UNP6RiVScerfhc4w%3D%3D&lt;/a&gt;&lt;br&gt;
You’ll use this as the value for your code parameter when you make a request to Salesforce’s token endpoint to receive your Access and Refresh Token.&lt;/p&gt;

&lt;p&gt;Example request:&lt;/p&gt;

&lt;pre&gt;
curl -X POST "https://login.salesforce.com/services/oauth2/token?grant_type=authorization_code&amp;amp;redirect_uri=https://www.yourappname.com/api/callback&amp;amp;client_id=YOUR_CONSUMER_ID&amp;amp;client_secret=YOUR_CONSUMER_SECRET&amp;amp;code=aWekysIEeqM9PiThEfm0Cnr6MoLIfwWyRJcqOqHdF8f9INokharAS09ia7UNP6RiVScerfhc4w%3D%3D"
&lt;/pre&gt;

&lt;p&gt;Example Response:&lt;/p&gt;

&lt;pre&gt;
{
  "access_token": "YOUR_ACCESS_TOKEN",
  "refresh_token": "YOUR_REFRESH_TOKEN",
  "signature": "signature",
  "scope": "refresh_token api id",
  "instance_url": "https://instance.salesforce.com",
  "id": "https://login.salesforce.com/id/id",
  "token_type": "Bearer",
  "issued_at": "timestamp"
}
&lt;/pre&gt;

&lt;p&gt;Outside of the access and refresh tokens, the instance_url is also important. It’s what you’ll need to build the base of your future API calls.&lt;/p&gt;

&lt;p&gt;Now that we have the access token, we’re able to start making requests to send and receive data on our user's behalf. Something to keep in mind, though, as mentioned earlier, is that these access tokens will always expire at some point.&lt;/p&gt;

&lt;p&gt;Due to that, you’ll want to keep your access token up to date by making a call to the token endpoint and changing the grant_type to ‘refresh_token’ along with including the refresh token you had received in the previous call.&lt;/p&gt;

&lt;p&gt;Example call:&lt;/p&gt;

&lt;pre&gt;
curl -X POST "https://login.salesforce.com/services/oauth2/token?grant_type=refresh_token&amp;amp;client_id=YOUR_CONSUMER_ID&amp;amp;client_secret=YOUR_CONSUMER_SECRET&amp;amp;refresh_token=YOUR_REFRESH_TOKEN"
&lt;/pre&gt;

&lt;p&gt;Example response:&lt;/p&gt;

&lt;p&gt;{&lt;br&gt;
  "access_token": "REFRESHED_ACCESS_TOKEN",&lt;br&gt;
  "signature": "signature",&lt;br&gt;
  "scope": "refresh_token id api",&lt;br&gt;
  "instance_url": "&lt;a href="https://INSTANCE.salesforce.com"&gt;https://INSTANCE.salesforce.com&lt;/a&gt;",&lt;br&gt;
  "id": "&lt;a href="https://login.salesforce.com/id/idE"&gt;https://login.salesforce.com/id/idE&lt;/a&gt;",&lt;br&gt;
  "token_type": "Bearer",&lt;br&gt;
  "issued_at": "timestamp"&lt;br&gt;
}&lt;br&gt;
Now that we have a way to keep our access tokens valid and up to date, we’re set up and ready to start working with Salesforce objects.&lt;/p&gt;
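&lt;p&gt;As a rough sketch (the helper name is mine, not Salesforce’s), the refresh call above can be assembled like this in Node:&lt;/p&gt;

```javascript
// Sketch only: buildRefreshBody is a hypothetical helper and the
// placeholder values are not real credentials. It builds the
// form-encoded body for the refresh-token call shown above.
function buildRefreshBody(clientId, clientSecret, refreshToken) {
  const params = new URLSearchParams({
    grant_type: 'refresh_token',
    client_id: clientId,
    client_secret: clientSecret,
    refresh_token: refreshToken,
  });
  return params.toString();
}

// POST the resulting string to
// https://login.salesforce.com/services/oauth2/token
```

&lt;p&gt;POST that body with a Content-Type of application/x-www-form-urlencoded, which is what an OAuth2 token endpoint expects.&lt;/p&gt;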

&lt;p&gt;Understanding Salesforce objects&lt;br&gt;
Salesforce objects (sObjects) are effectively database tables that contain an organization’s data. Examples of standard Salesforce objects include “Accounts”, “Contacts”, “Leads”, and “Tasks.” You can also create your own custom objects.&lt;/p&gt;

&lt;p&gt;A Salesforce record describes a specific occurrence of an object (such as a specific contact like “Jonny Appleseed”, represented by a Contact object). A rough analogy is a row in a database table.&lt;/p&gt;

&lt;p&gt;For the following examples, we’re just going to focus on Contacts.&lt;/p&gt;

&lt;p&gt;Send data from your app to Salesforce&lt;br&gt;
Creating a contact in Salesforce is really straightforward. You just need to build the API URL using the instance from your access token response and use the access token value as your bearer token in the header.&lt;/p&gt;

&lt;p&gt;One thing to keep an eye out for, though, is characters that need to be escaped in your access token.&lt;/p&gt;

&lt;p&gt;For example, this access token should have the exclamation mark escaped&lt;/p&gt;

&lt;p&gt;So this:&lt;/p&gt;

&lt;p&gt;00D1r000000dumU!AQEAQFd.O1Q5DVQrUYvr.........&lt;br&gt;
Becomes this:&lt;/p&gt;

&lt;p&gt;00D1r000000dumU!AQEAQFd.O1Q5DVQrUYvr........&lt;br&gt;
you can then make the below call to create a contact.&lt;/p&gt;

&lt;p&gt;Example request&lt;/p&gt;

&lt;p&gt;curl https://INSTANCE.salesforce.com/services/data/v42.0/sobjects/Contact -H "Authorization: Bearer YOUR_ACCESS_TOKEN" -H "Content-Type: application/json" -d '{"FirstName" : "Johnny", "LastName" : "Appleseed"}'&lt;br&gt;
(Your contact will need a last name as the minimum for an entry to be created.)&lt;/p&gt;
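&lt;p&gt;The same request can be assembled in code; this is just a sketch, and contactCreateRequest is a name I made up:&lt;/p&gt;

```javascript
// Sketch: build the pieces of the create-contact call shown above.
// Pass the result to fetch or any other HTTP client.
function contactCreateRequest(instanceUrl, accessToken, fields) {
  return {
    url: `${instanceUrl}/services/data/v42.0/sobjects/Contact`,
    options: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${accessToken}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(fields),
    },
  };
}
```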

&lt;p&gt;The response you get back will be the ID of your new contact:&lt;/p&gt;

&lt;p&gt;{"id":"0031r000029NDckAAG","success":true,"errors":[]}&lt;br&gt;
Which will also let you build a link directly to the contact.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://INSTANCE.salesforce.com/0031r000029NDckAAG"&gt;https://INSTANCE.salesforce.com/0031r000029NDckAAG&lt;/a&gt;&lt;br&gt;
Retrieving data from Salesforce to your app&lt;br&gt;
If you want to retrieve a list of contacts, there are a few ways to do it. You can make a request to the Contact endpoint, and it will return a bunch of information about your contacts that I found a bit cumbersome to navigate.&lt;/p&gt;

&lt;p&gt;I actually prefer to use a combination of the Contact ‘describe’ endpoint, which will return all of the fields we can populate for a contact, and a custom query.&lt;/p&gt;

&lt;p&gt;Example request:&lt;/p&gt;

&lt;p&gt;curl https://INSTANCE.salesforce.com/services/data/v20.0/sobjects/Contact/describe -H "Authorization: Bearer YOUR_ACCESS_TOKEN"&lt;br&gt;
That will give a detailed response listing all of the fields available. (I’ve only included the ‘FirstName’ field for brevity.)&lt;/p&gt;

&lt;p&gt;{&lt;br&gt;
      "autoNumber": false,&lt;br&gt;
      "byteLength": 120,&lt;br&gt;
      "calculated": false,&lt;br&gt;
      "calculatedFormula": null,&lt;br&gt;
      "caseSensitive": false,&lt;br&gt;
      "controllerName": null,&lt;br&gt;
      "createable": true,&lt;br&gt;
      "custom": false,&lt;br&gt;
      "defaultValue": null,&lt;br&gt;
      "defaultValueFormula": null,&lt;br&gt;
      "defaultedOnCreate": false,&lt;br&gt;
      "dependentPicklist": false,&lt;br&gt;
      "deprecatedAndHidden": false,&lt;br&gt;
      "digits": 0,&lt;br&gt;
      "externalId": false,&lt;br&gt;
      "filterable": true,&lt;br&gt;
      "groupable": true,&lt;br&gt;
      "htmlFormatted": false,&lt;br&gt;
      "idLookup": false,&lt;br&gt;
      "inlineHelpText": null,&lt;br&gt;
      "label": "First Name",&lt;br&gt;
      "length": 40,&lt;br&gt;
      "name": "FirstName",&lt;br&gt;
      "nameField": false,&lt;br&gt;
      "namePointing": false,&lt;br&gt;
      "nillable": true,&lt;br&gt;
      "picklistValues": [],&lt;br&gt;
      "precision": 0,&lt;br&gt;
      "referenceTo": [],&lt;br&gt;
      "relationshipName": null,&lt;br&gt;
      "relationshipOrder": null,&lt;br&gt;
      "restrictedPicklist": false,&lt;br&gt;
      "scale": 0,&lt;br&gt;
      "soapType": "xsd:string",&lt;br&gt;
      "sortable": true,&lt;br&gt;
      "type": "string",&lt;br&gt;
      "unique": false,&lt;br&gt;
      "updateable": true,&lt;br&gt;
      "writeRequiresMasterRead": false&lt;br&gt;
    }&lt;br&gt;
Once you’ve got the fields, you can use them (or a selection of them) to build a custom query:&lt;/p&gt;

&lt;p&gt;curl "https://INSTANCE.salesforce.com/services/data/v42.0/query/?q=SELECT+id,name,email,phone+from+Contact" -H 'Authorization: Bearer YOUR_ACCESS_TOKEN'&lt;br&gt;
That will return all contacts with their associated properties.&lt;/p&gt;

&lt;p&gt;{"totalSize":1,"done":true,"records":[{"attributes":{"type":"Contact","url":"/services/data/v42.0/sobjects/Contact/id"},"Id":"id","Name":"Jonny Appleseed","Email":"&lt;a href="mailto:jonny.appleseed@myfriend.com"&gt;jonny.appleseed@myfriend.com&lt;/a&gt;","Phone":"555-555-555"} ]}&lt;/p&gt;

&lt;p&gt;That should now give you a way to retrieve contact data from Salesforce to use within your app.&lt;/p&gt;

&lt;p&gt;This is good material on how to use the refresh token to get an access token, along with some good practices:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oauth.com/oauth2-servers/making-authenticated-requests/refreshing-an-access-token/"&gt;https://www.oauth.com/oauth2-servers/making-authenticated-requests/refreshing-an-access-token/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>What is Tail Call Optimization?</title>
      <dc:creator>Karthik Jasthi</dc:creator>
      <pubDate>Sun, 09 Feb 2020 06:27:45 +0000</pubDate>
      <link>https://dev.to/karthikjasthi/what-is-tail-call-optimization-3o58</link>
      <guid>https://dev.to/karthikjasthi/what-is-tail-call-optimization-3o58</guid>
      <description>&lt;p&gt;To understand what tail call optimization (TCO) is, we will examine the following piece of code. I’ll first explain how it is executed without TCO and then with TCO.&lt;/p&gt;

&lt;p&gt;function id(x) {&lt;br&gt;
    return x; // (A)&lt;br&gt;
}&lt;br&gt;
function f(a) {&lt;br&gt;
    let b = a + 1;&lt;br&gt;
    return id(b); // (B)&lt;br&gt;
}&lt;br&gt;
console.log(f(2)); // (C)&lt;/p&gt;

&lt;p&gt;Normal execution &lt;/p&gt;

&lt;p&gt;Let’s assume there is a JavaScript engine that manages function calls by storing local variables and return addresses on a stack. How would such an engine execute the code?&lt;/p&gt;

&lt;p&gt;Step 1. Initially, there are only the global variables id and f on the stack.&lt;/p&gt;

&lt;p&gt;The block of stack entries encodes the state (local variables, including parameters) of the current scope and is called a stack frame.&lt;/p&gt;

&lt;p&gt;Step 2. In line C, f() is called: First, the location to return to is saved on the stack. Then f’s parameters are allocated and execution jumps to its body. The stack now looks as follows.&lt;/p&gt;

&lt;p&gt;There are now two frames on the stack: One for the global scope (bottom) and one for f() (top). f’s stack frame includes the return address, line C.&lt;/p&gt;

&lt;p&gt;Step 3. id() is called in line B. Again, a stack frame is created that contains the return address and id’s parameter.&lt;/p&gt;

&lt;p&gt;Step 4. In line A, the result x is returned. id’s stack frame is removed and execution jumps to the return address, line B. (There are several ways in which returning a value could be handled. Two common solutions are to leave the result on a stack or to hand it over in a register. I ignore this part of execution here.)&lt;/p&gt;

&lt;p&gt;The stack now looks as follows:&lt;/p&gt;

&lt;p&gt;Step 5. In line B, the value that was returned by id is returned to f’s caller. Again, the topmost stack frame is removed and execution jumps to the return address, line C.&lt;/p&gt;

&lt;p&gt;Step 6. Line C receives the value 3 and logs it.&lt;/p&gt;
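&lt;p&gt;The six steps above can be mirrored with a toy model (this is just an illustration; a real engine manages frames internally):&lt;/p&gt;

```javascript
// Toy illustration only: mirror the frame pushes and pops described in
// steps 1-6 for f(2). A real engine does this bookkeeping itself.
const frames = ['global'];      // step 1: global scope frame

function id(x) {
  return x;                     // (A)
}

function f(a) {
  const b = a + 1;
  frames.push('id');            // step 3: frame for id(), return address = line B
  const result = id(b);         // (B)
  frames.pop();                 // step 4: id's frame removed
  return result;
}

frames.push('f');               // step 2: frame for f(), return address = line C
const value = f(2);             // (C)
frames.pop();                   // step 5: f's frame removed
// value is 3 (step 6) and only the global frame remains
```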

&lt;p&gt;Tail call optimization&lt;br&gt;&lt;br&gt;
function id(x) {&lt;br&gt;
    return x; // (A)&lt;br&gt;
}&lt;br&gt;
function f(a) {&lt;br&gt;
    let b = a + 1;&lt;br&gt;
    return id(b); // (B)&lt;br&gt;
}&lt;br&gt;
console.log(f(2)); // (C)&lt;br&gt;
If you look at the previous section then there is one step that is unnecessary – step 5. All that happens in line B is that the value returned by id() is passed on to line C. Ideally, id() could do that itself and the intermediate step could be skipped.&lt;/p&gt;

&lt;p&gt;We can make this happen by implementing the function call in line B differently. Before the call happens, the stack looks as follows.&lt;/p&gt;

&lt;p&gt;If we examine the call we see that it is the very last action in f(). Once id() is done, the only remaining action performed by f() is to pass id’s result to f’s caller. Therefore, f’s variables are not needed anymore and its stack frame can be removed before making the call. The return address given to id() is f’s return address, line C. During the execution of id(), the stack looks like this:&lt;/p&gt;

&lt;p&gt;Then id() returns the value 3. You could say that it returns that value for f(), because it transports it to f’s caller, line C.&lt;/p&gt;

&lt;p&gt;Let’s review: The function call in line B is a tail call. Such a call can be done with zero stack growth. To find out whether a function call is a tail call, we must check whether it is in a tail position (i.e., the last action in a function). How that is done is explained in the next section.&lt;/p&gt;

&lt;p&gt;Checking whether a function call is in a tail position&lt;br&gt;&lt;br&gt;
We have just learned that tail calls are function calls that can be executed more efficiently. But what counts as a tail call?&lt;/p&gt;

&lt;p&gt;First, the way in which you call a function does not matter. The following calls can all be optimized if they appear in a tail position:&lt;/p&gt;

&lt;p&gt;Function call: func(···)&lt;br&gt;
Dispatched method call: obj.method(···)&lt;br&gt;
Direct method call via call(): func.call(···)&lt;br&gt;
Direct method call via apply(): func.apply(···)&lt;br&gt;
Tail calls in expressions&lt;br&gt;&lt;br&gt;
Arrow functions can have expressions as bodies. For tail call optimization, we therefore have to figure out where function calls are in tail positions in expressions. Only the following expressions can contain tail calls:&lt;/p&gt;

&lt;p&gt;The conditional operator (? :)&lt;br&gt;
The logical Or operator (||)&lt;br&gt;
The logical And operator (&amp;amp;&amp;amp;)&lt;br&gt;
The comma operator (,)&lt;br&gt;
Let’s look at an example for each one of them.&lt;/p&gt;

&lt;p&gt;The conditional operator (? :)&lt;br&gt;&lt;br&gt;
const a = x =&amp;gt; x ? f() : g();&lt;br&gt;
Both f() and g() are in tail position.&lt;/p&gt;

&lt;p&gt;The logical Or operator (||)&lt;br&gt;&lt;br&gt;
const a = () =&amp;gt; f() || g();&lt;br&gt;
f() is not in a tail position, but g() is in a tail position. To see why, take a look at the following code, which is equivalent to the previous code:&lt;/p&gt;

&lt;p&gt;const a = () =&amp;gt; {&lt;br&gt;
    let fResult = f(); // not a tail call&lt;br&gt;
    if (fResult) {&lt;br&gt;
        return fResult;&lt;br&gt;
    } else {&lt;br&gt;
        return g(); // tail call&lt;br&gt;
    }&lt;br&gt;
};&lt;br&gt;
The result of the logical Or operator depends on the result of f(), which is why that function call is not in a tail position (the caller does something with it other than returning it). However, g() is in a tail position.&lt;/p&gt;

&lt;p&gt;The logical And operator&lt;br&gt;&lt;br&gt;
const a = () =&amp;gt; f() &amp;amp;&amp;amp; g();&lt;br&gt;
f() is not in a tail position, but g() is in a tail position. To see why, take a look at the following code, which is equivalent to the previous code:&lt;/p&gt;

&lt;p&gt;const a = () =&amp;gt; {&lt;br&gt;
    let fResult = f(); // not a tail call&lt;br&gt;
    if (!fResult) {&lt;br&gt;
        return fResult;&lt;br&gt;
    } else {&lt;br&gt;
        return g(); // tail call&lt;br&gt;
    }&lt;br&gt;
};&lt;br&gt;
The result of the logical And operator depends on the result of f(), which is why that function call is not in a tail position (the caller does something with it other than returning it). However, g() is in a tail position.&lt;/p&gt;

&lt;p&gt;The comma operator (,)&lt;br&gt;&lt;br&gt;
const a = () =&amp;gt; (f() , g());&lt;br&gt;
f() is not in a tail position, but g() is in a tail position. To see why, take a look at the following code, which is equivalent to the previous code:&lt;/p&gt;

&lt;p&gt;const a = () =&amp;gt; {&lt;br&gt;
    f();&lt;br&gt;
    return g();&lt;br&gt;
};&lt;br&gt;
Tail calls in statements&lt;br&gt;&lt;br&gt;
For statements, the following rules apply.&lt;/p&gt;

&lt;p&gt;Only these compound statements can contain tail calls:&lt;/p&gt;

&lt;p&gt;Blocks (as delimited by {}, with or without a label)&lt;br&gt;
if: in either the “then” clause or the “else” clause.&lt;br&gt;
do-while, while, for: in their bodies.&lt;br&gt;
switch: in its body.&lt;br&gt;
try-catch: only in the catch clause. The try clause has the catch clause as a context that can’t be optimized away.&lt;br&gt;
try-finally, try-catch-finally: only in the finally clause, which is a context of the other clauses that can’t be optimized away.&lt;br&gt;
Of all the atomic (non-compound) statements, only return can contain a tail call. All other statements have context that can’t be optimized away. The following statement contains a tail call if expr contains a tail call.&lt;/p&gt;

&lt;p&gt;return «expr»;&lt;br&gt;
Tail call optimization can only be made in strict mode&lt;br&gt;&lt;br&gt;
In non-strict mode, most engines have the following two properties that allow you to examine the call stack:&lt;/p&gt;

&lt;p&gt;func.arguments: contains the arguments of the most recent invocation of func.&lt;br&gt;
func.caller: refers to the function that most recently called func.&lt;br&gt;
With tail call optimization, these properties don’t work, because the information that they rely on may have been removed. Therefore, strict mode forbids these properties (as described in the language specification) and tail call optimization only works in strict mode.&lt;/p&gt;
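&lt;p&gt;You can observe the strict-mode restriction directly; this sketch just probes func.caller:&lt;/p&gt;

```javascript
// Demonstration: reading .caller on a strict-mode function throws a
// TypeError, so callers cannot inspect the frames that tail call
// optimization would elide.
function probe() { 'use strict'; }

let threw = false;
try {
  probe.caller; // restricted accessor for strict-mode functions
} catch (e) {
  threw = e instanceof TypeError;
}
// threw is true in engines that implement the strict-mode restriction
```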

&lt;p&gt;Pitfall: solo function calls are never in tail position&lt;br&gt;&lt;br&gt;
The function call bar() in the following code is not in tail position:&lt;/p&gt;

&lt;p&gt;function foo() {&lt;br&gt;
    bar(); // this is not a tail call in JS&lt;br&gt;
}&lt;br&gt;
The reason is that the last action of foo() is not the function call bar(), it is (implicitly) returning undefined. In other words, foo() behaves like this:&lt;/p&gt;

&lt;p&gt;function foo() {&lt;br&gt;
    bar();&lt;br&gt;
    return undefined;&lt;br&gt;
}&lt;br&gt;
Callers can rely on foo() always returning undefined. If bar() were to return a result for foo(), due to tail call optimization, then that would change foo’s behavior.&lt;/p&gt;

&lt;p&gt;Therefore, if we want bar() to be a tail call, we have to change foo() as follows.&lt;/p&gt;

&lt;p&gt;function foo() {&lt;br&gt;
    return bar(); // tail call&lt;br&gt;
}&lt;br&gt;
Tail-recursive functions&lt;br&gt;&lt;br&gt;
A function is tail-recursive if the main recursive calls it makes are in tail positions.&lt;/p&gt;

&lt;p&gt;For example, the following function is not tail recursive, because the main recursive call in line A is not in a tail position:&lt;/p&gt;

&lt;p&gt;function factorial(x) {&lt;br&gt;
    if (x &amp;lt;= 0) {&lt;br&gt;
        return 1;&lt;br&gt;
    } else {&lt;br&gt;
        return x * factorial(x-1); // (A)&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
factorial() can be implemented via a tail-recursive helper function facRec(). The main recursive call in line A is in a tail position.&lt;/p&gt;

&lt;p&gt;function factorial(n) {&lt;br&gt;
    return facRec(n, 1);&lt;br&gt;
}&lt;br&gt;
function facRec(x, acc) {&lt;br&gt;
    if (x &amp;lt;= 1) {&lt;br&gt;
        return acc;&lt;br&gt;
    } else {&lt;br&gt;
        return facRec(x-1, x*acc); // (A)&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
That is, some non-tail-recursive functions can be transformed into tail-recursive functions.&lt;/p&gt;
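&lt;p&gt;Tracing the tail-recursive version makes the transformation concrete:&lt;/p&gt;

```javascript
// The tail-recursive helper from above, repeated here so the trace is
// self-contained. Each recursive call is in tail position, so with TCO
// the stack stays flat while the accumulator carries the running product.
function facRec(x, acc) {
  if (x <= 1) {
    return acc;
  }
  return facRec(x - 1, x * acc); // (A) tail call
}

// facRec(4, 1) -> facRec(3, 4) -> facRec(2, 12) -> facRec(1, 24) -> 24
const result = facRec(4, 1);
```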

&lt;p&gt;Tail-recursive loops&lt;br&gt;&lt;br&gt;
Tail call optimization makes it possible to implement loops via recursion without growing the stack. The following are two examples.&lt;/p&gt;

&lt;p&gt;forEach()&lt;br&gt;&lt;br&gt;
function forEach(arr, callback, start = 0) {&lt;br&gt;
    if (0 &amp;lt;= start &amp;amp;&amp;amp; start &amp;lt; arr.length) {&lt;br&gt;
        callback(arr[start], start, arr);&lt;br&gt;
        return forEach(arr, callback, start+1); // tail call&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
forEach(['a', 'b'], (elem, i) =&amp;gt; console.log(`${i}. ${elem}`));&lt;/p&gt;

&lt;p&gt;// Output:&lt;br&gt;
// 0. a&lt;br&gt;
// 1. b&lt;br&gt;
findIndex()&lt;br&gt;&lt;br&gt;
function findIndex(arr, predicate, start = 0) {&lt;br&gt;
    if (0 &amp;lt;= start &amp;amp;&amp;amp; start &amp;lt; arr.length) {&lt;br&gt;
        if (predicate(arr[start])) {&lt;br&gt;
            return start;&lt;br&gt;
        }&lt;br&gt;
        return findIndex(arr, predicate, start+1); // tail call&lt;br&gt;
    }&lt;br&gt;
}&lt;br&gt;
findIndex(['a', 'b'], x =&amp;gt; x === 'b'); // 1&lt;/p&gt;
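&lt;p&gt;For comparison (and as a fallback in engines that do not implement TCO), here is the iterative loop that tail call optimization effectively turns findIndex into:&lt;/p&gt;

```javascript
// Sketch: the loop equivalent of the tail-recursive findIndex above.
// Same behavior, constant stack, and it works without engine support.
function findIndexLoop(arr, predicate, start = 0) {
  while (start >= 0 && start < arr.length) {
    if (predicate(arr[start])) {
      return start;
    }
    start += 1;
  }
}

findIndexLoop(['a', 'b'], x => x === 'b'); // 1
```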

</description>
    </item>
  </channel>
</rss>
