How do you compare two large, complex items efficiently? You could walk through each data set iteratively and compare the two element by element, but that is slow and computationally expensive precisely because the sets are large and complex: an O(n) pass over each set quickly becomes prohibitive for fast searching.
At their core, hash functions take variable-length input and produce fixed-length output. A hash table stores items as key-value pairs and supports lookup by key: we use the hash of a key as the index of the item itself, so a user can find an item simply by computing its hash.

One interesting problem arises when a key's hash conflicts with another key's, known as a collision. Because the data sets are large and complex by nature, the hash of one key may land on the index of another. To mitigate this, every item that produces the same hash can be appended to a linked list called a "bucket". We then need two functions: one that updates the hash table by placing a key into its bucket, and one that searches by hashing the key to return the key-value pair we want. This is the hashing function I came up with, using the prime numbers 17 and 13 to generate my hash keys for a fairly simple solution:
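As a minimal sketch of the approach described above, here is a hash table with separate chaining in Python. The hash mixes the primes 17 and 13 as the text mentions, but the exact formula, class name, and method names here are illustrative assumptions, not the author's original code:

```python
class HashTable:
    def __init__(self, size=31):
        self.size = size
        # One list ("bucket") per slot; colliding keys chain into the same bucket.
        self.buckets = [[] for _ in range(size)]

    def _hash(self, key):
        # Polynomial-style string hash seeded with the primes 17 and 13.
        h = 17
        for ch in str(key):
            h = h * 13 + ord(ch)
        return h % self.size

    def put(self, key, value):
        bucket = self.buckets[self._hash(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: update in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (or collision): chain it

    def get(self, key):
        # Hash the key to find its bucket, then scan the chain for a match.
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None


table = HashTable()
table.put("apple", 1)
table.put("banana", 2)
print(table.get("apple"))    # 1
print(table.get("missing"))  # None
```

With only 31 slots, collisions are expected; the buckets absorb them at the cost of a short linear scan per lookup, which stays fast as long as the table isn't overloaded.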