Ravi Kishan

Building a High-Performance Full-Text Search Engine in Go

1. Introduction

In today’s world, where vast amounts of information are constantly being generated, accessing relevant data efficiently is essential. Full-text search engines enable fast data retrieval by indexing textual content, forming the backbone of applications ranging from search engines to data analytics tools. Given the massive datasets involved, search engines require a sophisticated approach to indexing and querying for optimal performance.

This blog will walk you through building a full-text search engine using Go, focusing on advanced concepts like data streaming, multithreading, and efficient indexing structures. You’ll see how to handle and search through large datasets—specifically Wikipedia abstracts—in a memory-efficient way. By following this guide, you’ll gain insights into leveraging Go’s concurrency model and its suitability for high-performance applications.


2. Technology Stack

The technology stack for this project includes Go as the primary programming language, selected for its straightforward syntax, robust standard library, and native concurrency support. Here’s a breakdown of the essential tools and libraries:

  • Programming Language: Go (Golang)

    • Go offers an efficient environment for concurrent applications with tools for managing multiple tasks without sacrificing performance.
  • Libraries:

    • Gzip and XML Parsing: These are essential for processing Wikipedia’s compressed XML data. The standard encoding/xml and compress/gzip libraries allow straightforward parsing and decompression, fitting well within Go's ecosystem.
    • Sync Package: This core Go package is used for managing concurrent processes with constructs like sync.WaitGroup for coordinating goroutines and sync.Mutex for handling data access.
    • kljensen/snowball: This library provides stemming for tokens, improving search matching by reducing words to their base forms (see the short example after this list).
  • Data Source:

    • The project utilizes Wikipedia abstracts, a compressed XML file containing summaries of Wikipedia articles. This dataset is diverse and large enough to serve as a practical test of the search engine's capabilities; the enwiki-latest-abstract1.xml.gz dump used here is published on the Wikimedia dumps site.
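
To get a feel for what the stemmer does, here is a minimal standalone usage sketch (not part of the project code) of the same english sub-package the project imports:

package main

import (
    "fmt"

    snowballeng "github.com/kljensen/snowball/english"
)

func main() {
    // Stem reduces a word to its root form; the second argument
    // controls whether stop words should be stemmed as well.
    fmt.Println(snowballeng.Stem("running", false)) // run
    fmt.Println(snowballeng.Stem("cats", false))    // cat
}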

3. Root of the Idea

Problem Statement

With ever-growing data volumes, retrieving meaningful information efficiently is a significant challenge. Search engines need to manage and access vast textual datasets quickly, a problem that has led to innovations like inverted indexes, tokenization, and data normalization.

Inspiration and Research

Popular tools like Elasticsearch demonstrate the power of a full-text search engine built on robust indexing and retrieval techniques. Inspired by these industry-standard engines, this project seeks to implement a similar solution in Go. Go’s simplicity, performance, and concurrency features make it well-suited for this task, offering the ability to explore concepts used by major search engines and tailor them to a custom implementation.

Intended Users

This project is designed for those interested in understanding how search engines work under the hood, as well as developers and enthusiasts eager to explore Go’s concurrency model. By providing hands-on experience, it’s an opportunity to grasp how Go can handle intensive tasks like real-time indexing and searching, especially for those interested in backend and full-stack development.


4. Reasons for Building This Project

Hands-on Learning

This project offers a practical approach to mastering streaming and multithreading in Go, as well as diving into how full-text search engines work. It allows for experimentation with indexing, tokenization, and document processing, providing a comprehensive understanding of search engine internals.

Efficiency in Go

Go's concurrency model is a natural fit for applications that need many tasks running in parallel, which makes it an ideal language for this project's performance-focused objectives.

Improving Go Skills

This project builds advanced skills in Go, a language widely used in cloud-native and scalable applications. It provides exposure to implementing multithreading and concurrency solutions while highlighting Go's unique approach to managing memory and performance in high-demand applications.


5. The Working Process and Key Concepts

Overview of the Workflow

The engine follows a structured workflow involving multiple stages:

  1. Document Loading: Documents are loaded and decompressed from the XML file in a streaming fashion, minimizing memory usage.
  2. Tokenization and Text Processing: Each document is tokenized, with text normalized by converting to lowercase, removing stopwords, and applying stemming.
  3. Index Construction: The processed tokens are stored in an inverted index, mapping each token to the document IDs containing it.
  4. Saving/Loading Index: The final index can be saved and loaded from disk, preserving the indexing work for future sessions and speeding up the search engine’s initialization.
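
Condensed from the project's main.go (shown in full at the end of this post), the four stages wire together roughly like this; the done channel here makes explicit that indexing must finish before the index is saved:

package main

import (
    "log"

    utils "github.com/ravikisha/go-full-text-search/utils"
)

func main() {
    idx := utils.NewIndex()
    docChan := make(chan utils.Document, 100)

    // Stage 3 runs concurrently: worker goroutines tokenize and index
    // documents as they arrive on the channel.
    done := make(chan struct{})
    go func() {
        idx.AddStreamed(docChan)
        close(done)
    }()

    // Stages 1–2: stream, decompress, and parse documents into the channel.
    if err := utils.LoadDocuments("enwiki-latest-abstract1.xml.gz", docChan); err != nil {
        log.Fatal(err)
    }
    close(docChan)
    <-done // wait for the indexing workers to drain the channel

    // Stage 4: persist the index for future sessions.
    if err := idx.Save("index.gob"); err != nil {
        log.Fatal(err)
    }
}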

(Figure: search flow chart illustrating the workflow above.)

Data Streaming and Processing

Streaming allows for processing documents one at a time without loading the entire dataset into memory. The LoadDocuments function handles decompression and parsing in real time, feeding each document into a channel. This setup ensures that the system handles large datasets by sequentially processing data, reducing memory strain.

Concurrency in Document Processing

Document processing is concurrent, with multiple goroutines responsible for parsing, analyzing, and indexing documents. This concurrency significantly accelerates the indexing process and allows for real-time search updates.


6. Brief Introduction to Streaming and Multithreading

Streaming in Go

Definition and Purpose

Streaming is a technique where data is processed in chunks as it becomes available, rather than loading it all at once. This is particularly useful for large datasets where loading the entire dataset is impractical due to memory limitations.

Benefits for Large Datasets

Streaming helps manage memory efficiently by only handling one small part of the data at any given time, which is ideal for this search engine. The system doesn’t need to load all Wikipedia abstracts at once; instead, it processes each document individually in a steady flow.

Implementation Example

The LoadDocuments function loads and decompresses documents in a streaming manner, using Go’s encoding/xml and compress/gzip libraries to parse and send each document to a processing channel.

Multithreading in Go

Definition and Core Concepts

Multithreading lets a program execute several streams of work at the same time, improving throughput. Go’s native concurrency model, built on goroutines and channels, provides a straightforward way to achieve this.

Concurrency in Go

Concurrency in Go is achieved using goroutines, lightweight threads managed by the Go runtime that allow many functions to run concurrently. Channels let goroutines communicate safely, passing data from one to another without hand-rolled synchronization around every exchange.
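
As a quick standalone illustration of these primitives (not project code), one goroutine can produce values while another consumes them:

package main

import "fmt"

func main() {
    ch := make(chan string)

    // Producer: sends values into the channel from its own goroutine.
    go func() {
        for _, word := range []string{"small", "wild", "cat"} {
            ch <- word
        }
        close(ch) // signal that no more values will arrive
    }()

    // Consumer: the range loop receives until the channel is closed.
    for word := range ch {
        fmt.Println("received:", word)
    }
}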

How It’s Used Here

In this search engine, multiple goroutines handle document processing and indexing concurrently. For example, the AddStreamed function reads from a channel of documents and indexes each one concurrently, allowing for faster indexing across large datasets.

Challenges and Optimizations

Managing multiple threads can lead to issues like race conditions, where multiple threads access shared resources simultaneously. Go’s sync package, with Mutex and WaitGroup, helps avoid these issues by synchronizing data access and ensuring that tasks complete before proceeding to the next step.
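
Here is a minimal standalone sketch of the two constructs working together; without the mutex, the concurrent map writes below would be a data race (Go's runtime usually detects unsynchronized map writes and crashes):

package main

import (
    "fmt"
    "sync"
)

func main() {
    var (
        mu     sync.Mutex
        wg     sync.WaitGroup
        counts = make(map[string]int)
    )

    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock() // serialize access to the shared map
            counts["indexed"]++
            mu.Unlock()
        }()
    }

    wg.Wait()                      // block until every worker has called Done
    fmt.Println(counts["indexed"]) // always prints 4
}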


7. Functionality and Features of the Full-Text Search Engine

This full-text search engine leverages Go's concurrency capabilities to build a performant indexing and search mechanism. By using data streaming and multithreading, the application efficiently processes large datasets, such as Wikipedia abstracts, without overloading memory. This section explains the primary functions, features, and key methods used in the code.


1. Core Features of the Search Engine

  • Efficient Indexing: Uses an inverted index to allow quick retrieval of documents matching a query term.
  • Concurrent Processing: Multithreads the document indexing and search operations, enabling fast, non-blocking operations.
  • Document Storage with Metadata: Stores metadata (such as title and URL) alongside indexed content, allowing retrieval of complete document details.
  • Persistence of Indexes: Indexes can be saved to and loaded from disk, allowing for reusable search indexes across sessions.
  • Data Filtering and Normalization: Includes stopword removal, case normalization, and stemming to standardize search tokens.

2. Key Components and Functionality

a. Document Loading and Streaming

The LoadDocuments function handles the loading of documents from a compressed XML file, decompressing and parsing it as a stream. This approach is memory-efficient and particularly useful for large datasets.
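
Each parsed record is represented by the Document type from document.go; the ID field is not part of the XML and is assigned sequentially at load time:

// Document represents a Wikipedia abstract dump document.
type Document struct {
    Title string `xml:"title"`
    URL   string `xml:"url"`
    Text  string `xml:"abstract"`
    ID    int
}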

Code Snippet: LoadDocuments

// LoadDocuments loads documents from a gzip-compressed XML file and sends them through a channel.
func LoadDocuments(path string, docChan chan<- Document) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()

    gz, err := gzip.NewReader(f)
    if err != nil {
        return err
    }
    defer gz.Close()

    dec := xml.NewDecoder(gz)
    dump := struct {
        Documents []Document `xml:"doc"`
    }{}

    if err := dec.Decode(&dump); err != nil {
        return err
    }

    for i, doc := range dump.Documents {
        doc.ID = i
        docChan <- doc
    }
    return nil
}

Here:

  • The gzip stream is decompressed on the fly, so the compressed file never has to be fully expanded before parsing begins.
  • One caveat: dec.Decode(&dump) still materializes the entire slice of parsed documents before any are sent, so memory use grows with the dump size. For strictly bounded memory, the decoder's token API can parse one <doc> element at a time, as sketched below.
  • Documents are then sent one by one to the channel docChan, allowing them to be indexed as soon as they’re delivered, ideal for concurrent indexing.
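
A hypothetical streaming variant of LoadDocuments (not part of the project; it assumes the same Document type plus an extra io import) could look like this:

// LoadDocumentsStreaming decodes <doc> elements one at a time,
// so only a single document is held in memory during parsing.
func LoadDocumentsStreaming(path string, docChan chan<- Document) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()

    gz, err := gzip.NewReader(f)
    if err != nil {
        return err
    }
    defer gz.Close()

    dec := xml.NewDecoder(gz)
    id := 0
    for {
        tok, err := dec.Token()
        if err == io.EOF {
            return nil // reached the end of the dump
        }
        if err != nil {
            return err
        }
        // Decode each <doc> element as soon as its start tag appears.
        if se, ok := tok.(xml.StartElement); ok && se.Name.Local == "doc" {
            var doc Document
            if err := dec.DecodeElement(&doc, &se); err != nil {
                return err
            }
            doc.ID = id
            id++
            docChan <- doc
        }
    }
}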

b. Tokenization and Text Analysis

The tokenizer.go file includes functions to normalize and standardize text through tokenization, case normalization, stopword removal, and stemming.

Code Snippet: analyze

// analyze analyzes the text and returns a slice of tokens.
func analyze(text string) []string {
    tokens := tokenize(text)
    tokens = lowercaseFilter(tokens)
    tokens = stopwordFilter(tokens)
    tokens = stemmerFilter(tokens)
    return tokens
}

This function:

  • Tokenizes text into individual words or tokens.
  • Converts tokens to lowercase to ensure case-insensitivity.
  • Removes stopwords, reducing unnecessary data in the index.
  • Stems tokens to their root forms, ensuring search consistency (e.g., “running” becomes “run”).
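
The underlying tokenize helper (from tokenizer.go) does the splitting, treating any rune that is neither a letter nor a digit as a separator:

// tokenize returns a slice of tokens for the given text.
func tokenize(text string) []string {
    return strings.FieldsFunc(text, func(r rune) bool {
        // Split on any character that is not a letter or a number.
        return !unicode.IsLetter(r) && !unicode.IsNumber(r)
    })
}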

c. Building and Managing the Inverted Index

The Index struct is the core data structure, holding the inverted index and document store. The inverted index is a map where each token (word) maps to a list of document IDs containing that word, allowing efficient searching.
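
As defined in index.go, the struct pairs the inverted index with the document store, guarded by a read-write mutex:

// Index represents an inverted index and stores documents by ID.
type Index struct {
    mu       sync.RWMutex
    index    map[string][]int
    docStore map[int]Document // Store documents by their ID for retrieval
}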

Code Snippet: Adding Documents to Index

// AddDocument adds a single document to the index.
func (idx *Index) AddDocument(doc Document) {
    idx.mu.Lock()
    defer idx.mu.Unlock()

    idx.docStore[doc.ID] = doc
    for _, token := range analyze(doc.Text) {
        ids := idx.index[token]
        if ids != nil && ids[len(ids)-1] == doc.ID {
            continue
        }
        idx.index[token] = append(ids, doc.ID)
    }
}

The AddDocument function:

  • Locks the index to prevent race conditions during concurrent writes.
  • Stores documents by ID in docStore, enabling full-text retrieval by ID.
  • Builds the inverted index by appending the document’s ID to each token’s list; the last-ID check skips tokens that have already recorded this document, keeping each posting list free of duplicates.

Storing and Retrieving Indexes

To allow persistent use of the index, the Save and Load methods in index.go use Go’s encoding/gob package for serialization and deserialization.

// Save serializes both the index and docStore to a file.
func (idx *Index) Save(filePath string) error {
    idx.mu.RLock()
    defer idx.mu.RUnlock()

    file, err := os.Create(filePath)
    if err != nil {
        return err
    }
    defer file.Close()

    encoder := gob.NewEncoder(file)
    if err := encoder.Encode(idx.index); err != nil {
        return err
    }
    if err := encoder.Encode(idx.docStore); err != nil {
        return err
    }

    return nil
}
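
The matching Load method restores both structures, so a previously built index can be reused without reprocessing the dump:

// Load deserializes both the index and docStore from a file.
func (idx *Index) Load(filePath string) error {
    idx.mu.Lock()
    defer idx.mu.Unlock()

    file, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer file.Close()

    decoder := gob.NewDecoder(file)
    if err := decoder.Decode(&idx.index); err != nil {
        return err
    }
    if err := decoder.Decode(&idx.docStore); err != nil {
        return err
    }

    return nil
}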

d. Concurrent Document Indexing with Streaming

Using the AddStreamed method, documents from docChan are indexed concurrently. Multiple goroutines handle the document addition process, significantly speeding up indexing for large datasets.

Code Snippet: AddStreamed

// AddStreamed adds documents from a channel to the index concurrently.
func (idx *Index) AddStreamed(docChan <-chan Document) {
    var wg sync.WaitGroup
    numWorkers := 4 // Number of concurrent workers

    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for doc := range docChan {
                idx.AddDocument(doc)
            }
        }()
    }
    wg.Wait()
}

This method:

  • Spins up multiple goroutines to process documents in parallel.
  • Uses a WaitGroup to block until all workers have finished, ensuring every document is processed before the method returns.
  • A subtlety worth noting: with several workers indexing at once, document IDs can be appended to a token’s posting list out of order. The search-time intersection assumes sorted lists, so a production version would sort the posting lists once indexing completes.

e. Searching for Documents

The Search function in index.go allows for efficient retrieval of document IDs matching a search query by finding documents that contain all query tokens.

Code Snippet: Search

// Search looks up document IDs matching the given query.
func (idx *Index) Search(text string) []int {
    var result []int
    for _, token := range analyze(text) {
        idx.mu.RLock()
        if ids, ok := idx.index[token]; ok {
            if result == nil {
                result = ids
            } else {
                result = Intersection(result, ids)
            }
        } else {
            idx.mu.RUnlock()
            return nil
        }
        idx.mu.RUnlock()
    }
    return result
}

The Search function:

  • Analyzes the query text into tokens, then checks whether each token exists in the index.
  • Intersects the ID lists token by token, returning only documents that contain every term in the query; the Intersection helper it relies on is shown below.
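
The Intersection helper (from index.go) is a standard two-pointer merge over two sorted ID slices:

// Intersection returns the set intersection of two sorted integer slices.
func Intersection(a, b []int) []int {
    var i, j int
    result := []int{}
    for i < len(a) && j < len(b) {
        if a[i] < b[j] {
            i++
        } else if a[i] > b[j] {
            j++
        } else {
            result = append(result, a[i])
            i++
            j++
        }
    }
    return result
}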

Displaying Search Results

The PrintResultsTable method formats and displays the matched document IDs with titles and text snippets for readability.

// PrintResultsTable formats and prints search results in a table.
func (idx *Index) PrintResultsTable(matchedIDs []int) {
    fmt.Printf("\n%-10s | %-40s | %s\n", "Doc ID", "Title", "Snippet")
    fmt.Println(strings.Repeat("-", 105))
    for _, id := range matchedIDs {
        if doc, found := idx.GetDocumentByID(id); found {
            snippet := doc.Text
            if len(snippet) > 50 {
                snippet = snippet[:47] + "..."
            }
            fmt.Printf("%-10d | %-40s | %s\n", doc.ID, doc.Title, snippet)
        }
    }
    fmt.Println(strings.Repeat("-", 105))
}

This table view is helpful for quickly verifying results, as it includes a snippet of each matching document's text. (One caveat: the snippet is truncated by byte count, so a multi-byte character could be split mid-rune; rune-aware slicing would be safer.)


8. Future Scope

This full-text search engine is a solid foundation for building a comprehensive search system, but there are several enhancements that could make it even more powerful and feature-rich:

1. Distributed Processing

  • Goal: Scaling the search engine to handle an even larger volume of data by distributing the workload across multiple machines.
  • Implementation: By distributing document indexing and querying across servers, the search engine can handle more queries and larger datasets. Technologies like gRPC or HTTP/2 could facilitate efficient communication between distributed nodes.

2. Advanced Query Support

  • Goal: Allow users to perform more sophisticated searches using operators (e.g., AND, OR, NOT) and proximity queries.
  • Implementation: Extend the indexing algorithm to support complex queries, such as exact phrases and wildcard searches, enhancing search flexibility.

3. Real-Time Index Updates

  • Goal: Enable the engine to update indexes dynamically as new documents are added.
  • Implementation: A real-time indexing feature would allow new documents to be added without requiring a complete reindex, making it ideal for applications that handle frequently updated content.

4. Machine Learning Integration for Ranking

  • Goal: Improve result relevancy by incorporating machine learning models to rank documents based on user behavior and relevance.
  • Implementation: By analyzing past search data and user preferences, the engine could prioritize more relevant documents, making search results more accurate and personalized.

5. Improved Natural Language Processing (NLP)

  • Goal: Use NLP to improve tokenization, stemming, and synonym support, enabling the engine to handle user queries more intuitively.
  • Implementation: Leveraging NLP techniques would enhance query matching by accounting for synonyms, plurals, and context, improving the engine’s ability to interpret user intent.

9. Results Screenshot

(Screenshot: search results table for the query text “Matrix”.)


10. Conclusion

Building a full-text search engine using Go is a practical project for understanding complex programming concepts like concurrency, multithreading, and data streaming. This project demonstrates Go’s ability to handle large datasets efficiently while maintaining high performance. By focusing on efficient indexing and multithreaded processing, this search engine achieves impressive speed and memory efficiency.

Through this process, we explored critical components of search engines—streaming, tokenization, inverted indexing, and multithreading—and saw how these elements come together to create a responsive and resource-conscious search solution. With potential enhancements like distributed processing and NLP integration, this search engine can evolve further, offering even greater capabilities.

The experience gained here not only showcases Go’s performance but also serves as a foundation for building scalable, real-world applications that can meet the demands of data-heavy environments.

Full Source Code

document.go

package utils

import (
    "compress/gzip"
    "encoding/xml"
    "os"
)

// Document represents a Wikipedia abstract dump document.
type Document struct {
    Title string `xml:"title"`
    URL   string `xml:"url"`
    Text  string `xml:"abstract"`
    ID    int
}

// LoadDocuments loads documents from a gzip-compressed XML file and sends them through a channel.
func LoadDocuments(path string, docChan chan<- Document) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()

    gz, err := gzip.NewReader(f)
    if err != nil {
        return err
    }
    defer gz.Close()

    dec := xml.NewDecoder(gz)
    dump := struct {
        Documents []Document `xml:"doc"`
    }{}

    if err := dec.Decode(&dump); err != nil {
        return err
    }

    for i, doc := range dump.Documents {
        doc.ID = i
        docChan <- doc
    }
    return nil
}
filter.go

package utils

import (
    "strings"

    snowballeng "github.com/kljensen/snowball/english"
)

// lowercaseFilter returns a slice of tokens normalized to lower case.
func lowercaseFilter(tokens []string) []string {
    r := make([]string, len(tokens))
    for i, token := range tokens {
        r[i] = strings.ToLower(token)
    }
    return r
}

// stopwordFilter returns a slice of tokens with stop words removed.
func stopwordFilter(tokens []string) []string {
    var stopwords = map[string]struct{}{
        "a": {}, "and": {}, "be": {}, "have": {}, "i": {},
        "in": {}, "of": {}, "that": {}, "the": {}, "to": {},
    }
    r := make([]string, 0, len(tokens))
    for _, token := range tokens {
        if _, ok := stopwords[token]; !ok {
            r = append(r, token)
        }
    }
    return r
}

// stemmerFilter returns a slice of stemmed tokens.
func stemmerFilter(tokens []string) []string {
    r := make([]string, len(tokens))
    for i, token := range tokens {
        r[i] = snowballeng.Stem(token, false)
    }
    return r
}
go.mod

module github.com/ravikisha/go-full-text-search

go 1.14

require (
    github.com/kljensen/snowball v0.6.0
    github.com/stretchr/testify v1.6.1
)
index.go

package utils

import (
    "encoding/gob"
    "fmt"
    "os"
    "strings"
    "sync"
)

// Index represents an inverted index and stores documents by ID.
type Index struct {
    mu       sync.RWMutex
    index    map[string][]int
    docStore map[int]Document // Store documents by their ID for retrieval
}

// GetTop10Indexes prints the first 10 entries of the inverted index.
func (idx *Index) GetTop10Indexes() {
    i := 0
    for k, v := range idx.index {
        if i == 10 {
            break
        }
        fmt.Println(k, v)
        i++
    }
}

// GetTop10Documents prints the first 10 documents in the document store.
func (idx *Index) GetTop10Documents() {
    i := 0
    for k, v := range idx.docStore {
        if i == 10 {
            break
        }
        fmt.Println(k, v)
        i++
    }
}

// NewIndex creates and returns a new Index.
func NewIndex() *Index {
    return &Index{
        index:    make(map[string][]int),
        docStore: make(map[int]Document),
    }
}

// Save serializes both the index and docStore to a file.
func (idx *Index) Save(filePath string) error {
    idx.mu.RLock()
    defer idx.mu.RUnlock()

    file, err := os.Create(filePath)
    if err != nil {
        return err
    }
    defer file.Close()

    encoder := gob.NewEncoder(file)
    // Encode both index and docStore
    if err := encoder.Encode(idx.index); err != nil {
        return err
    }
    if err := encoder.Encode(idx.docStore); err != nil {
        return err
    }
    return nil
}

// Load deserializes both the index and docStore from a file.
func (idx *Index) Load(filePath string) error {
    idx.mu.Lock()
    defer idx.mu.Unlock()

    file, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer file.Close()

    decoder := gob.NewDecoder(file)
    // Decode both index and docStore
    if err := decoder.Decode(&idx.index); err != nil {
        return err
    }
    if err := decoder.Decode(&idx.docStore); err != nil {
        return err
    }
    return nil
}

// AddDocument adds a single document to the index.
func (idx *Index) AddDocument(doc Document) {
    idx.mu.Lock()
    defer idx.mu.Unlock()

    idx.docStore[doc.ID] = doc
    for _, token := range analyze(doc.Text) {
        ids := idx.index[token]
        if ids != nil && ids[len(ids)-1] == doc.ID {
            continue
        }
        idx.index[token] = append(ids, doc.ID)
    }
}

// AddStreamed adds documents from a channel to the index concurrently.
func (idx *Index) AddStreamed(docChan <-chan Document) {
    var wg sync.WaitGroup
    numWorkers := 4 // Number of concurrent workers

    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for doc := range docChan {
                idx.AddDocument(doc)
            }
        }()
    }
    wg.Wait()
}

// Search looks up document IDs matching the given query.
func (idx *Index) Search(text string) []int {
    var result []int
    for _, token := range analyze(text) {
        idx.mu.RLock()
        if ids, ok := idx.index[token]; ok {
            if result == nil {
                result = ids
            } else {
                result = Intersection(result, ids)
            }
        } else {
            idx.mu.RUnlock()
            return nil
        }
        idx.mu.RUnlock()
    }
    return result
}

// Intersection returns the set intersection of two sorted integer slices.
func Intersection(a, b []int) []int {
    var i, j int
    result := []int{}
    for i < len(a) && j < len(b) {
        if a[i] < b[j] {
            i++
        } else if a[i] > b[j] {
            j++
        } else {
            result = append(result, a[i])
            i++
            j++
        }
    }
    return result
}

// GetDocumentByID retrieves a document by its ID from the document store.
func (idx *Index) GetDocumentByID(id int) (Document, bool) {
    idx.mu.RLock()
    defer idx.mu.RUnlock()
    doc, exists := idx.docStore[id]
    return doc, exists
}

// PrintResultsTable formats and prints search results in a table.
func (idx *Index) PrintResultsTable(matchedIDs []int) {
    fmt.Printf("\n%-10s | %-40s | %s\n", "Doc ID", "Title", "Snippet")
    fmt.Println(strings.Repeat("-", 105))
    for _, id := range matchedIDs {
        if doc, found := idx.GetDocumentByID(id); found {
            snippet := doc.Text
            if len(snippet) > 50 {
                snippet = snippet[:47] + "..."
            }
            fmt.Printf("%-10d | %-40s | %s\n", doc.ID, doc.Title, snippet)
        }
    }
    fmt.Println(strings.Repeat("-", 105))
}
main.go

package main

import (
    "flag"
    "log"
    "strings"
    "sync"
    "time"

    utils "github.com/ravikisha/go-full-text-search/utils"
)

func main() {
    var query, indexPath, files string
    flag.StringVar(&query, "q", "small cat", "search query")
    flag.StringVar(&indexPath, "index", "index.gob", "path to save/load the index")
    flag.StringVar(&files, "p", "enwiki-latest-abstract1.xml.gz", "comma-separated list of wiki abstract dump paths")
    flag.Parse()

    if files == "" {
        log.Fatal("Please provide at least one file path using the -p flag.")
    }
    filePaths := strings.Split(files, ",")

    log.Println("Running Full Text Search")
    idx := utils.NewIndex()
    start := time.Now()

    // Attempt to load index from file if available
    if err := idx.Load(indexPath); err == nil {
        log.Printf("Loaded index from %s in %v", indexPath, time.Since(start))
    } else {
        // Load and index documents concurrently from multiple files
        var wg sync.WaitGroup
        docChan := make(chan utils.Document, 100)

        // Goroutine to handle document indexing as documents arrive.
        // The done channel lets main wait for indexing to finish before saving.
        done := make(chan struct{})
        go func() {
            idx.AddStreamed(docChan)
            close(done)
        }()

        // Start loading each file in a separate goroutine
        for _, path := range filePaths {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                log.Printf("Loading documents from %s", p)
                if err := utils.LoadDocuments(p, docChan); err != nil {
                    log.Printf("Failed to load documents from %s: %v", p, err)
                }
            }(path)
        }

        wg.Wait()      // Wait for all file loaders
        close(docChan) // Close document channel after loading completes
        <-done         // Wait for the indexing workers to drain the channel
        log.Printf("Indexed documents in %v", time.Since(start))

        // Save index for future use
        if err := idx.Save(indexPath); err != nil {
            log.Fatalf("Failed to save index: %v", err)
        }
        log.Printf("Saved index to %s", indexPath)
    }

    // Search for documents matching the query
    matchedIDs := idx.Search(query)
    log.Printf("Search found %d documents in %v", len(matchedIDs), time.Since(start))
    idx.PrintResultsTable(matchedIDs)
}
tokenizer.go

package utils

import (
    "strings"
    "unicode"
)

// tokenize returns a slice of tokens for the given text.
func tokenize(text string) []string {
    return strings.FieldsFunc(text, func(r rune) bool {
        // Split on any character that is not a letter or a number.
        return !unicode.IsLetter(r) && !unicode.IsNumber(r)
    })
}

// analyze analyzes the text and returns a slice of tokens.
func analyze(text string) []string {
    tokens := tokenize(text)
    tokens = lowercaseFilter(tokens)
    tokens = stopwordFilter(tokens)
    tokens = stemmerFilter(tokens)
    return tokens
}
