Building My Smart 2nd Brain, Part 4: The Art of Document Searching

I originally planned to wrap up this series in this part. However, while reviewing my code, I realized there’s a section that could greatly benefit from some additional attention and refinement before concluding this mini-project.

That section is about document searching.

Chunking and Storing: Connecting Pieces Together

Let's see what our store_node does now:

def store_node(self, state: KnowledgeState):

        if state.embeddings and state.chunks:
            try:
                # Initialize vectorstore if not provided
                if not self.vectorstore:
                    self.vectorstore = Chroma(
                        collection_name="smart_second_brain",
                        embedding_function=self.embedding_model,
                        persist_directory=self.chromadb_dir
                    )

                # Prepare metadata for each chunk
                # Note: ChromaDB doesn't accept lists in metadata, so we join categories
                metadatas = [
                    {
                        "source": state.source or "unknown",
                        "categories": ", ".join(state.categories) if state.categories else "general",
                        "chunk_id": i
                    }
                    for i in range(len(state.chunks))
                ]

                # Store chunks with metadata in ChromaDB
                self.vectorstore.add_texts(
                    texts=state.chunks,
                    metadatas=metadatas
                )

                # Note: Newer ChromaDB versions don't require explicit persist()
                # Data is automatically persisted when using persist_directory

                state.status = "stored"
                state.logs = (state.logs or []) + [
                    f"Stored {len(state.chunks)} chunks in ChromaDB with categories {state.categories or ['general']}"
                ]
            except Exception as e:
                state.status = "error"
                state.logs = (state.logs or []) + [f"Storing failed: {e}"]
        else:
            state.logs = (state.logs or []) + ["No embeddings/chunks to store"]

        return state

The program uses a predefined list of categories, supplied as a state input, for the metadata. The chunk size is fixed by the RecursiveCharacterTextSplitter.

            # Prepare metadata for each chunk
            # Note: ChromaDB doesn't accept lists in metadata, so we join categories

            metadatas = [
                {
                    "source": state.source or "unknown",
                    "categories": ", ".join(state.categories) if state.categories else "general",
                    "chunk_id": i
                }
                for i in range(len(state.chunks))
            ]

            # Store chunks with metadata in ChromaDB
            self.vectorstore.add_texts(
                texts=state.chunks,
                metadatas=metadatas
            )

The current version employs a very basic chunking and storage mechanism:

  • There are no document identifiers, meaning chunks from multiple documents are indistinguishable.
  • Chunk metadata is minimal—just a concatenated string of category keywords and a numeric chunk ID.

This bare-bones approach introduces several limitations:

  • No persistent identifiers: Without stable doc_id or chunk_id, it’s difficult to track, update, or delete individual chunks.

  • Lossy metadata: Metadata like categories is compressed into comma-separated strings, which makes it unreliable for filtering or matching individual labels.

  • Lack of safeguards: There are no mechanisms to handle missing metadata.

Overall, this approach makes it challenging to manage chunks effectively, trace their origins, or maintain metadata integrity.

Therefore, I made the following changes:

        if state.raw_document:
            doc_id = state.doc_id or f"doc_{uuid.uuid4()}"
            state.doc_id = doc_id
            state.ingested_at = state.ingested_at or datetime.datetime.utcnow().isoformat()

            splitter = RecursiveCharacterTextSplitter(
                chunk_size=500,
                chunk_overlap=50,
            )
            chunks = splitter.split_text(state.raw_document)
            state.chunks = chunks
            state.chunk_metadata = []

            for index, chunk in enumerate(chunks):
                metadata = {
                    "doc_id": doc_id,
                    "chunk_index": index,
                    "chunk_id": f"{doc_id}::chunk_{index}",
                    "source": state.source or "unknown",
                    "categories": state.categories or [],
                    "keywords": (state.auto_generated_keywords or []) or (
                        state.metadata.get("keywords") if state.metadata else []
                    ),
                    "ingested_at": state.ingested_at,
                    "knowledge_type": state.knowledge_type or "conversational",
                    "char_start": state.raw_document.find(chunk),
                }
                metadata["char_end"] = metadata["char_start"] + len(chunk) if metadata["char_start"] >= 0 else None
                state.chunk_metadata.append(metadata)

            state.logs = (state.logs or []) + [
                f"Chunked document into {len(state.chunks)} chunks with doc_id={doc_id}"
            ]
        else:
            state.logs = (state.logs or []) + ["No raw_document found for chunking"]

        return state
  • There is a document id that connects every chunk from the same document.
  • Each chunk also has its own chunk id within the document.
  • Keywords are stored as a proper list instead of a concatenated string.
  • Character offsets (char_start, char_end) are recorded as well. A sample metadata entry is shown below.
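
For illustration, a single entry in state.chunk_metadata now looks roughly like this (all values below are invented for the example):

# Illustrative example of one chunk's metadata (values are invented, not real data)
example_chunk_metadata = {
    "doc_id": "doc_3f2b9c1e-8a41-4d2a-9c7e-1f0d2b6a5e90",
    "chunk_index": 0,
    "chunk_id": "doc_3f2b9c1e-8a41-4d2a-9c7e-1f0d2b6a5e90::chunk_0",
    "source": "my_notes.pdf",
    "categories": ["ai", "retrieval"],
    "keywords": ["embedding", "chunking", "metadata"],
    "ingested_at": "2025-01-15T08:30:00",
    "knowledge_type": "conversational",
    "char_start": 0,
    "char_end": 498,
}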

And the store_node needs to be refined as well:

        if state.embeddings and state.chunks:
            try:
                # Initialize vectorstore if not provided
                if not self.vectorstore:
                    self.vectorstore = Chroma(
                        collection_name=self.collection_name,
                        embedding_function=self.embedding_model,
                        persist_directory=self.chromadb_dir
                    )

                metadatas = []
                ids = []
                for idx, chunk in enumerate(state.chunks):
                    base_metadata = state.chunk_metadata[idx] if state.chunk_metadata and idx < len(state.chunk_metadata) else {}
                    metadata = {
                        "doc_id": base_metadata.get("doc_id", state.doc_id or "unknown"),
                        "chunk_index": base_metadata.get("chunk_index", idx),
                        "chunk_id": base_metadata.get("chunk_id", f"chunk_{idx}"),
                        "source": base_metadata.get("source", state.source or "unknown"),
                        "categories": base_metadata.get("categories", state.categories or []),
                        "ingested_at": base_metadata.get("ingested_at", state.ingested_at),
                        "knowledge_type": base_metadata.get("knowledge_type", state.knowledge_type or "conversational"),
                        "char_start": base_metadata.get("char_start"),
                        "char_end": base_metadata.get("char_end"),
                    }
                    metadata.update(state.metadata or {})

                    safe_metadata = {}
                    for key, value in metadata.items():
                        if isinstance(value, (str, int, float, bool)) or value is None:
                            safe_metadata[key] = value
                        elif isinstance(value, list):
                            safe_metadata[key] = ", ".join(str(v) for v in value)
                        else:
                            safe_metadata[key] = str(value)

                    metadatas.append(safe_metadata)
                    ids.append(metadata["chunk_id"])

                documents = [
                    Document(page_content=chunk, metadata=meta)
                    for chunk, meta in zip(state.chunks, metadatas)
                ]

                # Store chunks in vector store and update hybrid retriever if available
                if self.vectorstore:
                    if hasattr(self.vectorstore, "add_documents"):
                        self.vectorstore.add_documents(documents, ids=ids)
                    elif hasattr(self.vectorstore, "add_texts"):
                        self.vectorstore.add_texts([d.page_content for d in documents], metadatas=metadatas, ids=ids)

                if self.retriever:
                    self.retriever.add_documents(documents)

                # Note: Newer ChromaDB versions don't require explicit persist()
                # Data is automatically persisted when using persist_directory

                state.status = "stored"
                state.logs = (state.logs or []) + [
                    f"Stored {len(state.chunks)} chunks in ChromaDB with categories {state.categories or ['general']}"
                ]
            except Exception as e:
                state.status = "error"
                state.logs = (state.logs or []) + [f"Storing failed: {e}"]
        else:
            state.logs = (state.logs or []) + ["No embeddings/chunks to store"]

  • The metadata for each chunk is now a proper structured object that carries all relevant fields, with lists flattened to strings only at the ChromaDB boundary.
  • LangChain's Document object is used for storing the chunks, so the code now calls add_documents instead of the earlier add_texts. This keeps the vector store aligned with LangChain's document model.

Preprocessing for Keyword Extraction

The current version requires an explicitly specified list of keywords for the metadata. For a single PDF that may be fine, but what about a batch of documents covering different topics?

We are building an AI system; leaving such a pain point unresolved is something I cannot accept.

To handle this, I added an extra node to the ingestion flow, called 'preprocess':

        # =============================================================================
        # INGESTION BRANCH
        # =============================================================================

        # Document processing pipeline
        graph.add_edge("preprocess", "chunk")
        graph.add_edge("chunk", "embed")      # Chunk -> Embed
        graph.add_edge("embed", "store")     # Embed -> Store
        graph.add_edge("store", END)         # Store -> End

The preprocess node activates only during ingestion requests when auto_preprocess is true. It reads the raw document, tokenizes it—preferring spaCy and falling back to a regex tokenizer if spaCy is unavailable—filters the resulting tokens, and builds frequency counts. The highest-frequency survivors become up to ten auto-generated keywords and five categories, which are merged into the state’s metadata before logging a summary for downstream stages.

def preprocess_node(self, state: KnowledgeState):
        """
        Perform optional automated metadata extraction prior to chunking.

        When auto_preprocess is enabled, this node analyses the raw document to
        infer keywords and categories. When disabled, it ensures the caller has
        supplied sufficient metadata; otherwise the ingestion run is skipped.
        """
        if state.query_type != "ingest":
            return state

        if not state.raw_document:
            state.logs = (state.logs or []) + ["⚠️ No raw document present for preprocessing"]
            return state

        if not state.auto_preprocess:
            has_user_metadata = bool((state.categories or []) or (state.metadata or {}))
            if not has_user_metadata:
                state.status = "skipped_no_metadata"
                state.logs = (state.logs or []) + [
                    "⚠️ Ingestion skipped: auto_preprocess disabled and no metadata provided"
                ]
            return state

        # --- Tokenize and normalize text ---
        filtered_tokens: list[str] = []
        nlp = get_spacy_nlp()
        if nlp is not None:
            doc = nlp(state.raw_document)
            for token in doc:
                lemma = token.lemma_.lower().strip()
                if (
                    lemma
                    and lemma.isalpha()
                    and len(lemma) >= 4
                    and lemma not in SKLEARN_STOPWORDS
                ):
                    filtered_tokens.append(lemma)
        else:
            text = state.raw_document.lower()
            tokens = re.findall(r"[a-zA-Z]{4,}", text)
            filtered_tokens = [token for token in tokens if token not in SKLEARN_STOPWORDS]

        if not filtered_tokens:
            state.logs = (state.logs or []) + [
                "ℹ️ Auto preprocessing found no candidate keywords"
            ]
            return state

        counter = Counter(filtered_tokens)
        keywords = [word for word, _ in counter.most_common(10)]
        categories = keywords[:5]

        state.auto_generated_keywords = keywords
        state.auto_generated_categories = categories

        # Merge inferred categories with any caller-supplied ones while preserving order
        existing_categories = state.categories or []
        merged_categories = list(dict.fromkeys(existing_categories + categories))
        state.categories = merged_categories

        state.metadata = state.metadata or {}
        if keywords:
            state.metadata.setdefault("keywords", ", ".join(keywords))
            state.metadata.setdefault("auto_generated_keywords", ", ".join(keywords))
        if merged_categories:
            state.metadata.setdefault("categories", ", ".join(merged_categories))
            state.metadata.setdefault("auto_generated_categories", ", ".join(categories))
        state.metadata.setdefault("auto_preprocess", True)

        state.logs = (state.logs or []) + [
            f"🔍 Auto preprocessing inferred keywords={keywords[:5]} categories={categories[:3]}"
        ]

        return state


A curated stopword list is essential because common glue words (“the,” “with,” “this,” etc.) dominate naive frequency counts yet carry little semantic value. Stripping them away ensures the remaining tallies highlight genuinely informative terms, yielding cleaner metadata and more accurate retrieval filters.
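
As a quick standalone illustration of that effect (a toy example, not project code):

from collections import Counter
import re

text = "The retriever returns the chunks that match the query with the highest scores."
tokens = re.findall(r"[a-zA-Z]{2,}", text.lower())

stopwords = {"the", "that", "with"}  # tiny stand-in for a real stopword list

print(Counter(tokens).most_common(3))
# Unfiltered, "the" dominates the counts: [('the', 4), ...]

filtered = [t for t in tokens if t not in stopwords]
print(Counter(filtered).most_common(3))
# After filtering, informative terms (retriever, returns, chunks, ...) surface instead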

spaCy is brought in because its language model provides well-tested tokenization and morphological analysis—capabilities that lightweight regex tokenizers lack. When spaCy loads successfully it supplies part-of-speech tags and lemmas, enabling the node to recognize tokens accurately, respect context, and normalize linguistic variants. If the model isn’t available the workflow still proceeds, but with reduced sophistication.

spaCy lemmatization specifically converts each token to its base dictionary form while considering grammatical context. For example, “running,” “ran,” and “runs” all normalize to “run,” and context-sensitive cases like “better” as an adjective become “good.” This consolidation prevents inflectional variants from fragmenting the term frequencies that seed keyword and category extraction.

Raw sentence:
"Analysts were studying studies of the companies' analyses while runners ran, running into numerous hurdles."

Raw tokens (after simple cleanup):
analysts, were, studying, studies, of, the, companies, analyses, while, runners, ran, running, into, numerous, hurdles

Unique raw tokens: 15

Lemmas (spaCy-style lemmatization):
analyst, be, study, study, of, the, company, analysis, while, runner, run, run, into, numerous, hurdle

Unique lemmas: 13


In the end, the preprocess_node enriches the document's keyword list, which makes embedding and storing batches of documents much easier.

Retrieval: Another Side of the Coin

The current version uses the standard retriever provided by the vector store and performs a similarity search for the top 5 documents.

        try:
            # Try different retrieval strategies in order of preference
            if self.retriever:
                logger.info(f"🔍 Retrieving docs for query: {state.user_input}")
                results = self.retriever.get_relevant_documents(state.user_input)
            elif self.vectorstore:
                logger.info(f"🔍 Using vectorstore.similarity_search for query: {state.user_input}")
                results = self.vectorstore.similarity_search(state.user_input, k=5)
            else:
                results = []

            # Format retrieved documents for downstream processing
            state.retrieved_docs = [{"content": r.page_content, "metadata": r.metadata} for r in results]
            state.logs = (state.logs or []) + [f"Retrieved {len(state.retrieved_docs)} docs"]

        except Exception as e:
            logger.error(f"❌ Retrieval failed: {e}")
            state.retrieved_docs = []
            state.logs = (state.logs or []) + [f"Retrieval error: {str(e)}"]


The default get_relevant_documents call in LangChain relies on embedding-based similarity search. In practice, it follows these steps (a rough manual equivalent is sketched after the list):

  1. Convert the query to an embedding – The same embedding model used at ingestion time (e.g., OpenAI, Azure, etc.) transforms the user’s question into a high-dimensional vector.
  2. Perform a nearest-neighbor lookup – The retriever asks the vector database (Chroma in this workflow) for the k stored vectors that sit closest to the query vector under a similarity metric such as cosine similarity or inner product.
  3. Return the top matches as Document objects – Each hit carries both the chunk text and its metadata, enabling downstream filtering or prompting.
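
Written out by hand, the same steps look roughly like the sketch below. It assumes vectorstore is the Chroma instance and embedding_model is the embedding model created at ingestion time; it is illustrative rather than the project's actual retrieval code.

# Rough manual equivalent of retriever.get_relevant_documents(query)
query = "How does the ingestion pipeline chunk documents?"

# 1) Embed the query with the same model used at ingestion time
query_vector = embedding_model.embed_query(query)

# 2) Nearest-neighbour lookup in Chroma using that vector
results = vectorstore.similarity_search_by_vector(query_vector, k=5)

# 3) Each hit carries the chunk text plus its metadata
for doc in results:
    print(doc.metadata.get("chunk_id"), doc.page_content[:80])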

However, relying solely on embedding-based similarity search has the following problems:

  1. Missing exact matches when the embedding fails to capture critical keywords, acronyms, or newly coined terms.
  2. Returning noisy passages because long or multi-topic chunks get averaged into a single vector.
  3. Losing document context, as chunks are treated independently with no notion of surrounding structure.

In a real production setup, a hybrid search style is preferred, which is what I introduce below:

def retriever_node(self, state: KnowledgeState):
        """
        Retrieve relevant documents from the vector database.

        This node performs similarity search to find the most relevant
        documents for a given user query. Supports multiple retrieval
        strategies with fallback mechanisms.

        Args:
            state: KnowledgeState object containing user_input

        Returns:
            KnowledgeState: Updated state with retrieved_docs array

        Retrieval Strategies:
            1. Dedicated retriever (if configured)
            2. Vectorstore similarity search (fallback)
            3. Empty results (if both unavailable)

        Processing:
            - Performs semantic search using user query
            - Retrieves top-k most relevant documents
            - Formats results for downstream processing
            - Handles retrieval errors gracefully
            - Logs retrieval operations and statistics
        """
        if not state.user_input:
            state.logs = (state.logs or []) + ["No user_input provided for retrieval"]
            state.retrieved_docs = []
            return state

        try:
            retrieval_options = {
                "query": state.user_input,
                "k": (state.retrieval_options or {}).get("k", 8),
                "filters": state.metadata_filters or {},
                "use_hybrid": (state.retrieval_options or {}).get("use_hybrid", True),
                "min_score": (state.retrieval_options or {}).get("min_score", 0.1),
                "rerank_top_k": (state.retrieval_options or {}).get("rerank_top_k", 12),
            }

            retrieval_log = {
                "query": state.user_input,
                "filters": retrieval_options["filters"],
                "use_hybrid": retrieval_options["use_hybrid"],
                "min_score": retrieval_options["min_score"],
                "timestamp": datetime.datetime.utcnow().isoformat(),
            }

            if self.retriever:
                logger.info(f"🔍 Retrieving docs for query: {state.user_input}")
                if hasattr(self.retriever, "retrieve"):
                    results = self.retriever.retrieve(
                        query=retrieval_options["query"],
                        k=retrieval_options["k"],
                        filters=retrieval_options["filters"],
                        use_hybrid=retrieval_options["use_hybrid"],
                        min_score=retrieval_options["min_score"],
                    )
                elif hasattr(self.retriever, "get_relevant_documents"):
                    results = self.retriever.get_relevant_documents(retrieval_options["query"])
                else:
                    results = []

                # Optionally apply reranking if supported
                if (
                    retrieval_options["rerank_top_k"]
                    and hasattr(self.retriever, "retrieve_with_reranking")
                ):
                    results = self.retriever.retrieve_with_reranking(
                        query=retrieval_options["query"],
                        k=retrieval_options["k"],
                        rerank_top_k=retrieval_options["rerank_top_k"],
                    )

                formatted_results = []
                top_results = []
                for result in results:
                    doc = result.document if hasattr(result, "document") else result
                    score = result.score if hasattr(result, "score") else None
                    formatted_results.append({
                        "content": doc.page_content,
                        "metadata": doc.metadata,
                        "score": score,
                    })
                    top_results.append({
                        "chunk_id": doc.metadata.get("chunk_id"),
                        "doc_id": doc.metadata.get("doc_id"),
                        "source": doc.metadata.get("source"),
                        "score": score,
                    })
                state.retrieved_docs = formatted_results
                retrieval_log["results"] = top_results

            elif self.vectorstore:
                logger.info(f"🔍 Using vectorstore.similarity_search for query: {state.user_input}")
                results = self.vectorstore.similarity_search(
                    retrieval_options["query"],
                    k=retrieval_options["k"],
                    filter=retrieval_options["filters"] or None,
                )
                state.retrieved_docs = [
                    {"content": r.page_content, "metadata": r.metadata, "score": None}
                    for r in results
                ]
                retrieval_log["results"] = [
                    {
                        "chunk_id": r.metadata.get("chunk_id"),
                        "doc_id": r.metadata.get("doc_id"),
                        "source": r.metadata.get("source"),
                        "score": None,
                    }
                    for r in results
                ]
            else:
                state.retrieved_docs = []
                retrieval_log["results"] = []

            state.retrieval_log = (state.retrieval_log or []) + [retrieval_log]
            state.logs = (state.logs or []) + [f"Retrieved {len(state.retrieved_docs)} docs"]

        except Exception as e:
            logger.error(f"❌ Retrieval failed: {e}")
            state.retrieved_docs = []
            state.retrieval_log = (state.retrieval_log or []) + [{
                "query": state.user_input,
                "error": str(e),
                "timestamp": datetime.datetime.utcnow().isoformat(),
            }]
            state.logs = (state.logs or []) + [f"Retrieval error: {str(e)}"]

        return state


There are more retrieval options available:

            retrieval_options = {
                "query": state.user_input,
                "k": (state.retrieval_options or {}).get("k", 8),
                "filters": state.metadata_filters or {},
                "use_hybrid": (state.retrieval_options or {}).get("use_hybrid", True),
                "min_score": (state.retrieval_options or {}).get("min_score", 0.1),
                "rerank_top_k": (state.retrieval_options or {}).get("rerank_top_k", 12),
            }

Apart from query and k that we already know, here are four new parameters:

  • filters – metadata filter dict; usually state.metadata_filters coming from the API/UI (e.g., filter by source, categories, knowledge type).
  • use_hybrid – toggles the hybrid ensemble (semantic + BM25). Defaults True; if False it forces pure vector search.
  • min_score – minimum similarity threshold; results below this are dropped. Defaults to 0.1.
  • rerank_top_k – how many hits to pull before the optional reranker trims them back down to k. Here we grab 12, rerank, then return the best k. An example of how a caller might set these options follows below.
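
As an example, a caller could tighten the search like this before invoking the graph. This is a sketch that assumes KnowledgeState is a Pydantic-style model accepting these fields as keyword arguments:

# Hypothetical caller configuring retrieval behaviour on the state
state = KnowledgeState(
    user_input="What did I note about vector databases?",
    retrieval_options={
        "k": 5,              # number of chunks to return in the end
        "use_hybrid": True,  # semantic + BM25 ensemble
        "min_score": 0.2,    # drop weak matches
        "rerank_top_k": 15,  # candidates pulled before reranking down to k
    },
    metadata_filters={"knowledge_type": "conversational"},
)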

To provide these features, we cannot use the generic retriever returned by Chroma's as_retriever() function. Instead, I built a customized retriever called SmartDocumentRetriever.

A Smarter Document Retriever

The SmartDocumentRetriever performs the following:

  • runs the standard embedding-based similarity search,
  • runs a BM25 keyword search,
  • combines the results with a 0.7/0.3 weighting (0.7 for the embedding similarity search, 0.3 for BM25), as sketched below, and
  • finally reranks the retrieved documents.
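
This weighting maps naturally onto LangChain's EnsembleRetriever. The snippet below is a minimal sketch of the idea rather than the project's actual implementation, and it assumes a vectorstore and a list of documents already exist:

from langchain.retrievers import EnsembleRetriever
from langchain_community.retrievers import BM25Retriever

# Sketch: combine semantic (vector) search with BM25 keyword search,
# weighting the semantic side at 0.7 and BM25 at 0.3.
semantic_retriever = vectorstore.as_retriever(search_kwargs={"k": 8})
bm25_retriever = BM25Retriever.from_documents(documents)
bm25_retriever.k = 8

ensemble_retriever = EnsembleRetriever(
    retrievers=[semantic_retriever, bm25_retriever],
    weights=[0.7, 0.3],
)

results = ensemble_retriever.get_relevant_documents("vector database indexing")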

But what is BM25?

BM25 is a keyword-based search algorithm that ranks documents based on the relevance of their terms to a given query. It calculates a score by considering term frequency (TF), document length, and how rare a term is across the entire collection of documents to give more weight to important keywords.

BM25 shines when exact or near-exact word matches matter (names, codes, jargon), but it misses purely semantic matches (synonyms or paraphrases) because it relies on lexical overlap.
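
To make that lexical behaviour concrete, here is a tiny standalone example using LangChain's BM25Retriever (illustrative only, not part of the project code):

from langchain_community.retrievers import BM25Retriever
from langchain_core.documents import Document

docs = [
    Document(page_content="ChromaDB persists vectors under a persist_directory on local disk."),
    Document(page_content="BM25 ranks documents by term frequency and term rarity."),
    Document(page_content="Cats make wonderful companions."),
]

bm25 = BM25Retriever.from_documents(docs)
bm25.k = 2

# The exact token "persist_directory" is a strong lexical signal for BM25,
# so the first document should rank at the top here.
print(bm25.get_relevant_documents("persist_directory setting"))
# A pure paraphrase such as "where is the data saved?" shares no tokens with that
# document and would likely miss it, which is the gap the embedding search covers.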

To support BM25 search, the BM25 index must be built when documents are first ingested. It also means that if the application restarts, the index must be rebuilt.

def add_documents(self, documents: List[Document]) -> None:
        """
        Add documents to the vectorstore.

        Args:
            documents: List of Document objects to add
        """
        try:
            # Add to vectorstore
            self.vectorstore.add_documents(documents)

            # Update BM25 retriever
            if self.bm25_retriever is None:
                self.bm25_retriever = BM25Retriever.from_documents(documents)
            else:
                # Add new documents to existing BM25
                self.bm25_retriever.add_documents(documents)

            # Update ensemble retriever
            self._update_ensemble_retriever()

            logger.info(f"✅ Added {len(documents)} documents to retriever")

        except Exception as e:
            logger.error(f"❌ Failed to add documents: {e}")

When the MasterGraphBuilder is first created, all documents from the vector store are retrieved and handed to the SmartDocumentRetriever's hydrate_bm25() function to rebuild the index.
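
The hydrate_bm25() implementation isn't reproduced here, but conceptually it looks something like the sketch below: fetch every stored chunk back out of Chroma and rebuild the BM25 index from scratch. The attribute names follow the add_documents excerpt above; the real code may differ.

def hydrate_bm25(self) -> None:
    """Rebuild the in-memory BM25 index from chunks already persisted in Chroma (sketch only)."""
    # LangChain's Chroma wrapper exposes get(), which returns the raw stored texts and metadata
    stored = self.vectorstore.get(include=["documents", "metadatas"])
    documents = [
        Document(page_content=text, metadata=meta or {})
        for text, meta in zip(stored["documents"], stored["metadatas"])
    ]

    if documents:
        self.bm25_retriever = BM25Retriever.from_documents(documents)
        self._update_ensemble_retriever()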

SmartDocumentRetriever combines both BM25 and embedding-based similarity search to produce the best result.

Before returning the final list of documents, SmartDocumentRetriever will perform one more reranking step to make sure the results are most relevant. It applies a simple, hand-coded reranker that nudges longer, categorized, time-stamped chunks upward.

It is just a very simple, home-grown reranker. For production use, you may want a more robust approach such as a cross-encoder or an LLM-based reranker.

def retrieve_with_reranking(
        self,
        query: str,
        k: int = 5,
        rerank_top_k: int = 20
    ) -> List[RetrievalResult]:
        """
        Retrieve documents with re-ranking for better relevance.

        Args:
            query: Search query
            k: Final number of documents to return
            rerank_top_k: Number of documents to retrieve initially for re-ranking

        Returns:
            Re-ranked list of RetrievalResult objects
        """
        # First, retrieve more documents than needed
        initial_results = self.retrieve(
            query=query,
            k=rerank_top_k,
            use_hybrid=True
        )

        if len(initial_results) <= k:
            return initial_results

        # Simple re-ranking based on content length and keyword matches
        # In practice, you might use a cross-encoder or other re-ranking model
        reranked_results = []
        for result in initial_results:
            # Boost score based on content quality indicators
            boost = 0.0

            # Prefer longer, more informative content
            content_length = len(result.document.page_content)
            if content_length > 500:
                boost += 0.1

            # Prefer content with categories/metadata
            if result.document.metadata.get("categories"):
                boost += 0.05

            # Prefer recent content (if timestamp available)
            if "timestamp" in result.document.metadata:
                boost += 0.05

            result.score = min(result.score + boost, 1.0)
            reranked_results.append(result)

        # Sort by new scores and return top k
        reranked_results.sort(key=lambda x: x.score, reverse=True)
        return reranked_results[:k]


Final Touch on UI

The UI part also got some refinement.

First of all, I added a configuration pane.

The most important options are "Enable Auto Metadata Preprocessing" and "Use Hybrid Retrieval", which are essential for running the preprocess_node and enabling the ensemble retrieval.

Another enhancement is citation support.

If an answer can be grounded in documents from the vector DB, a citation entry is generated alongside it, carrying the respective document ID and chunk ID. With further enhancements we could support instant referencing back to the source documents.
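
For illustration, a single citation entry attached to an answer might look like this (the field names are my own example rather than a fixed schema):

# Hypothetical shape of one citation entry returned with an answer
example_citation = {
    "doc_id": "doc_3f2b9c1e-8a41-4d2a-9c7e-1f0d2b6a5e90",
    "chunk_id": "doc_3f2b9c1e-8a41-4d2a-9c7e-1f0d2b6a5e90::chunk_3",
    "source": "my_notes.pdf",
    "score": 0.82,
}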

RAG is Challenging

Actually I didn’t anticipate that searching alone would lead to another long article. Along the way, I discovered that managing vector database searches efficiently is a far more challenging topic than I initially thought. Algorithms like TF-IDF and MMR (Maximum Marginal Relevance) quickly overwhelmed me, as someone who considers himself more of an application developer than a mathematician.

While working on the project, I found myself constantly asking my AI assistant to explain the meaning behind certain algorithms. I meticulously scrutinized every piece of code generated by AI tools, performing extensive checks to ensure everything made sense and worked as expected. At least I am confident that this is a working repository that functions as demonstrated.

And of course, I indeed learned a lot. :)

Let's take a short break. The next round will definitely be the final chapter. If you have been following the series since Part 1, I hope you have been enjoying it so far, and don't miss the last part.

See you next time.

(Source code will be shared when the entire series is completed.)
