<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Minh Le Duc</title>
    <description>The latest articles on DEV Community by Minh Le Duc (@minh_leduc).</description>
    <link>https://dev.to/minh_leduc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1249715%2F4eda7df9-670a-4969-b873-be2f800aa4d2.jpeg</url>
      <title>DEV Community: Minh Le Duc</title>
      <link>https://dev.to/minh_leduc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/minh_leduc"/>
    <language>en</language>
    <item>
      <title>How to build an autonomous news Generator with AI using Fluvio</title>
      <dc:creator>Minh Le Duc</dc:creator>
      <pubDate>Mon, 02 Sep 2024 07:47:17 +0000</pubDate>
      <link>https://dev.to/minh_leduc/how-to-build-an-autonomous-news-generator-with-ai-using-fluvio-8g3</link>
      <guid>https://dev.to/minh_leduc/how-to-build-an-autonomous-news-generator-with-ai-using-fluvio-8g3</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Building a News Bot with Fluvio&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In my previous article, I introduced the concept of event-driven architecture (EDA) and demonstrated its capabilities using Fluvio. I showcased how an application could leverage EDA to asynchronously send quotes from a publisher to subscribers at regular intervals.&lt;/p&gt;

&lt;p&gt;In this article, I will expand upon my previous work by introducing additional features that enhance the application's functionality. I will delve into how to integrate a search engine to discover relevant quotes and utilize Large Language Models (LLMs) to summarize these quotes effectively. By combining these elements, I aim to create a more robust and informative application that leverages the power of EDA and AI, named Wipe.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: For a better experience, I recommend reading my post on &lt;a href="https://minhleduc.substack.com/p/how-to-build-an-autonomous-news-generator" rel="noopener noreferrer"&gt;Substack&lt;/a&gt;. In addition, I am participating in a quest on Quira; please upvote for me. Here is the &lt;a href="https://quira.sh/repo/MinLee0210-Wipe-844324899" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  What is Wipe?
&lt;/h1&gt;

&lt;p&gt;Tired of falling behind in the fast-paced world of AI? I understand the frustration of trying to keep up with the constant stream of new technologies and trends in the AI landscape. It can be overwhelming to stay informed about the latest developments while also focusing on building your own projects.&lt;/p&gt;

&lt;p&gt;Introducing Wipe, your AI-powered solution. By leveraging a powerful combination of search engines and Large Language Models, Wipe automatically curates the most relevant AI news and condenses it into concise summaries. No more sifting through countless articles. With Wipe, you can stay ahead of the curve and ensure your projects are always built with the latest insights and technologies.&lt;/p&gt;

&lt;h1&gt;
  
  
  Features
&lt;/h1&gt;

&lt;p&gt;As an AI enthusiast, I've often found myself asking: How can I stay up-to-date with the rapid advancements in this field? Is there a way to capture the essence of countless articles in a matter of seconds?&lt;/p&gt;

&lt;p&gt;Wipe, your AI-powered solution, offers the following benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time Updates: Stay informed about the latest AI trends and breakthroughs.&lt;/li&gt;
&lt;li&gt;Instant Summarization: Use Large Language Models to quickly grasp the key points of articles.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Data Ingestion: The Publisher continuously collects and ingests raw AI trend data from various sources.&lt;/li&gt;
&lt;li&gt;Feature Extraction: The Feature component processes this data, extracting relevant features and insights through techniques like natural language processing and data analysis.&lt;/li&gt;
&lt;li&gt;Content Refinement: The Feature component further refines the extracted content, summarizing key points or providing additional context.&lt;/li&gt;
&lt;li&gt;Notification Distribution: The Notification component sends the processed and refined AI trend updates to interested Consumers. In this setting, Fluvio handles this component very neatly.&lt;/li&gt;
&lt;li&gt;Consumer Utilization: Consumers receive these updates and leverage them for their specific AI applications, such as model training, product development, or research.&lt;/li&gt;
&lt;/ul&gt;
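
&lt;p&gt;The steps above can be sketched end to end as a tiny pipeline. Note that the function names below (&lt;code&gt;ingest&lt;/code&gt;, &lt;code&gt;extract_features&lt;/code&gt;, &lt;code&gt;refine&lt;/code&gt;, &lt;code&gt;notify&lt;/code&gt;) are my own illustration, not names from the Wipe codebase:&lt;/p&gt;

```python
# A minimal, hypothetical sketch of the Wipe workflow; the function
# names are illustrative and not taken from the actual repository.

def ingest() -> list[str]:
    """Publisher: collect raw AI trend data from various sources."""
    return ["Raw article about new LLM benchmarks", "Raw post on AI agents"]

def extract_features(raw: str) -> dict:
    """Feature component: pull relevant features out of a raw item."""
    return {"text": raw, "keywords": [w for w in raw.split() if w.istitle()]}

def refine(features: dict) -> str:
    """Feature component: summarize or add context to the extracted content."""
    return f"Summary: {features['text'][:40]}..."

def notify(consumers: list, update: str) -> None:
    """Notification component: fan the update out to interested consumers."""
    for consumer in consumers:
        consumer(update)

received = []
notify([received.append], refine(extract_features(ingest()[0])))
print(received[0])  # the consumer now holds the refined update
```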

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Event-driven Architecture (EDA)
&lt;/h2&gt;

&lt;p&gt;First, a quick refresher on EDA. Event-driven architecture is a design pattern in which applications respond to events asynchronously. This allows for greater scalability and responsiveness than traditional request-response models. Events can be triggered by various sources, such as user actions, system changes, or external data feeds.&lt;/p&gt;

&lt;p&gt;EDA is widely used in various domains, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time data processing: Processing financial market data, IoT sensor data, and other time-sensitive information.&lt;/li&gt;
&lt;li&gt;Microservices architecture: Decoupling services, facilitating asynchronous communication, and enabling independent scaling.&lt;/li&gt;
&lt;li&gt;Serverless computing: Executing functions in response to events, such as file uploads, database changes, or API calls.&lt;/li&gt;
&lt;/ul&gt;
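
&lt;p&gt;A minimal sketch makes the idea concrete: handlers register for an event type, and any source can fire an event without knowing who reacts to it. The names here are illustrative, not part of any framework:&lt;/p&gt;

```python
# Minimal event dispatcher: sources emit events, handlers react
# independently. An illustrative sketch, not code from the Wipe project.
from collections import defaultdict

handlers = defaultdict(list)

def on(event_type: str, handler) -> None:
    """Register a handler for an event type."""
    handlers[event_type].append(handler)

def emit(event_type: str, payload: dict) -> None:
    """Fire an event; every registered handler reacts on its own."""
    for handler in handlers[event_type]:
        handler(payload)

log = []
on("sensor.reading", lambda p: log.append(f"stored {p['value']}"))
on("sensor.reading", lambda p: log.append("alert!") if p["value"] > 100 else None)

emit("sensor.reading", {"value": 120})  # e.g. triggered by an IoT data feed
print(log)  # ['stored 120', 'alert!']
```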

&lt;h2&gt;
  
  
  Tech-stack
&lt;/h2&gt;

&lt;p&gt;At the core of Wipe lies a robust technological stack designed to deliver real-time updates and insightful summaries. Let's break down the key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fluvio: As a high-performance streaming engine, Fluvio efficiently handles the continuous flow of data, ensuring that news articles are processed and delivered promptly. Its Rust-based architecture guarantees low latency and security.&lt;/li&gt;
&lt;li&gt;Redis: This in-memory data store acts as a central hub, storing and retrieving data seamlessly between the publisher and consumer components.&lt;/li&gt;
&lt;li&gt;Langchain: By providing a vast array of Large Language Models (LLMs), Langchain empowers Wipe to understand and summarize complex articles with exceptional accuracy.&lt;/li&gt;
&lt;li&gt;Tavily Search: This AI-integrated search engine plays a crucial role in identifying relevant news articles, ensuring that Wipe delivers only the most pertinent information to its users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Together, these components form a powerful synergy that enables Wipe to provide users with timely, accurate, and informative AI news updates.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;p&gt;Wipe relies heavily on Docker, so you should use Docker for the best results. First and foremost, let’s clone the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone &amp;lt;https://github.com/MinLee0210/Wipe.git&amp;gt;
cd ./Wipe
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Environment Setup
&lt;/h2&gt;

&lt;p&gt;To set up the environment for the project, you must:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Fluvio (see my previous article or Fluvio’s website).&lt;/li&gt;
&lt;li&gt;Install Redis (Wipe runs Redis on Docker; I’ll leave the link here).&lt;/li&gt;
&lt;li&gt;Get API keys from Tavily (a must) and from the LLM provider you want to use (Gemini, Groq, OpenAI), and remember to update the configuration in the config.yaml file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure to create a .env file that follows this structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TAVILY_API_KEY=""
GEMINI_API_KEY=""
GROQ_API_KEY=""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
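
&lt;p&gt;As a sketch of how such a file can be loaded without extra dependencies, the standard library is enough (the real project may rely on a package such as python-dotenv instead):&lt;/p&gt;

```python
# Load KEY="value" pairs from a .env file into os.environ.
# A stdlib-only sketch; Wipe itself may load its environment differently.
import os

def load_env(path: str = ".env") -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments, and anything without a '='.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip('"')

# Usage: call load_env() before creating the Tavily or LLM clients,
# then read keys with os.environ["TAVILY_API_KEY"].
```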



&lt;h2&gt;
  
  
  Experiment Setup
&lt;/h2&gt;

&lt;p&gt;I cannot show all of the code in this post; however, I will walk through the three most important components of the app and the logic of news_features.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The WipeProducer&lt;/li&gt;
&lt;li&gt;The WipeConsumer&lt;/li&gt;
&lt;li&gt;The WipeDB&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before diving into those three components, I will show the get_latest_trend() function, which retrieves the latest trends in the AI field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;TREND = "What is the latest trend in AI 2024?"

def get_latest_trend() -&amp;gt; tuple[list[Article], list[Event]]:
    """
    Retrieves the latest trend in AI and returns summarized articles with their events.

    Returns:
        tuple[list[Article], list[Event]]: Article objects containing summaries
            of relevant news, and Event objects announcing each article.
    """

    # Find relevant URLs
    urls = [result["url"] for result in searcher.run(TREND)["results"]]

    # Filter out unsupported URLs and scrape content
    docs = []
    for url in urls:
        try:
            docs.append(scraper.run(url))
        except ValueError:
            continue  # Skip unsupported URLs


    # Process and summarize articles
    articles, events = [], []
    for doc in docs:
        metadata = doc[0].metadata
        content = clean(doc[0].page_content)

        summary_prompt = SUMMARY_ARTICLE.format(article=content)
        summary = llm.invoke(summary_prompt).content  

        # Create Article object with metadata and summary
        article = Article(summary=summary, **metadata)
        articles.append(article)
        # Create Event object with Article's information
        event_title = f"Latest Trend in AI (2024) - {article.title}"  # Modify title creation if needed
        event = Event(title=event_title, 
                    article_id=article.id)
        events.append(event)

    return (articles, events)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The TREND query is deliberately strict and tied to our topic, AI trends. However, it can be extended to any topic you want. The flow of the algorithm is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It gets trends from a set of relevant websites surfaced by the Tavily Search Engine.&lt;/li&gt;
&lt;li&gt;Those websites are then scraped via LangChain’s WebBaseLoader and summarized via an LLM (I used Gemini; alternatively, you can use another library, such as ScrapeGraphAI, to simplify this stage).&lt;/li&gt;
&lt;li&gt;The processed documents are then fed into two objects: Event and Article. The former is sent to the Consumer to announce that new trends have been gathered successfully; the latter is saved in the database, and, at the Consumer’s choice, the Article is then read by fetching it from the database.&lt;/li&gt;
&lt;/ul&gt;
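
&lt;p&gt;For readers following along, the &lt;code&gt;Article&lt;/code&gt; and &lt;code&gt;Event&lt;/code&gt; objects can be pictured roughly as below. This is a reconstruction based only on how they are used in &lt;code&gt;get_latest_trend()&lt;/code&gt;; any field beyond &lt;code&gt;summary&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;id&lt;/code&gt;, and &lt;code&gt;article_id&lt;/code&gt; is an assumption, not the project's actual definition:&lt;/p&gt;

```python
# Hypothetical reconstruction of the two models used in get_latest_trend();
# fields and defaults are guesses inferred from the surrounding code.
import json
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class Article:
    summary: str
    title: str = ""
    source: str = ""  # e.g. the scraped URL (assumed field)
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

    def json(self) -> dict:
        return asdict(self)

@dataclass
class Event:
    title: str
    article_id: str

    def json(self) -> dict:
        return asdict(self)

article = Article(summary="LLMs keep getting cheaper.", title="AI Trends")
event = Event(title=f"Latest Trend in AI (2024) - {article.title}",
              article_id=article.id)
print(json.dumps(event.json()))
```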

&lt;h3&gt;
  
  
  WipeProducer
&lt;/h3&gt;

&lt;p&gt;Here is the code of the Producer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
"""
A simple Fluvio producer that produces records to a topic.
"""
import subprocess

from fluvio import Fluvio

class WipeProducer:
    """
    A class to produce records to a Fluvio topic.

    Attributes:
    ----------
    topic_name : str
        The name of the topic to produce to.
    partition : int
        The partition to produce to.
    producer : Fluvio.topic_producer
        The Fluvio producer object.

    Methods:
    -------
    produce_records(event)
        Produces a single event record to the topic.
    flush()
        Flushes the producer to ensure all records are sent.
    """
    ROLE = "producer"

    def __init__(self, topic_name: str, partition: int):
        """
        Initializes the WipeProducer object.

        Parameters:
        ----------
        topic_name : str
            The name of the topic to produce to.
        partition : int
            The partition to produce to.
        """
        self.topic_name = topic_name
        self.partition = partition
        self.producer = Fluvio.connect().topic_producer(topic_name)

    def produce_records(self, event: str) -&amp;gt; None:
        """
        Produces a specified event.

        Parameters:
        ----------
        event : str
            The information of the event
        """
        try:
            self.producer.send_string(event)

        except Exception as e:
            print(f"Error producing records: {e}")

    def flush(self) -&amp;gt; None:
        """
        Flushes the producer to ensure all records are sent.
        """
        try:
            self.producer.flush()
            print("Producer flushed successfully")
        except Exception as e:
            print(f"Error flushing producer: {e}")

    def __create_topic(self, topic_name:str):
        """
        Create a topic. 

        Parameters: 
        ----------
        topic_name: str
            The name of the topic
        """
        try:
            shell_cmd = ['fluvio', 'topic', 'create', topic_name]
            subprocess.run(shell_cmd, check=True)
        except subprocess.CalledProcessError as e:
            print(f'Command {e.cmd} failed with error {e.returncode}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Producer object has 3 main methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to the chosen topic via the constructor.&lt;/li&gt;
&lt;li&gt;Send records to the Consumer.&lt;/li&gt;
&lt;li&gt;Flush the producer to ensure all records are sent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also provide a method that lets the Python code execute shell commands, allowing topics to be created via the Producer interface.&lt;/p&gt;

&lt;p&gt;The logic of the Producer is defined as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;producer = WipeProducer(topic_name=config["pubsub"]["topic"],
                            partition=config["pubsub"]["partition"])

# ===== PRODUCER'S METHODS =====
def pub_produce_articles():
    """
    Publishes summarized articles to the defined topic.
    """
    trends = get_latest_trend() #   (articles, events)
    for article, event in zip(trends[0], trends[1]):

        event_str = json_to_str(event.json())  # Serialize event to JSON
        producer.produce_records(event_str)

        article_str = json_to_str(article.json())
        wipe_db.set_article(id=article.id, 
                            role=producer.ROLE, 
                            content=article_str)
    producer.flush()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  WipeConsumer
&lt;/h3&gt;

&lt;p&gt;The implementation of the WipeConsumer is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
"""
A simple Fluvio consumer that consumes records from a topic.
"""

from datetime import datetime
from fluvio import Fluvio, Offset

class WIPEConsumer:
    """
    A class to consume records from a Fluvio topic.

    Attributes:
    ----------
    role : str
        The role of the consumer (in this case, 'customer').
    topic_name : str
        The name of the topic to consume from.
    partition : int
        The partition to consume from.
    consumer : Fluvio.partition_consumer
        The Fluvio consumer object.

    Methods:
    -------
    consume_records(num_records)
        Consumes a specified number of records from the topic.
    """

    ROLE = 'customer'

    def __init__(self, topic_name: str, partition: int):
        """
        Initializes the WIPEConsumer object.

        Parameters:
        ----------
        topic_name : str
            The name of the topic to consume from.
        partition : int
            The partition to consume from.
        """
        self.topic_name = topic_name
        self.partition = partition
        self.consumer = Fluvio.connect().partition_consumer(topic_name, partition)
        self.notification = []

    def consume_records(self, num_records: int) -&amp;gt; None:
        """
        Consumes a specified number of records from the topic.

        Parameters:
        ----------
        num_records : int
            The number of records to consume.
        """
        try:
            for idx, record in enumerate(self.consumer.stream(Offset.from_end(num_records))):
                print(f"Record {idx+1}: {record.value_string()}: timestamp: {datetime.now()}")
                self.notification.append(record.value_string())
                if idx &amp;gt;= num_records - 1:
                    break
        except Exception as e:
            print(f"Error consuming records: {e}")

    def flush(self): 
        """
        Delete the notification
        """
        self.notification = []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Consumer object has 3 main methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a connection to the chosen topic and partition via the constructor.&lt;/li&gt;
&lt;li&gt;Consume the records: Wipe sets the Consumer to consume the single latest article from the Producer.&lt;/li&gt;
&lt;li&gt;Delete notifications: Every time the Producer creates an article, it notifies its consumers. The Consumer stores the notification in &lt;code&gt;self.notification&lt;/code&gt;; depending on the Consumer’s choice, the article is then read by fetching it from the database.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The logic of the Consumer is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ===== CONSUMER'S METHODS =====
def sub_catch_articles():
    """
    Consumes events from the topic and processes them.
    """
    logger.info("[CONSUMER]: Catch events from Producer")
    consumer.consume_records(config["pubsub"]["num_records_consume"])

def sub_read_articles(): 
    """
    Retrieve articles from the database based on the events.
    Note: 
        Assume the Consumer chose the 1-latest event. 
    """

    logger.info("[CONSUMER]: Get event")
    event = consumer.notification[-1]

    while not isinstance(event, dict):
        event = str_to_json(event)

    logger.info("[CONSUMER]: Get article")
    article_id = event['article_id']
    article = wipe_db.get_article(id=article_id, 
                                  role=consumer.ROLE)

    logger.info(f"[CONSUMER]: Reading\n{article}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  WipeDB
&lt;/h3&gt;

&lt;p&gt;For the sake of simplicity, the database implementation covers just two operations: getting and setting articles.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
"""
Define database logic for WIPE.
"""

import redis

AUTHORIZED_METHODS = {
    'get': ['customer', 'producer'], 
    'set': 'producer'
}

class WIPEDB(object):
    """
    A class to handle database operations for WIPE.

    Attributes:
    ----------
    server : redis.Redis
        The Redis database connection.

    Methods:
    -------
    get_article(id, role)
        Retrieves an article from the database.
    set_article(id, role)
        Sets an article in the database.
    """

    def __init__(self, db_config: dict):
        """
        Initializes the WIPEDB object.

        Parameters:
        ----------
        db_config : dict
            A dictionary containing the Redis database configuration.
        """
        self.server = self.__set_db_connection(db_config)

    def get_article(self, id: str, role: str) -&amp;gt; str:
        """
        Retrieves an article from the database.

        Parameters:
        ----------
        id : str
            The ID of the article to retrieve.
        role : str
            The role of the user requesting the article.

        Returns:
        -------
        str
            The article content if the user is authorized, otherwise None.
        """
        if role not in AUTHORIZED_METHODS['get']:
            return None
        try:
            return self.server.get(id)
        except redis.exceptions.RedisError as e:
            raise e

    def set_article(self, id: str, role: str, content: str) -&amp;gt; bool:
        """
        Sets an article in the database.

        Parameters:
        ----------
        id : str
            The ID of the article to set.
        role : str
            The role of the user setting the article.
        content : str
            The content of the article.

        Returns:
        -------
        bool
            True if the article was set successfully, otherwise False.
        """
        if role != AUTHORIZED_METHODS['set']:
            return False
        try:
            self.server.set(id, content)
            return True
        except redis.exceptions.RedisError as e:
            raise e

    def __set_db_connection(self, db_config: dict) -&amp;gt; redis.Redis:
        """
        Establishes a connection to the Redis database.

        Parameters:
        ----------
        db_config : dict
            A dictionary containing the Redis database configuration.

        Returns:
        -------
        redis.Redis
            The Redis database connection.
        """
        try:
            server = redis.Redis(**db_config)
            return server
        except redis.ConnectionError as r_ce:
            raise r_ce
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make the app more interesting, Wipe builds authorization into interactions with the database. For instance, the Consumer is not allowed to call the set method on the database.&lt;/p&gt;
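
&lt;p&gt;The guard pattern can be shown in isolation with an in-memory dictionary standing in for Redis (a sketch, not the project's actual code):&lt;/p&gt;

```python
# Role-based guard in isolation, with a dict standing in for Redis.
AUTHORIZED_METHODS = {
    "get": ["customer", "producer"],
    "set": "producer",
}

class InMemoryDB:
    def __init__(self):
        self.store = {}

    def get_article(self, id: str, role: str):
        if role not in AUTHORIZED_METHODS["get"]:
            return None  # unauthorized reads return nothing
        return self.store.get(id)

    def set_article(self, id: str, role: str, content: str) -> bool:
        if role != AUTHORIZED_METHODS["set"]:
            return False  # only the producer may write
        self.store[id] = content
        return True

db = InMemoryDB()
print(db.set_article("a1", role="producer", content="summary..."))  # True
print(db.set_article("a2", role="customer", content="summary..."))  # False
print(db.get_article("a1", role="customer"))  # summary...
```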

&lt;h2&gt;
  
  
  Code in Action
&lt;/h2&gt;

&lt;p&gt;On the producer side, the code looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./producer.py

import time

from controller.pubsub import pub_produce_articles


while True: 
    pub_produce_articles()  # Avg of 35 secs per call.
    time.sleep(10)
For every 10 seconds, the Producer will look up the Internet for the latest AI trends.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Consumer also checks for notifications every 10 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./consumer.py

import time
import random
from controller.pubsub import sub_catch_articles, sub_read_articles

while True: 
    sub_catch_articles()
    time.sleep(10)

    rand_idx = random.randint(a=0, b=10)
    if rand_idx % 2 == 0: 
        sub_read_articles()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a more engaging experience, I incorporated a random element into the notification system. If the random outcome met specific conditions, the Consumer would be shown the article.&lt;/p&gt;

&lt;p&gt;To run the app, simply type&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python producer.py &amp;amp;
python consumer.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Result
&lt;/h1&gt;

&lt;p&gt;The Producer automatically fetches the latest trends from the Internet and uses AI to summarize them every 10 seconds. Once summarization is done, it emits an event to notify its consumers.&lt;/p&gt;

&lt;p&gt;The Consumer retrieves the notification from its producer. In this experiment, I set it to choose randomly whether or not to "read" the news from the notification.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;In this article, I have successfully extended the capabilities of the event-driven architecture (EDA) application introduced in the previous installment.&lt;/p&gt;

&lt;p&gt;By integrating a search engine and utilizing Large Language Models (LLMs), the application, now named Wipe, has become a more comprehensive and informative tool. The ability to discover relevant quotes and generate concise summaries enhances the user experience and provides valuable insights into the vast world of AI.&lt;/p&gt;

&lt;p&gt;The successful implementation of these features demonstrates the versatility and power of EDA in creating robust and scalable applications.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>pubsub</category>
      <category>challenge</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How to build an event-driven architecture with Fluvio</title>
      <dc:creator>Minh Le Duc</dc:creator>
      <pubDate>Wed, 28 Aug 2024 03:29:20 +0000</pubDate>
      <link>https://dev.to/minh_leduc/how-to-build-an-event-driven-architecture-with-fluvio-3enh</link>
      <guid>https://dev.to/minh_leduc/how-to-build-an-event-driven-architecture-with-fluvio-3enh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Get started on a journey into the world of event-driven architecture with Fluvio. This powerful platform offers a streamlined approach to building real-time, scalable, and resilient applications. By leveraging Fluvio's capabilities, you can unlock the full potential of event-driven design and create innovative solutions that meet the demands of today's dynamic environments.&lt;/p&gt;

&lt;p&gt;In this guide, we'll delve into the intricacies of Fluvio, exploring its key features, benefits, and practical implementation strategies. You'll learn how to utilize the power of event-driven architecture to build applications that are responsive, scalable, and efficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Some information
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Event-driven architecture
&lt;/h3&gt;

&lt;p&gt;Imagine you're hosting a party. You want to notify everyone when the pizza arrives. Instead of shouting to each guest individually, you could simply announce it once, and everyone who's interested in pizza will hear and react accordingly.&lt;/p&gt;

&lt;p&gt;This is essentially the concept of event-driven architecture. It's a design pattern where components of a system communicate by producing and consuming events. Think of it as a way to create a more dynamic and responsive system, similar to how your party guests react to your announcement.&lt;/p&gt;

&lt;p&gt;Now, let's introduce Pub/Sub.&lt;/p&gt;

&lt;p&gt;Imagine you're the party host (the publisher). When the pizza arrives, you publish an event called "Pizza Is Here". Your guests (the subscribers) can subscribe to this event. When they hear your announcement (the event), they'll take action (e.g., grab a slice).&lt;/p&gt;

&lt;p&gt;In a pub/sub system, the publisher sends out events, and subscribers can choose to listen to specific events. This decouples the components, making the system more scalable, flexible, and resilient.&lt;/p&gt;

&lt;p&gt;Here's a more technical breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publisher: Produces events and sends them to a message broker.&lt;/li&gt;
&lt;li&gt;Message Broker: Stores and distributes events to interested subscribers.&lt;/li&gt;
&lt;li&gt;Subscriber: Consumes events and takes appropriate actions.&lt;/li&gt;
&lt;/ul&gt;
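
&lt;p&gt;A toy version of this breakdown fits in a few lines of Python. This is purely illustrative; a real broker such as Fluvio adds persistence, partitioning, and delivery guarantees:&lt;/p&gt;

```python
# Toy pub/sub: a publisher announces to an in-memory broker,
# and every subscriber on the topic reacts. Illustrative only.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback) -> None:
        """Register a subscriber callback for a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        """Deliver a message to every subscriber of the topic."""
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
plates = []
broker.subscribe("pizza", lambda msg: plates.append(f"guest 1 heard: {msg}"))
broker.subscribe("pizza", lambda msg: plates.append(f"guest 2 heard: {msg}"))

broker.publish("pizza", "Pizza Is Here")  # one announcement, many reactions
print(plates)
```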

&lt;p&gt;Imagine a social media platform. When a user posts a new message, that's an event. Other users who follow that user can subscribe to their posts and receive notifications whenever a new message is published.&lt;/p&gt;

&lt;p&gt;Key benefits of Pub/Sub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scalability: Handles large volumes of events efficiently.&lt;/li&gt;
&lt;li&gt;Flexibility: Allows for dynamic subscriptions and decoupled components.&lt;/li&gt;
&lt;li&gt;Resilience: Ensures messages are delivered even if components fail.&lt;/li&gt;
&lt;li&gt;Real-time updates: Enables real-time communication and updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Note: I found an interesting video that can help you easily understand the concept; here is the link.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Fluvio
&lt;/h3&gt;

&lt;p&gt;Fluvio's exceptional performance and efficiency make it a standout choice for real-time data processing. Its low-latency capabilities ensure that data is processed swiftly, enabling applications to respond to events in a timely manner. Furthermore, Fluvio's lightweight design and optimized architecture minimize resource consumption, making it suitable for even the most resource-constrained environments.&lt;/p&gt;

&lt;p&gt;Fluvio's rich API support and customizable stream processing capabilities make it a developer's dream. With client libraries available for popular programming languages, you can easily integrate Fluvio into your existing applications. The platform's programmability allows you to tailor data processing pipelines to meet your specific requirements, ensuring maximum flexibility and control.&lt;/p&gt;

&lt;p&gt;Moreover, Fluvio's WebAssembly integration enables you to securely execute custom stream processing logic, providing a powerful and efficient way to extend the platform's capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code in Action
&lt;/h2&gt;

&lt;p&gt;Please read the article via this &lt;a href="https://minhleduc.substack.com/p/how-to-build-an-event-driven-architecture" rel="noopener noreferrer"&gt;website&lt;/a&gt; for detailed implementation and better visualizations. &lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we talked about one of the great patterns in programming: Pub/Sub, a fundamental component of event-driven architecture. It provides a robust and scalable foundation for event-driven systems, enabling loosely coupled, asynchronous communication between components. In addition, we used Fluvio to demonstrate the architecture by having the publisher generate a quote every 7 seconds for the Consumer. Clearly, this framework gives us an easy approach to event-driven architecture.&lt;/p&gt;

&lt;p&gt;If you want me to continue this approach in LLM applications or develop it further, leave a comment and let me know!&lt;/p&gt;




&lt;p&gt;Thank you for reading this article; I hope it added something to your knowledge bank! Just before you leave:&lt;/p&gt;

&lt;p&gt;👉 Be sure to press the like button and follow me. It would be a great motivation for me.&lt;/p&gt;

&lt;p&gt;👉 More details of the code refer to: &lt;a href="https://github.com/8Opt/Fluvio-Quote" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Follow me: &lt;a href="//www.linkedin.com/in/minhle007"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://github.com/MinLee0210" rel="noopener noreferrer"&gt;Github&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>eventdriven</category>
      <category>python</category>
      <category>pubsub</category>
    </item>
    <item>
      <title>Boost Your RAG Performance with Tavily Search API</title>
      <dc:creator>Minh Le Duc</dc:creator>
      <pubDate>Wed, 31 Jul 2024 09:54:12 +0000</pubDate>
      <link>https://dev.to/minh_leduc/boost-your-rag-performance-with-tavily-search-api-211b</link>
      <guid>https://dev.to/minh_leduc/boost-your-rag-performance-with-tavily-search-api-211b</guid>
      <description>&lt;p&gt;LLMs and RAG systems have shown to be advantageous over time. They not only provide engaging discussions that deliver helpful information, but they also open up new avenues for tailored and intelligent applications, transforming areas ranging from customer service to scientific research. Despite their unique and powerful skills, there is evidence that they can produce plausible-sounding but inaccurate information, particularly when confronted with unclear questions or a lack of relevant data. Furthermore, they have demonstrated a lack of knowledge updates, causing them to occasionally present "old" information.&lt;/p&gt;

&lt;p&gt;To mitigate those issues, the ability to connect to reliable and up-to-date resources is essential. Using an additional tool to retrieve external knowledge can help RAG and LLMs access up-to-date information, mitigating hallucinations and enhancing factual accuracy.&lt;/p&gt;

&lt;p&gt;The Tavily Search API is suitable for that job. It is a search engine designed specifically for LLMs and RAG, with the goal of providing efficient, rapid, and persistent search results. Tavily specializes in improving search results for AI developers and autonomous AI agents. Furthermore, Tavily uses private financial, coding, news, and other internal data sources to supplement web content. As a result, Tavily empowers developers to build more accurate, insightful, and contextually aware AI applications.&lt;/p&gt;

&lt;p&gt;We will talk about the Tavily Search API, diving into its functionalities and how it leverages AI for enhanced search. The structure of this writing is as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understanding the Power of Tavily Search API: A quick overview of the Tavily Search API, including why it is important and how it works.&lt;/li&gt;
&lt;li&gt;Code in Action: Start with a basic code example showcasing a simple search query using Tavily.&lt;/li&gt;
&lt;li&gt;Conclusion.&lt;/li&gt;
&lt;/ul&gt;
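
&lt;p&gt;As a small preview of the code, a basic query with the official tavily-python client looks roughly like the sketch below; &lt;code&gt;format_results&lt;/code&gt; is a hypothetical helper for turning hits into LLM-ready context:&lt;/p&gt;

```python
# A minimal Tavily query sketch. TavilyClient comes from the
# tavily-python package; format_results is a hypothetical helper.
def format_results(results: list[dict]) -> str:
    """Join search hits into a context string an LLM can consume."""
    return "\n\n".join(
        f"{r['title']} ({r['url']}):\n{r['content']}" for r in results
    )

def fetch_trends(api_key: str) -> str:
    """Run a Tavily search and return formatted context (needs network access)."""
    from tavily import TavilyClient  # pip install tavily-python
    client = TavilyClient(api_key=api_key)
    response = client.search("What is the latest trend in AI?")
    return format_results(response["results"])

# Offline demo of the formatting step on a sample hit:
sample = [{"title": "AI Trends 2024",
           "url": "https://example.com",
           "content": "Agents are rising."}]
print(format_results(sample))
```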

&lt;p&gt;The link to the full write-up is left below 👇👇👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nlp</category>
      <category>api</category>
      <category>python</category>
    </item>
  </channel>
</rss>
