<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kirill Balakhonov</title>
    <description>The latest articles on DEV Community by Kirill Balakhonov (@balakhonoff).</description>
    <link>https://dev.to/balakhonoff</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1121448%2Fecd79373-0653-49b3-be6a-be3406e264f2.jpeg</url>
      <title>DEV Community: Kirill Balakhonov</title>
      <link>https://dev.to/balakhonoff</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/balakhonoff"/>
    <language>en</language>
    <item>
      <title>How to Create Your Own AnythingGPT — a Bot That Answers the Way You Want it to</title>
      <dc:creator>Kirill Balakhonov</dc:creator>
      <pubDate>Wed, 26 Jul 2023 18:40:47 +0000</pubDate>
      <link>https://dev.to/balakhonoff/how-to-create-your-own-anythinggpt-a-bot-that-answers-the-way-you-want-it-to-eg</link>
      <guid>https://dev.to/balakhonoff/how-to-create-your-own-anythinggpt-a-bot-that-answers-the-way-you-want-it-to-eg</guid>
      <description>&lt;p&gt;Hello everyone! Recently, I applied an interesting solution during my practice that I've wanted to try for a long time, and now I'm ready to explain how you can create something similar for any other task. We will be talking about creating a customized version of ChatGPT that answers questions, taking into account a &lt;strong&gt;large&lt;/strong&gt; knowledge base that is &lt;strong&gt;not limited in length&lt;/strong&gt; by the size of the prompt (meaning you wouldn't be able to simply add all the information before each question to ChatGPT).&lt;/p&gt;

&lt;p&gt;To achieve this, we will use contextual embeddings from OpenAI (for a truly high-quality search of relevant questions from the knowledge base) and the ChatGPT API itself (to format the answers in natural human language).&lt;/p&gt;

&lt;p&gt;Additionally, it is assumed that the assistant can answer &lt;strong&gt;not only questions explicitly listed in the Q&amp;amp;A&lt;/strong&gt;, but also questions that a person familiar with the Q&amp;amp;A could answer. If you're interested in learning how to create simple bots that respond using a large knowledge base, read on.&lt;/p&gt;

&lt;p&gt;I would like to point out that there are some library projects that try to solve this task in the form of a framework, for example, &lt;a href="https://python.langchain.com/docs/get_started/introduction.html" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;, and I also tried using it. However, like any framework at an early stage of development, in some cases it tends to limit rather than simplify things. In particular, from the very beginning of solving this task, I understood what I wanted to do with the data and knew how to do it myself (including context-based search, setting the correct context in prompts, and combining sources of information).&lt;/p&gt;

&lt;p&gt;But I couldn't configure the framework to do exactly that with an acceptable level of quality, and debugging the framework seemed like overkill for this task. In the end, I created my own boilerplate code and was satisfied with this approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task
&lt;/h2&gt;

&lt;p&gt;Let me briefly describe the task I was working on, and &lt;em&gt;you can use the same code in your own tasks, replacing the data sources and prompts with the ones that suit you&lt;/em&gt;. You will still have full control over the bot's logic.&lt;/p&gt;

&lt;p&gt;When writing code, I often use ChatGPT (and I'm not ashamed of it🙂). However, due to the lack of training data from 2022 onward, there are sometimes problems with relatively new technologies.&lt;/p&gt;

&lt;p&gt;In particular, when developing subgraphs for The Graph protocol (the most popular way to &lt;a href="https://thegraph.com/" rel="noopener noreferrer"&gt;build&lt;/a&gt; ETL for retrieving indexed data from EVM-compatible blockchains; you can read more about it in my previous articles [&lt;a href="https://hackernoon.com/web3-indexing-the-ultimate-guide-no-prior-knowledge-required" rel="noopener noreferrer"&gt;1&lt;/a&gt;] and [&lt;a href="https://hackernoon.com/accessing-real-time-smart-contract-data-from-python-code-using-lido-contract-as-an-example" rel="noopener noreferrer"&gt;2&lt;/a&gt;]), the libraries themselves have undergone several breaking changes. The "old" answers from ChatGPT are no longer helpful, and I have to search for correct answers either in the scarce documentation or, in the worst case, in the developers' Discord, which is not very convenient (it's not StackOverflow).&lt;/p&gt;

&lt;p&gt;The second part of the problem is that you need to provide the conversation context correctly every time, because ChatGPT often veers off the topic of subgraphs, jumping to GraphQL, SQL, or higher mathematics (“The Graph”, “subgraphs”, etc. are not unique terms and have many different interpretations and topics).&lt;/p&gt;

&lt;p&gt;Therefore, after a short period of struggling with ChatGPT to correct errors in subgraph code, I decided to create my own &lt;a href="https://t.me/SubgraphGPT_bot" rel="noopener noreferrer"&gt;SubgraphGPT&lt;/a&gt; bot, which will always be in the right context and will try to answer taking into account the knowledge base and messages from the developers' Discord.&lt;/p&gt;

&lt;p&gt;P.S. I work as a lead product manager at &lt;a href="https://chainstack.com/" rel="noopener noreferrer"&gt;chainstack.com&lt;/a&gt;, a Web3 infrastructure provider, and I am responsible for the development of the &lt;a href="https://chainstack.com/subgraphs" rel="noopener noreferrer"&gt;subgraph hosting&lt;/a&gt; service. So I have to work with subgraphs quite a lot, helping users understand this relatively new technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top-level solution
&lt;/h2&gt;

&lt;p&gt;In the end, to solve this problem, I decided to use two sources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A manually compiled knowledge base of questions and answers, selected in semi-blind mode (often I took the topic title from the documentation as the question, and the entire paragraph of information as the answer).&lt;/li&gt;
&lt;li&gt;Exported messages from the protocol developers Discord from the past 2 years (to cover the missing period from the end of 2021).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, different approaches were used for each source to compose a request to the ChatGPT API, specifically:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For the manually compiled Q&amp;amp;A,&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;for each question, a contextual embedding is generated (a vector representing the question in a multidimensional space), obtained through the text-embedding-ada-002 model,&lt;/li&gt;
&lt;li&gt;then, using a cosine distance search function, the top 3 most similar questions from the knowledge base are found (instead of 3, you can use whatever number best suits your dataset),&lt;/li&gt;
&lt;li&gt;the answers to these 3 questions are added to the final prompt with an approximate description of "Use this Q&amp;amp;A snippet only if it is relevant to the given question." &lt;/li&gt;
&lt;/ol&gt;
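&lt;p&gt;&lt;em&gt;As an illustration, the retrieval in steps 1-3 can be sketched like this (a minimal sketch with hypothetical helper names such as top_k_answers; it is not the exact code from the repository):&lt;/em&gt;&lt;/p&gt;

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_k_answers(question_embedding, qa_rows, k=3):
    # qa_rows: list of dicts {'question', 'answer', 'embedding'}
    # sort the knowledge base by similarity to the user's question
    scored = sorted(
        qa_rows,
        key=lambda row: cosine_sim(question_embedding, row['embedding']),
        reverse=True,
    )
    # keep the answers to the k most similar questions
    return [row['answer'] for row in scored[:k]]
```

&lt;p&gt;The selected answers are then pasted into the prompt together with the "use this snippet only if it is relevant" instruction.&lt;/p&gt;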

&lt;p&gt;&lt;strong&gt;For the messages exported from Discord, the following algorithm was used:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;for each message containing a question mark, a contextual embedding is also generated (using the same model),&lt;/li&gt;
&lt;li&gt;then, in a similar way, the top 5 most similar questions are selected,&lt;/li&gt;
&lt;li&gt;and as context for the answer, the 20 messages following that question are added, which are assumed to have a certain probability of containing the answer to the question,&lt;/li&gt;
&lt;li&gt;and this information was added to the final prompt approximately like this: "If you did not find an explicit answer to the question in the attached Q&amp;amp;A snippet, the following chat fragments by the developer may be useful to you for answering the original question ..."&lt;/li&gt;
&lt;/ol&gt;
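&lt;p&gt;&lt;em&gt;Steps 2-3 for the Discord source can be sketched as follows (assuming the exported messages sit in a timestamp-sorted pandas dataframe with a default RangeIndex and id/text columns; the helper name is hypothetical):&lt;/em&gt;&lt;/p&gt;

```python
import pandas as pd

def following_messages(df, question_id, n_after=20):
    # position of the matched question message in the time-sorted dataframe
    # (assumes a default RangeIndex, so the label equals the position)
    pos = df.index[df['id'] == question_id][0]
    # the n_after messages posted right after it, which may contain the answer
    return df.iloc[pos + 1 : pos + 1 + n_after]['text'].tolist()
```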

&lt;p&gt;Furthermore, if the topic is not explicitly given, the presence of Q&amp;amp;A snippets and chats can lead to ambiguity in the answers, which may look, for example, as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs8gir5aurkd64go2zzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs8gir5aurkd64go2zzy.png" alt="''"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, the model understands that the question was asked without context and that the answer was also given without context. It was then told that such data may be used, and it summarizes the result as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Actually, the answer can be like this...&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;And if we consider the context, then it will be like this...&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;To avoid this, we introduce the concept of a topic, which is explicitly defined and inserted at the beginning of the prompt as:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I need to get an answer to a question related to the topic 'The Graph subgraph development': what is a subgraph?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Furthermore, in the last sentence, I also add this:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Finally, only if the above information is not sufficient, you can use your knowledge in the topic 'The Graph subgraph development' to answer the question.&lt;/p&gt;
&lt;/blockquote&gt;
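&lt;p&gt;&lt;em&gt;Putting these pieces together, the prompt assembly can be sketched roughly like this (an illustrative helper, not the exact code from the repository):&lt;/em&gt;&lt;/p&gt;

```python
def build_prompt(topic, question, qa_snippets):
    # topic framing goes first, then the Q&A context, then the fallback instruction
    parts = [
        f'I need to get an answer to the question related to the topic of "{topic}": {question}',
        'Possibly, you might find an answer in these Q&As '
        '[use the information only if it is actually relevant and useful for the question answering]:',
    ]
    for q, a in qa_snippets:
        parts.append(f'Q: <{q}>\nA: <{a}>')
    parts.append(
        f'Finally, only if the information above was not enough '
        f'you can use your knowledge in the topic of "{topic}" to answer the question.'
    )
    return '\n\n'.join(parts)
```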

&lt;p&gt;&lt;strong&gt;In the end, the complete prompt (excluding the part obtained from chats) looks as follows:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;==I need to get an answer to the question related to the topic of "The Graph subgraph development": what is a subgraph?.==

==Possibly, you might find an answer in these Q&amp;amp;As \[use the information only if it is actually relevant and useful for the question answering\]:==

==Q: &amp;lt;What is a subgraph?&amp;gt;== 
==A: &amp;lt;A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using the Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available to be queried by subgraph consumers.&amp;gt;==

==Q: &amp;lt;Am I still able to create a subgraph if my smart contracts don't have events?&amp;gt;== 
==A: &amp;lt;It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are by far the fastest way to retrieve useful data. If the contracts you are working with do not contain events, your subgraph can use call and block handlers to trigger indexing. Although this is not recommended, as performance will be significantly slower.&amp;gt;==

==Q: &amp;lt;How do I call a contract function or access a public state variable from my subgraph mappings?&amp;gt;== 
==A: &amp;lt;Take a look at Access to smart contract state inside the section AssemblyScript API. https://thegraph.com/docs/en/developing/assemblyscript-api/&amp;gt;==

==Finally, only if the information above was not enough you can use your knowledge in the topic of "The Graph subgraph development" to answer the question.==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The response to the above request with this semi-auto-generated prompt at the input looks correct from the beginning:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9x8gzvlof87jxs691qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9x8gzvlof87jxs691qd.png" alt="''"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, the bot immediately responds in the right vein and adds more relevant information, so the answer doesn't look as terse as in the Q&amp;amp;A (I remind you that this question is in the list of questions and answers) but comes with reasonable explanations that partly address the follow-up questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Source code
&lt;/h2&gt;

&lt;p&gt;I should note right away that &lt;em&gt;there will be a link to the repository at the end&lt;/em&gt;, so you can run the bot as is, replacing "topic" with your own, the Q&amp;amp;A knowledge base file with your own, and providing your own API keys for OpenAI and the Telegram bot. So the description here is not intended to fully correspond to the source code on GitHub, but rather to highlight the main aspects of the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  1 - Preparing the virtual environment
&lt;/h2&gt;

&lt;p&gt;Let's create a new virtual environment and install the dependencies from requirements.txt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;virtualenv -p python3.8 .venv
source .venv/bin/activate
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2 - Knowledge Base, collected manually
&lt;/h2&gt;

&lt;p&gt;As mentioned above, it is assumed that there is a list of questions and answers, in this case in the format of an Excel file of the following type:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtd9nigcia0pkhjilggd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvtd9nigcia0pkhjilggd.png" alt="''"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to find the question most similar to the given one, we need to add an embedding of the question (a vector in a multidimensional space) to each line of this file. We will use the &lt;strong&gt;add_embeddings.py&lt;/strong&gt; file for this. The script consists of several simple parts.&lt;/p&gt;

&lt;p&gt;Importing libraries and reading command line arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd
import openai
import argparse


# Create an Argument Parser object
parser = argparse.ArgumentParser(description='Adding embeddings for each line of csv file')

# Add the arguments
parser.add_argument('--openai_api_key', type=str, help='API KEY of OpenAI API to create contextual embeddings for each line')
parser.add_argument('--file', type=str, help='A source CSV file with the text data')
parser.add_argument('--colname', type=str, help='Column name with the texts')

# Parse the command-line arguments
args = parser.parse_args()

# Access the argument values
openai.api_key = args.openai_api_key
file = args.file
colname = args.colname
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we read the file into a pandas dataframe and filter the questions based on the presence of a question mark. This code snippet is shared between the manually collected knowledge base and the raw message streams from Discord, so, given that questions are often duplicated, I decided to keep this simple method of rough non-question filtering.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if file[-4:] == '.csv':
    df = pd.read_csv(file)
else:
    df = pd.read_excel(file)

# filter NAs
df = df[~df[colname].isna()]
# Keep only questions
df = df[df[colname].str.contains(r'\?')]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, a function that generates an embedding by calling the API of the &lt;em&gt;text-embedding-ada-002&lt;/em&gt; model, with a couple of retries since the API can occasionally be overloaded and respond with an error; this function is applied to each row of the dataframe.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_embedding(text, model="text-embedding-ada-002"):
    i = 0
    max_try = 3
    # to avoid random OpenAI API fails:
    while i &amp;lt; max_try:
        try:
            text = text.replace("\n", " ")
            result = openai.Embedding.create(input=[text], model=model)['data'][0]['embedding']
            return result
        except:
            i += 1


def process_row(x):
    return get_embedding(x, model='text-embedding-ada-002')


df['ada_embedding'] = df[colname].apply(process_row)
df.to_csv(file[:-4]+'_question_embed.csv', index=False)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the end, this script can be called with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python add_embeddings.py \
  --openai_api_key="xxx" \
  --file="./subgraphs_faq.xlsx" \
  --colname="Question"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;passing the OpenAI API key, the file with the knowledge base, and the name of the column containing the question text. The final file, subgraphs_faq_question_embed.csv, contains the columns "Question", "Answer", and "ada_embedding".&lt;/p&gt;

&lt;h2&gt;
  
  
  3 - Data collection from Discord (optional)
&lt;/h2&gt;

&lt;p&gt;If you are interested in a simple bot that responds based on manually collected knowledge base only, you can skip this and the following section. However, I will briefly provide code examples here for collecting data from both a Discord channel and a Telegram group. The file &lt;strong&gt;discord-channel-data-collection.py&lt;/strong&gt; consists of two parts. The first part includes importing libraries and initializing command line arguments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests
import json
import pandas as pd
import argparse

# Create an Argument Parser object
parser = argparse.ArgumentParser(description='Discord Channel Data Collection Script')

# Add the arguments
parser.add_argument('--channel_id', type=str, help='Channel ID from the URL of a channel in browser https://discord.com/channels/xxx/{CHANNEL_ID}')
parser.add_argument('--authorization_key', type=str, help='Authorization Key. Being on the discord channel page, start typing anything, then open developer tools -&amp;gt; Network -&amp;gt; Find "typing" -&amp;gt; Headers -&amp;gt; Authorization.')

# Parse the command-line arguments
args = parser.parse_args()

# Access the argument values
channel_id = args.channel_id
authorization_key = args.authorization_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second is the function for retrieving data from the channel and saving it into a pandas dataframe, as well as its call with specified parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def retrieve_messages(channel_id, authorization_key):
    num = 0
    limit = 100

    headers = {
        'authorization': authorization_key
    }

    last_message_id = None

    # Create a pandas DataFrame
    df = pd.DataFrame(columns=['id', 'dt', 'text', 'author_id', 'author_username', 'is_bot', 'is_reply', 'id_reply'])

    while True:
        query_parameters = f'limit={limit}'
        if last_message_id is not None:
            query_parameters += f'&amp;amp;before={last_message_id}'

        r = requests.get(
            f'https://discord.com/api/v9/channels/{channel_id}/messages?{query_parameters}', headers=headers
        )
        jsonn = json.loads(r.text)
        if len(jsonn) == 0:
            break

        for value in jsonn:
            is_reply = False
            id_reply = '0'
            if 'message_reference' in value and value['message_reference'] is not None:
                if 'message_id' in value['message_reference'].keys():
                    is_reply = True
                    id_reply = value['message_reference']['message_id']

            text = value['content']
            if 'embeds' in value.keys():
                if len(value['embeds'])&amp;gt;0:
                    for x in value['embeds']:
                        if 'description' in x.keys():
                            if text != '':
                                text += ' ' + x['description']
                            else:
                                text = x['description']
            df_t = pd.DataFrame({
                'id': value['id'],
                'dt': value['timestamp'],
                'text': text,
                'author_id': value['author']['id'],
                'author_username': value['author']['username'],
                'is_bot': value['author']['bot'] if 'bot' in value['author'].keys() else False,
                'is_reply': is_reply,
                'id_reply': id_reply,
            }, index=[0])
            if len(df) == 0:
                df = df_t.copy()
            else:
                df = pd.concat([df, df_t], ignore_index=True)

            last_message_id = value['id']
            num = num + 1

        print('number of messages we collected is', num)


        # Save DataFrame to a CSV file
        df.to_csv(f'../discord_messages_{channel_id}.csv', index=False)


if __name__ == '__main__':
    retrieve_messages(channel_id, authorization_key)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One useful detail here, which I can never find when I need it, is how to obtain the authorization key. While the &lt;strong&gt;channel_id&lt;/strong&gt; can be obtained from the URL of the Discord channel opened in the browser (the last long number in the link), the &lt;strong&gt;authorization_key&lt;/strong&gt; can only be found by starting to type a message in the channel, then using developer tools to find the event named "typing" in the Network section and extracting the parameter from its header.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fky1w7et4f04m5pz4xbif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fky1w7et4f04m5pz4xbif.png" alt="''"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After receiving these parameters, you can run the following command to collect all messages from the channel (substitute your own values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python discord-channel-data-collection.py \
  --channel_id=123456 \
  --authorization_key="123456qwerty"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4 - Collecting data from Telegram (optional)
&lt;/h2&gt;

&lt;p&gt;Since I often download various data from chats/channels in Telegram, I also decided to provide code for this, which generates a CSV file in a similar format (compatible with the &lt;strong&gt;add_embeddings.py&lt;/strong&gt; script). So, the &lt;strong&gt;telegram-group-data-collection.py&lt;/strong&gt; script looks as follows. Importing libraries and initializing arguments from the command line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pandas as pd
import argparse
from telethon import TelegramClient

# Create an Argument Parser object
parser = argparse.ArgumentParser(description='Telegram Group Data Collection Script')

# Add the arguments
parser.add_argument('--app_id', type=int, help='Telegram APP id from https://my.telegram.org/apps')
parser.add_argument('--app_hash', type=str, help='Telegram APP hash from https://my.telegram.org/apps')
parser.add_argument('--phone_number', type=str, help='Telegram user phone number with the leading "+"')
parser.add_argument('--password', type=str, help='Telegram user password')
parser.add_argument('--group_name', type=str, help='Telegram group public name without "@"')
parser.add_argument('--limit_messages', type=int, help='Number of last messages to download')

# Parse the command-line arguments
args = parser.parse_args()

# Access the argument values
app_id = args.app_id
app_hash = args.app_hash
phone_number = args.phone_number
password = args.password
group_name = args.group_name
limit_messages = args.limit_messages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, you cannot simply download all the messages from the chat without authorizing as a regular user. In other words, besides creating an app through &lt;a href="https://my.telegram.org/apps" rel="noopener noreferrer"&gt;https://my.telegram.org/apps&lt;/a&gt; (obtaining APP_ID and APP_HASH), you will also need to use your phone number and password to create an instance of the TelegramClient class from the Telethon library.&lt;/p&gt;

&lt;p&gt;Additionally, you will need the public group_name of the Telegram chat and to explicitly specify the number of latest messages to retrieve. Overall, I have run this procedure many times with any number of exported messages without receiving any temporary or permanent bans from the Telegram API, unlike what happens when you send messages too frequently from one account.&lt;/p&gt;

&lt;p&gt;The second part of the script contains the actual function for exporting messages and its execution (with necessary filtering to avoid critical errors that would stop the collection halfway):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async def main():
    messages = await client.get_messages(group_name, limit=limit_messages)
    df = pd.DataFrame(columns=['date', 'user_id', 'raw_text', 'views', 'forwards', 'text', 'chan', 'id'])

    for m in messages:
        if m is not None:
            if 'from_id' in m.__dict__.keys():
                if m.from_id is not None:
                    if 'user_id' in m.from_id.__dict__.keys():
                        df = pd.concat([df, pd.DataFrame([{'date': m.date, 'user_id': m.from_id.user_id, 'raw_text': m.raw_text, 'views': m.views,
                             'forwards': m.forwards, 'text': m.text, 'chan': group_name, 'id': m.id}])], ignore_index=True)

    df = df[~df['user_id'].isna()]
    df = df[~df['text'].isna()]
    df['date'] = pd.to_datetime(df['date'])
    df = df.sort_values('date').reset_index(drop=True)

    df.to_csv(f'../telegram_messages_{group_name}.csv', index=False)

client = TelegramClient('session', app_id, app_hash)
client.start(phone=phone_number, password=password)

with client:
    client.loop.run_until_complete(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the end, this script can be executed with the following command (replace the values with your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python telegram-group-data-collection.py \
  --app_id=123456 --app_hash="123456qwerty" \
  --phone_number="+xxxxxx" --password="qwerty123" \
  --group_name="xxx" --limit_messages=10000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5 - Telegram bot script that actually answers questions
&lt;/h2&gt;

&lt;p&gt;Most of the time, I wrap my pet projects into Telegram bots because it requires minimal effort to launch and immediately shows potential. In this case, I did the same. I must say that the bot code does not contain all the corner cases that I use in the production version of the &lt;a href="https://t.me/SubgraphGPT_bot" rel="noopener noreferrer"&gt;SubgraphGPT bot&lt;/a&gt;, as it has quite a lot of logic inherited from another pet project of mine. Instead, I left the minimum amount of basic code that should be easy to modify for your needs.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;telegram-bot.py&lt;/strong&gt; script consists of several parts. First, as before, libraries are imported and command line arguments are initialized.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import threading
import telegram
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters

import openai
from openai.embeddings_utils import cosine_similarity

import numpy as np
import pandas as pd

import argparse
import functools

# Create an Argument Parser object
parser = argparse.ArgumentParser(description='Run the bot which uses prepared knowledge base enriched with contextual embeddings')

# Add the arguments
parser.add_argument('--openai_api_key', type=str, help='API KEY of OpenAI API to create contextual embeddings for each line')
parser.add_argument('--telegram_bot_token', type=str, help='A telegram bot token obtained via @BotFather')
parser.add_argument('--file', type=str, help='A source CSV file with the questions, answers and embeddings')
parser.add_argument('--topic', type=str, help='Write the topic to add a default context for the bot')
parser.add_argument('--start_message', type=str, help="The text that will be shown to the users after they click /start button/command", default="Hello, World!")
parser.add_argument('--model', type=str, help='A model of ChatGPT which will be used', default='gpt-3.5-turbo-16k')
parser.add_argument('--num_top_qa', type=int, help="The number of top similar questions' answers as a context", default=3)

# Parse the command-line arguments
args = parser.parse_args()

# Access the argument values
openai.api_key = args.openai_api_key
token = args.telegram_bot_token
file = args.file
topic = args.topic
model = args.model
num_top_qa = args.num_top_qa
start_message = args.start_message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that in this case, you will also need an OpenAI API key: to find the knowledge-base question most similar to the one just entered by the user, you first need to obtain the embedding of that question by calling the API, just as we did for the knowledge base itself.&lt;/p&gt;

&lt;p&gt;In addition, you will need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;telegram_bot_token&lt;/strong&gt; - a token for the Telegram bot from BotFather&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;file&lt;/strong&gt; - a path to the knowledge base file (I intentionally skip the case with messages from Discord here, as I assume it is a niche task, but they can be easily integrated into the code if necessary)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;topic&lt;/strong&gt; - the textual formulation of the topic (mentioned at the beginning of the article) in which the bot will operate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;start_message&lt;/strong&gt; - the message that the user who clicked /start will see (by default, "Hello, World!")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;model&lt;/strong&gt; - the choice of model (set by default)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;num_top_qa&lt;/strong&gt; - the number of most similar questions-answers from the knowledge base that will be used as context for the ChatGPT request&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then follows the loading of the knowledge base file and the initialization of the question embeddings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# reading QA file with embeddings
df_qa = pd.read_csv(file)
df_qa['ada_embedding'] = df_qa.ada_embedding.apply(eval).apply(np.array)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make a request to the ChatGPT API, knowing that it sometimes responds with an error due to overload, I use a function with automatic request retry in case of an error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def retry_on_error(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        max_retries = 3
        for i in range(max_retries):
            try:
                return func(*args, **kwargs)
            except Exception as e:
                print(f"Error occurred, retrying ({i+1}/{max_retries} attempts)...")
        # If all retries failed, raise the last exception
        raise e

    return wrapper

@retry_on_error
def call_chatgpt(*args, **kwargs):
    return openai.ChatCompletion.create(*args, **kwargs)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
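&lt;p&gt;A common refinement, not present in the original code, is to wait between attempts instead of retrying immediately. A sketch of a parameterized variant with exponential backoff (the decorator name and defaults here are my own) might look like this:&lt;/p&gt;

```python
import functools
import time

def retry_with_backoff(max_retries=3, base_delay=1.0):
    """Retry a function, sleeping base_delay * 2**attempt between failures."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    last_exception = exc
                    # exponential backoff: 1s, 2s, 4s, ... with the default base_delay
                    time.sleep(base_delay * 2 ** attempt)
            raise last_exception
        return wrapper
    return decorator
```

&lt;p&gt;Wrapping call_chatgpt with this version spaces the retries out instead of immediately re-hitting an API that is already overloaded.&lt;/p&gt;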



&lt;p&gt;According to OpenAI's recommendation, before converting the text into embeddings, new lines should be replaced with spaces.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def get_embedding(text, model="text-embedding-ada-002"):
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=[text], model=model)['data'][0]['embedding']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To search for the most similar questions, we calculate the cosine similarity between the embeddings of two questions, using the cosine_similarity helper imported from the openai library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def search_similar(df, question, n=3, pprint=True):
    embedding = get_embedding(question, model='text-embedding-ada-002')
    df['similarities'] = df.ada_embedding.apply(lambda x: cosine_similarity(x, embedding))
    res = df.sort_values('similarities', ascending=False).head(n)
    return res
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
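&lt;p&gt;The snippet above uses cosine_similarity without showing its import (in the 0.x versions of the openai package it lives in openai.embeddings_utils). Conceptually it is just the dot product of the two vectors divided by the product of their lengths, as this dependency-free sketch shows:&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|): 1.0 for identical directions, 0.0 for orthogonal ones
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

&lt;p&gt;Since OpenAI embeddings are normalized to unit length, sorting by cosine similarity gives the same ranking as sorting by dot product or by Euclidean distance.&lt;/p&gt;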



&lt;p&gt;After receiving a list of the most similar question-answer pairs to the given one, you can compile them into one text, marking it in a way that ChatGPT can unambiguously determine what is what.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def collect_text_qa(df):
    text = ''
    for i, row in df.iterrows():
        text += f'Q: &amp;lt;'+row['Question'] + '&amp;gt;\nA: &amp;lt;'+ row['Answer'] +'&amp;gt;\n\n'
    print('len qa', len(text.split(' ')))
    return text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we assemble the "pieces" of the prompt described at the very beginning of the article into a single whole.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def collect_full_prompt(question, qa_prompt, chat_prompt=None):
    prompt = f'I need to get an answer to the question related to the topic of "{topic}": ' + "{{{"+ question +"}}}. "
    prompt += '\n\nPossibly, you might find an answer in these Q&amp;amp;As [use the information only if it is actually relevant and useful for the question answering]: \n\n' + qa_prompt
    # edit if you need to use this also
    if chat_prompt is not None:
        prompt += "---------\nIf you didn't find a clear answer in the Q&amp;amp;As, possibly, these talks from chats might be helpful to answer properly [use the information only if it is actually relevant and useful for the question answering]: \n\n" + chat_prompt
    prompt += f'\nFinally, only if the information above was not enough you can use your knowledge in the topic of "{topic}" to answer the question.'

    return prompt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this case, I removed the part that uses messages from Discord, but you can still follow the logic for the case when chat_prompt is not None.&lt;/p&gt;

&lt;p&gt;In addition, we will need a function that splits the response received from the ChatGPT API into Telegram messages (no more than 4096 characters):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def telegram_message_format(text):
    max_message_length = 4096

    if len(text) &amp;gt; max_message_length:
        parts = []
        while len(text) &amp;gt; max_message_length:
            parts.append(text[:max_message_length])
            text = text[max_message_length:]
        parts.append(text)
        return parts
    else:
        return [text]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
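&lt;p&gt;The hard slice above can cut a message in the middle of a word or a code block. If that matters for your bot, a variant (my own sketch, not from the original project) that prefers to break at the last newline before the limit could look like this:&lt;/p&gt;

```python
def telegram_message_format_soft(text, max_message_length=4096):
    """Split text into Telegram-sized chunks, preferring newline boundaries."""
    parts = []
    while len(text) > max_message_length:
        # prefer to break at the last newline before the limit
        cut = text.rfind('\n', 0, max_message_length)
        if cut in (-1, 0):
            cut = max_message_length  # no newline found: fall back to a hard cut
        parts.append(text[:cut])
        text = text[cut:].lstrip('\n')
    parts.append(text)
    return parts
```

&lt;p&gt;Every chunk stays within Telegram's 4096-character limit, and messages that already fit are returned unchanged as a single-element list.&lt;/p&gt;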



&lt;p&gt;The bot starts with a typical sequence of steps, assigning two functions to be triggered by the /start command and receiving a personal message from the user:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bot = telegram.Bot(token=token)
updater = Updater(token=token, use_context=True)
dispatcher = updater.dispatcher

dispatcher.add_handler(CommandHandler("start", start, filters=Filters.chat_type.private))
dispatcher.add_handler(MessageHandler(~Filters.command &amp;amp; Filters.text, message_handler))

updater.start_polling()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code to respond to /start is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def start(update, context):
    user = update.effective_user
    context.bot.send_message(chat_id=user.id, text=start_message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Responding to a free-form message is less straightforward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Firstly&lt;/strong&gt;, to avoid one user's request blocking another's, let's immediately hand each message off to its own thread using the &lt;em&gt;threading&lt;/em&gt; library.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def message_handler(update, context):

    thread = threading.Thread(target=long_running_task, args=(update, context))
    thread.start()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Secondly&lt;/strong&gt;, all the logic will happen inside the &lt;strong&gt;long_running_task&lt;/strong&gt; function. I intentionally wrapped the main fragments in &lt;em&gt;try/except&lt;/em&gt; to easily localize errors when modifying the bot's code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, we retrieve the message and handle the error if the user sends a file or image instead of a message.&lt;/li&gt;
&lt;li&gt;Then, we search for the most similar questions-answers using &lt;strong&gt;search_similar&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;After that, we collect all the questions-answers into one text using &lt;strong&gt;collect_text_qa&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;And we generate the final prompt for the ChatGPT API using &lt;strong&gt;collect_full_prompt&lt;/strong&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def long_running_task(update, context):
    user = update.effective_user
    context.bot.send_message(chat_id=user.id, text='🕰️⏰🕙⏱️⏳...')

    try:
        question = update.message.text.strip()
    except Exception as e:
        context.bot.send_message(chat_id=user.id,
                                 text=f"🤔It seems like you're sending not text to the bot. Currently, the bot can only work with text requests.")
        return

    try:
        qa_found = search_similar(df_qa, question, n=num_top_qa)
        qa_prompt = collect_text_qa(qa_found)
        full_prompt = collect_full_prompt(question, qa_prompt)
    except Exception as e:
        context.bot.send_message(chat_id=user.id,
                                 text=f"Search failed. Debug needed.")
        return
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since errors may occur when you swap in your own knowledge base and topic (for example, due to formatting issues), a human-readable error message is displayed.&lt;/p&gt;

&lt;p&gt;Next, the request is sent to the ChatGPT API with a leading system message that has already proven itself: "&lt;em&gt;You are a helpful assistant.&lt;/em&gt;" The resulting output is divided into multiple messages if necessary and sent back to the user.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;try:
        print(full_prompt)
        completion = call_chatgpt(
            model=model,
            n=1,
            messages=[{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": full_prompt}]
        )
        result = completion['choices'][0]['message']['content']
    except Exception as e:
        context.bot.send_message(chat_id=user.id,
                                 text=f'It seems like the OpenAI service is responding with errors. Try sending the request again.')
        return

    parts = telegram_message_format(result)
    for part in parts:
        update.message.reply_text(part, reply_to_message_id=update.message.message_id)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That concludes the part with the code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prototype
&lt;/h2&gt;

&lt;p&gt;Now, a prototype of such a bot is available in a limited format at the following &lt;a href="https://t.me/SubgraphGPT_bot" rel="noopener noreferrer"&gt;link&lt;/a&gt;. As the API is paid, you can make up to 3 requests per day, but I don't think it will limit anyone, as the most interesting thing is not a specialized bot focused on a narrow topic, but the code of the &lt;strong&gt;AnythingGPT&lt;/strong&gt; project, which is available on &lt;a href="https://github.com/balakhonoff/AnythingGPT" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; with a short instruction on how to create your own bot to solve your specific task with your knowledge base based on this example.  Feel free to fork, contribute or just support the project with a star on &lt;a href="https://github.com/balakhonoff/AnythingGPT" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. Thank you for your attention and I hope this article has been helpful to you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3djogkvzeeou6ag46gl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3djogkvzeeou6ag46gl.png" alt="Screenshot of daily communication with a bot"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>chatgpt</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Web3 Indexing: The Ultimate Guide (No Prior Knowledge Required)</title>
      <dc:creator>Kirill Balakhonov</dc:creator>
      <pubDate>Mon, 24 Jul 2023 19:30:09 +0000</pubDate>
      <link>https://dev.to/balakhonoff/web3-indexing-the-ultimate-guide-no-prior-knowledge-required-1imb</link>
      <guid>https://dev.to/balakhonoff/web3-indexing-the-ultimate-guide-no-prior-knowledge-required-1imb</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7JogYol3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fcukn6a0gy7p8pr5g7du.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7JogYol3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fcukn6a0gy7p8pr5g7du.png" alt="''" width="800" height="522"&gt;&lt;/a&gt;&lt;br&gt;
It’s hard to say that data engineering culture is deeply ingrained in the Web3 developer community, and not every developer can easily say what indexing means in the context of Web3. I would like to clarify a few details on this topic and talk about The Graph, which has become the de facto industry standard for accessing on-chain data for DApp builders.&lt;/p&gt;

&lt;p&gt;Let’s start with indexing.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Indexing in databases is the process of creating a data structure that sorts and organizes the data in a database in such a way that search queries can be executed efficiently. By creating an index on a database table, the database server can more quickly search and retrieve the data that matches the criteria specified in a query. This helps to improve the performance of the database and reduces the time it takes to retrieve information.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;But what about indexing in blockchains? The most popular blockchain architecture is EVM (&lt;a href="https://hackernoon.com/an-intro-to-the-ethereum-virtual-machine-evm?ref=hackernoon.com"&gt;Ethereum Virtual Machine&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Ethereum Virtual Machine (EVM) is a runtime environment that executes smart contracts on the Ethereum blockchain. It is a computer program that runs in every node on the Ethereum network. It is responsible for executing the code of smart contracts and also provides security features such as sandboxing and gas usage control. The EVM ensures that all participants on the Ethereum network can execute smart contracts in a consistent and secure way.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As you might know, data on the blockchain is stored as blocks with transactions inside. Also, you might know that there are two types of accounts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Externally owned account — described by any ordinary wallet address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contract account — described by any deployed smart contract address.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--37z6mxrM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fikdpxn2ghk82r18748.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--37z6mxrM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fikdpxn2ghk82r18748.png" alt="''" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you send some ether from your account to another externally owned account, nothing happens behind the scenes. But if you send ether to a smart contract address with a payload, you are actually running a method on that smart contract, which may in turn create "internal" transactions.&lt;/p&gt;

&lt;p&gt;Okay, if any transaction can be found on the blockchain, why not transform all the data into a big constantly updating database which can be queried in SQL-like format?&lt;/p&gt;

&lt;p&gt;The problem is that you can access the data of a smart contract only if you have a “key” to decipher it. Without this “key,” the data of smart contracts on the blockchain is actually a mess. This key is called ABI (Application Binary Interface).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;ABI (Application Binary Interface) is a standard that defines the way a smart contract communicates with the outside world, including other smart contracts and user interfaces. It defines the data structure, function signatures, and argument types of a smart contract to enable correct and efficient communication between the contract and its users.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Any smart contract on-chain has an ABI. The problem is that you might not have the ABI for the smart contract you are interested in. Sometimes you can find the ABI file (which is actually a JSON file listing the functions and variables of a smart contract, essentially an interface for communicating with it):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;on Etherscan (if the smart contract has been verified)&lt;/li&gt;
&lt;li&gt;on GitHub (if the developers open-sourced the project)&lt;/li&gt;
&lt;li&gt;or if a smart contract relates to any standard type like ERC-20, ERC-721, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, if you are a developer of a smart contract, you have the ABI, because it is generated while compiling.&lt;/p&gt;
&lt;h2&gt;
  
  
  What it looks like from the developer’s side
&lt;/h2&gt;

&lt;p&gt;But let’s not stop at the concept of ABI. What if we look at this topic from the smart contract developer side? What is a smart contract? The answer is much easier than you thought. Here is a simple explanation for anybody who is familiar with Object-oriented Programming:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A smart contract in the code of a developer is a class with fields and methods (for EVM-compatible chains smart contracts are usually written in Solidity). And the smart contract which has been deployed on-chain becomes an object of this class. So it lives its life allowing users to call its methods and change its internal fields.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What is worth highlighting is that any method call with the change in the state of a smart contract means a transaction which is usually followed by an event that a developer emits right from the code. Let’s illustrate a function call of the ERC-721 (a usual standard for non-fungible token collections like BoredApeYachtClub) smart contract which emits an event while transferring ownership of an NFT.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/**
     * @dev Transfers `tokenId` from `from` to `to`.
     *  As opposed to {transferFrom}, this imposes no restrictions on msg.sender.
     *
     * Requirements:
     *
     * - `to` cannot be the zero address.
     * - `tokenId` token must be owned by `from`.
     *
     * Emits a {Transfer} event.
     */
    function _transfer(address from, address to, uint256 tokenId) internal virtual {
        address owner = ownerOf(tokenId);
        if (owner != from) {
            revert ERC721IncorrectOwner(from, tokenId, owner);
        }
        if (to == address(0)) {
            revert ERC721InvalidReceiver(address(0));
        }

        _beforeTokenTransfer(from, to, tokenId, 1);

        // Check that tokenId was not transferred by `_beforeTokenTransfer` hook
        owner = ownerOf(tokenId);
        if (owner != from) {
            revert ERC721IncorrectOwner(from, tokenId, owner);
        }

        // Clear approvals from the previous owner
        delete _tokenApprovals[tokenId];

        // Decrease balance with checked arithmetic, because an `ownerOf` override may
        // invalidate the assumption that `_balances[from] &amp;gt;= 1`.
        _balances[from] -= 1;

        unchecked {
            // `_balances[to]` could overflow in the conditions described in `_mint`. That would require
            // all 2**256 token ids to be minted, which in practice is impossible.
            _balances[to] += 1;
        }

        _owners[tokenId] = to;

        emit Transfer(from, to, tokenId);

        _afterTokenTransfer(from, to, tokenId, 1);
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What can we see here? To transfer an NFT from your address to another address, you call the function _transfer, passing the two addresses and the ID of the NFT. The code performs several checks and then updates the users' balances. But the important thing is that at the end of the function there is the line&lt;/p&gt;

&lt;p&gt;&lt;code&gt;emit Transfer(from, to, tokenId);&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;It means that these three values will be “emitted” outside and can be found in the logs of the blockchain. It is much more efficient to save the historical data you need this way because it is too expensive to store data right on the blockchain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now we’ve defined all the concepts needed to show what indexing is.
&lt;/h2&gt;

&lt;p&gt;Given that any smart contract (an object of some class) lives its life constantly being called by users (and other smart contracts) and changing its state (emitting events along the way), we can define indexing as the process of collecting a smart contract's data (any of its internal variables, not only those emitted explicitly) over its lifetime, saving that data together with transaction hashes and block numbers so that any detail can be looked up in the future.&lt;/p&gt;

&lt;p&gt;This is crucial to note, because it is simply impossible to retrieve, for instance, the first transaction of wallet "A" with token "B", or the biggest transaction in smart contract "C" (or anything similar), if the smart contract doesn't store this data explicitly (which, as we know, is prohibitively expensive).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s why we need indexing: the simple things we can do in an SQL database become impossible on the blockchain without it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In other words “indexing” here is a synonym for smart contract data collection because no indexing means no data access in Web3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How did developers do indexing in the past? They did it from scratch:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;They write high-performance code in fast programming languages like Go, Rust, etc.&lt;/li&gt;
&lt;li&gt;They set up a database to store the data.&lt;/li&gt;
&lt;li&gt;They set up an API to make the data accessible from an application.&lt;/li&gt;
&lt;li&gt;They spin up an archival blockchain node.&lt;/li&gt;
&lt;li&gt;In the first stage, they go over the entire blockchain finding all the transactions related to a particular smart contract.&lt;/li&gt;
&lt;li&gt;They process these transactions by storing new entities and refreshing existing entities in the database.&lt;/li&gt;
&lt;li&gt;When they reach the chain head they need to switch to a more complex mode to process new transactions because each new block (even a chain of blocks) can be rejected due to a chain reorganization.&lt;/li&gt;
&lt;li&gt;If the chain has been reorganized they need to get back to the fork block and recalculate everything to the new chain head.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you can see, this is not easy to develop, and it is even harder to maintain in real time, because each node glitch can require extra steps to restore data consistency. That’s actually the reason &lt;a href="https://thegraph.com/"&gt;The Graph&lt;/a&gt; appeared. The simple idea is that developers, along with end users, need easy access to smart contract data without all this hassle.&lt;/p&gt;

&lt;p&gt;The Graph project defined a paradigm called a “subgraph”: to extract smart contract data from the blockchain, you need to describe three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;General parameters like what blockchain to use, what smart contract address to index, what events to handle, and from what start block to begin. These variables are defined in a so-called “manifest” file.&lt;/li&gt;
&lt;li&gt;How to store the data. What tables should be created in a database to keep the data from a smart contract? The answer will be found in the “schema” file.&lt;/li&gt;
&lt;li&gt;How to collect the data. Which variables should be saved from events, and what accompanying data (like transaction hash, block number, a result of other method calls, etc.) should be also collected, and how do they need to be put into the schemas we defined?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;These three things can be elegantly defined in the three following files:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;subgraph.yaml — manifest file&lt;/li&gt;
&lt;li&gt;schema.graphql — schema description&lt;/li&gt;
&lt;li&gt;mapping.ts — AssemblyScript file&lt;/li&gt;
&lt;/ol&gt;
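&lt;p&gt;To make this more concrete, here is a hypothetical minimal schema.graphql for the ERC-721 Transfer event from the earlier example (the entity and field names are illustrative, not taken from a real subgraph):&lt;/p&gt;

```graphql
type Transfer @entity {
  id: ID!                 # typically the transaction hash plus the log index
  from: Bytes!            # sender address
  to: Bytes!              # receiver address
  tokenId: BigInt!
  blockNumber: BigInt!
  transactionHash: Bytes!
}
```

&lt;p&gt;The mapping.ts handlers then create and save one such entity per emitted Transfer event, and GraphQL queries against this table become possible once the subgraph is synced.&lt;/p&gt;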

&lt;p&gt;Thanks to this standard it is extremely easy to describe the entire indexing following any of these tutorials:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.chainstack.com/docs/subgraphs-tutorial-a-beginners-guide-to-getting-started-with-the-graph"&gt;A beginner’s guide to getting started with The Graph&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.chainstack.com/docs/subgraphs-tutorial-working-with-schemas"&gt;Explaining Subgraph schemas&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;And how does it look like then:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3I1krtFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9jev6ofbliw0zo9b0c0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3I1krtFI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9jev6ofbliw0zo9b0c0.png" alt="''" width="732" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see here The Graph takes care of the indexing stuff. But you still need to run a &lt;a href="https://github.com/graphprotocol/graph-node"&gt;graph-node&lt;/a&gt; (which is open-source software by The Graph). And here goes another paradigm shift.&lt;/p&gt;

&lt;p&gt;Just as developers in the past ran their own blockchain nodes and eventually handed that hassle over to blockchain node providers, The Graph introduced another architectural simplification: a hosted service, which looks like this from the developer’s (“user’s” here) perspective:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EzxCnlwb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dzoemw3pj3gatjoi1ldj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EzxCnlwb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dzoemw3pj3gatjoi1ldj.png" alt="''" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, the user (or developer) doesn’t need to run their own indexer or graph node but still can control all the algorithms and not even get into a vendor lock, because different providers use the same Graph description format (&lt;a href="https://chainstack.com/subgraphs/"&gt;Chainstack&lt;/a&gt; is fully compatible with The Graph subgraph hosting, but it is worth checking this statement with your web3 infrastructure provider). And this is a big deal because it helps developers speed up the development process and reduce operational maintenance costs.&lt;/p&gt;

&lt;p&gt;But what is also cool in this paradigm is that any time a developer would like to make their application truly decentralized they can seamlessly migrate to The Graph decentralized network using the same subgraphs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I missed in the previous narrative.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;As you may notice, The Graph uses GraphQL instead of a REST API. It allows users to run flexible queries against any tables they created, combining and filtering them with ease. Here is a good &lt;a href="https://www.youtube.com/watch?v=ZQL7tL2S0oQ&amp;amp;ab_channel=WebDevSimplified"&gt;video&lt;/a&gt; on how to master it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Graph has its own &lt;a href="https://thegraph.com/hosted-service/"&gt;hosted service&lt;/a&gt; with a lot of ready-to-use subgraphs. It is free, but unfortunately doesn’t fit any production requirements (reliability, SLA, support), and syncing is much slower than paid solutions but still can be used for development. The tutorial on how to use these ready-to-use subgraphs with Python can be found &lt;a href="https://hackernoon.com/accessing-real-time-smart-contract-data-from-python-code-using-lido-contract-as-an-example"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>web3</category>
      <category>smartcontract</category>
      <category>solidity</category>
      <category>ethereum</category>
    </item>
    <item>
      <title>How to Create a Telegram Bot to Monitor Your Service Uptime in Python (Part 1: Instant Metrics)</title>
      <dc:creator>Kirill Balakhonov</dc:creator>
      <pubDate>Mon, 24 Jul 2023 19:12:55 +0000</pubDate>
      <link>https://dev.to/balakhonoff/how-to-create-a-telegram-bot-to-monitor-your-service-uptime-in-python-part-1-instant-metrics-5a19</link>
      <guid>https://dev.to/balakhonoff/how-to-create-a-telegram-bot-to-monitor-your-service-uptime-in-python-part-1-instant-metrics-5a19</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9j2L1Gez--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxb4qvwvc1gpwmfx8olz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9j2L1Gez--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jxb4qvwvc1gpwmfx8olz.png" alt="''" width="800" height="428"&gt;&lt;/a&gt;&lt;br&gt;
Hello everyone! For several years now, I have been writing various "assistant" telegram bots for myself in Python that handle various small routine tasks for me - notifying me about something, checking service uptime, forwarding interesting content from telegram channels and chats, and so forth.&lt;/p&gt;

&lt;p&gt;This is convenient because the phone is always at hand, and being able to fix something on the server without even opening my laptop brings me particular pleasure.&lt;/p&gt;

&lt;p&gt;In general, I have accumulated a lot of different small project templates that I want to share with dev.to readers.&lt;/p&gt;

&lt;p&gt;I'll say right away that the examples may be niche in terms of their application "as is", but I will mark those places where, by changing a few lines of code to your own, you will be able to reuse most of the developments for your projects.&lt;/p&gt;

&lt;p&gt;I completed this specific project a few days ago, and it has already brought me a lot of benefits. I work at a Web3 infrastructure provider chainstack.com, dealing with a service for indexing data from smart contracts on EVM blockchains.&lt;/p&gt;

&lt;p&gt;And the quality of the service being developed critically depends on how "well" the nodes from which the service retrieves data online are functioning.&lt;/p&gt;

&lt;p&gt;I spent many hours trying the ready-made tools that our infrastructure division uses, such as Grafana and BetterUptime, but since I care less about the system's internals than about the metrics at the input and the output, I decided to write my own bot that would do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On request, check the service's metrics and send me a brief report on the current situation.&lt;/li&gt;
&lt;li&gt;On another command, send me graphs of what has happened over the last X hours.&lt;/li&gt;
&lt;li&gt;When something unusual occurs, send me a notification that it is happening right now.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, I will focus on the first part, that is, receiving metrics on request.&lt;/p&gt;

&lt;p&gt;We will need a new virtual environment for work.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~ 
virtualenv -p python3.8 up_env  # create a virtualenv
source ~/up_env/bin/activate  # activate the virtualenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install python-telegram-bot
pip install "python-telegram-bot[job-queue]" --pre
pip install --upgrade python-telegram-bot==13.6.0  # the code targets the pre-v20 API, so the version is pinned explicitly

pip install numpy # needed for the median value function
pip install web3 # needed for requests to nodes (replace with what you need)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now for functions.py, the file with the helper functions. (You could implement this with classes, but since the example is short, I didn't split it into modules; however, the multiprocessing library requires worker functions to live in a separate, importable file.) Import the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import multiprocessing

from web3 import Web3  # add the libraries needed for your task
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's describe the state-checking function. In my case, it loops through pre-selected public nodes, retrieves their latest block numbers, takes the median to filter out outliers, and then checks our own node against this median.&lt;/p&gt;

&lt;p&gt;Service state checking function (you can replace it with your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Helper function that checks a single node
def get_last_block_once(rpc):
    try:
        w3 = Web3(Web3.HTTPProvider(rpc))
        block_number = w3.eth.block_number
        if isinstance(block_number, int):
            return block_number
        else:
            return None
    except Exception as e:
        print(f'{rpc} - {repr(e)}')
        return None


# Main function to check the status of the service that will be called
def check_service():
    # pre-prepared list of reference nodes;
    # for any network, such a list can be found at https://chainlist.org/
    list_of_public_nodes = [
        'https://polygon.llamarpc.com',
        'https://polygon.rpc.blxrbdn.com',
        'https://polygon.blockpi.network/v1/rpc/public',
        'https://polygon-mainnet.public.blastapi.io',
        'https://rpc-mainnet.matic.quiknode.pro',
        'https://polygon-bor.publicnode.com',
        'https://poly-rpc.gateway.pokt.network',
        'https://rpc.ankr.com/polygon',
        'https://polygon-rpc.com'
    ]

    # parallel processing of requests to all nodes
    with multiprocessing.Pool(processes=len(list_of_public_nodes)) as pool:
        results = pool.map(get_last_block_once, list_of_public_nodes)
        last_blocks = [b for b in results if b is not None]  # the helper already guarantees ints

    # define the maximum and median value of the current block
    med_val = int(np.median(last_blocks))
    max_val = int(np.max(last_blocks))
    # determine the number of nodes with the maximum and median value
    med_support = last_blocks.count(med_val)
    max_support = last_blocks.count(max_val)

    return max_val, max_support, med_val, med_support
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
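&lt;p&gt;To see what the aggregation at the end of check_service produces, here is the same median/support computation run offline on a hypothetical list of block heights (no RPC calls involved, so the numbers are made up):&lt;/p&gt;

```python
import numpy as np

# block heights as nine public RPCs might report them,
# with a couple of nodes lagging slightly behind the rest
last_blocks = [45102331, 45102331, 45102330, 45102331, 45102329,
               45102331, 45102331, 45102327, 45102331]

med_val = int(np.median(last_blocks))
max_val = int(np.max(last_blocks))
med_support = last_blocks.count(med_val)  # nodes agreeing on the median
max_support = last_blocks.count(max_val)  # nodes at the chain tip

print(med_val, med_support)  # 45102331 6
print(max_val, max_support)  # 45102331 6
```

Counting how many nodes agree on the median gives a rough confidence measure for the reference value: the more RPCs that report it, the more trustworthy the comparison with our own node.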



&lt;p&gt;The next important file of the bot is uptime_bot.py. We import libraries and functions from the file above and set the necessary constants:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import telegram
from telegram.ext import Updater, CommandHandler, Filters

from functions import get_last_block_once, check_service

# Here you can restrict the bot to a limited circle of users
# by listing their usernames

ALLOWED_USERS = ['your_telegram_account', 'someone_else']
# The address of the node that I am monitoring (also a public node in this case)
OBJECT_OF_CHECKING = 'https://polygon-mainnet.chainstacklabs.com'
# Threshold for highlighting critical lag
THRESHOLD = 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let's describe a function that will be called when the command is issued from the bot's UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def start(update, context):
    """Send a message when the command /start is issued."""

    try:
        # Get the user
        user = update.effective_user

        # Filter out bots
        if user.is_bot:
            return

        # Check if the user is allowed
        username = str(user.username)
        if username not in ALLOWED_USERS:
            return
    except Exception as e:
        print(f'{repr(e)}')
        return

    # Call the main function to check the network status
    max_val, max_support, med_val, med_support = check_service()
    # Call the function to check the status of the specified node
    last_block = get_last_block_once(OBJECT_OF_CHECKING)

    # Create the message to send to Telegram
    message = ""

    # Information about the state of the nodes in the public network (median, maximum, and number of nodes)
    message += f"Public median block number {med_val} (on {med_support} RPCs)\n"
    message += f"Public maximum block number +{max_val - med_val} (on {max_support} RPCs)\n"

    # Compare with the threshold
    if last_block is not None:
        out_text = str(last_block - med_val) if last_block - med_val &amp;lt; 0 else '+' + str(last_block - med_val)

        if abs(last_block - med_val) &amp;gt; THRESHOLD:
            message += f"The node block number shift ⚠️&amp;lt;b&amp;gt;{out_text}&amp;lt;/b&amp;gt;⚠️"
        else:
            message += f"The node block number shift {out_text}"
    else:  # handle the case where the node has not responded
        message += "The node has ⚠️&amp;lt;b&amp;gt;not responded&amp;lt;/b&amp;gt;⚠️"

    # Send the message to the user
    context.bot.send_message(chat_id=user.id, text=message, parse_mode="HTML")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
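&lt;p&gt;The message-formatting branch of the handler can be isolated into a small pure function, which makes the threshold logic easy to test on its own. This is just a sketch: the HTML bold markup is omitted for brevity, and the name format_shift is my own, not part of the bot above:&lt;/p&gt;

```python
THRESHOLD = 5  # same lag threshold as in uptime_bot.py


def format_shift(last_block, med_val):
    """Render the node's lag relative to the public median,
    flagging it when the absolute shift exceeds THRESHOLD."""
    if last_block is None:
        return "The node has ⚠️not responded⚠️"
    shift = last_block - med_val
    out_text = f"{shift:+d}"  # always signed: "+2", "-7", "+0"
    if abs(shift) > THRESHOLD:
        return f"The node block number shift ⚠️{out_text}⚠️"
    return f"The node block number shift {out_text}"


print(format_shift(45102324, 45102331))  # lag of 7 blocks, flagged
print(format_shift(45102330, 45102331))  # lag of 1 block, within threshold
```

The `{shift:+d}` format specifier also replaces the manual `'+' + str(...)` sign handling in the handler.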



&lt;p&gt;Now, all that's left is to add the part where the bot is initialized, and the handler function is connected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;token = "xxx"  # Bot token obtained from BotFather

# set up the bot
bot = telegram.Bot(token=token)
updater = Updater(token=token, use_context=True)
dispatcher = updater.dispatcher

# bind the handler function
dispatcher.add_handler(CommandHandler("start", start, filters=Filters.chat_type.private))

# run the bot
updater.start_polling()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
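&lt;p&gt;As a hypothetical hardening of the initialization above, the token can be read from an environment variable instead of being hardcoded in the source (the variable name UPTIME_BOT_TOKEN is my own choice):&lt;/p&gt;

```python
import os

# Read the BotFather token from the environment so the secret
# stays out of the source file and out of version control.
token = os.environ.get("UPTIME_BOT_TOKEN", "")  # empty string when unset
if not token:
    print("warning: UPTIME_BOT_TOKEN is not set; the bot cannot start")
```

Then export the variable once on the server (for example, in the systemd unit or in ~/.bashrc) and the script picks it up at startup.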



&lt;p&gt;Finally, you can run the code on a cheap VPS with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source ~/up_env/bin/activate
python uptime_bot.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, after configuring a systemd unit file, leave it running in the background as a service.&lt;/p&gt;
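&lt;p&gt;For reference, a minimal systemd unit for this bot might look like the following; the paths, user name, and unit name are assumptions to adapt to your server:&lt;/p&gt;

```ini
[Unit]
Description=Telegram uptime bot
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu
ExecStart=/home/ubuntu/up_env/bin/python /home/ubuntu/uptime_bot.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/uptime_bot.service and enable it with systemctl enable --now uptime_bot.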

&lt;p&gt;As a result, the bot in action looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If everything is fine:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KMenvISK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4riwchis7y22gx9pgmyp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KMenvISK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4riwchis7y22gx9pgmyp.png" alt="''" width="798" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;And if the lag becomes too large:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K6xMHBFp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y1ghu6hx15fy79v4ipf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K6xMHBFp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y1ghu6hx15fy79v4ipf5.png" alt="''" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the following articles, I will describe how to implement the two remaining tasks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Retrieve graphs on request showing the events that occurred over the last X hours.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Receive an alert indicating that something is currently happening and requires action.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The project's source code is available in the GitHub &lt;a href="https://github.com/balakhonoff/rpc_node_telegram_checker"&gt;repository&lt;/a&gt;. If you found this tutorial helpful, feel free to give it a star on GitHub; I would appreciate it 🙂&lt;/p&gt;

</description>
      <category>python</category>
      <category>web3</category>
      <category>linux</category>
      <category>telegram</category>
    </item>
  </channel>
</rss>
