<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: kobaltGIT</title>
    <description>The latest articles on DEV Community by kobaltGIT (@k_b_ffab23f94d5978f17ea53).</description>
    <link>https://dev.to/k_b_ffab23f94d5978f17ea53</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3344984%2Fa23aff90-8e94-4017-a38a-2f50ff1face3.jpg</url>
      <title>DEV Community: kobaltGIT</title>
      <link>https://dev.to/k_b_ffab23f94d5978f17ea53</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/k_b_ffab23f94d5978f17ea53"/>
    <language>en</language>
    <item>
      <title>I Built a Telegram Bot That Remembers Everything But Knows Nothing About You</title>
      <dc:creator>kobaltGIT</dc:creator>
      <pubDate>Tue, 29 Jul 2025 13:37:12 +0000</pubDate>
      <link>https://dev.to/k_b_ffab23f94d5978f17ea53/i-built-a-telegram-bot-that-remembers-everything-but-knows-nothing-about-you-n95</link>
      <guid>https://dev.to/k_b_ffab23f94d5978f17ea53/i-built-a-telegram-bot-that-remembers-everything-but-knows-nothing-about-you-n95</guid>
      <description>&lt;p&gt;&lt;strong&gt;How I evolved a simple open-source BYOK bot into a Zero-Knowledge AI assistant with long-term memory using Python, asyncio, and cryptography.&lt;/strong&gt;&lt;br&gt;
When you use a Telegram bot to access an AI model, who do you actually trust? The answer is more complex than it seems. The chain of trust has at least three links: Telegram itself, a corporation like Google or OpenAI, and—the most unpredictable link—the third-party bot developer. They stand directly between you, your API key, and your entire conversation history.&lt;/p&gt;

&lt;p&gt;This question bothered me when I built my first open-source project, &lt;a href="https://t.me/mgem_bot" rel="noopener noreferrer"&gt;MyGemini&lt;/a&gt;. It's a simple Telegram bot that works on a BYOK (Bring Your Own Key) model, giving users full control over their API costs. It was a step in the right direction, but it only solved part of the problem. Even with open-source code, a "weakest link" remained: me, the server administrator. Users had to trust not just the code, but also the person running it.&lt;/p&gt;

&lt;p&gt;On top of that, I was frustrated by the fundamental "amnesia" of AI models. Every new conversation started from scratch.&lt;/p&gt;

&lt;p&gt;What if we could just remove that weak link from the equation? Not with promises, but with cryptography. What if we could build a system where even the service creator is technically incapable of seeing your data, and that system could remember everything?&lt;/p&gt;

&lt;p&gt;That's the idea behind &lt;a href="https://t.me/mgemz_bot" rel="noopener noreferrer"&gt;MyGemini Zero&lt;/a&gt;. It's not just an update; it's my answer to both of these problems.&lt;/p&gt;

&lt;h3&gt;The Big Question: Why Pay if I'm Still Using My Own API Key?&lt;/h3&gt;

&lt;p&gt;This is the right question to ask. And yes, &lt;strong&gt;MyGemini Zero still operates on the BYOK model&lt;/strong&gt;—you use your personal Google AI API key. You aren't paying for AI access (that's yours already). You're paying for the sophisticated infrastructure and unique features built on top of it.&lt;/p&gt;

&lt;p&gt;Here’s what that means in practice.&lt;/p&gt;

&lt;h4&gt;1. Long-Term Memory: Your Personal "Second Brain"&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;The Problem:&lt;/strong&gt; Standard chatbots are great performers but poor partners. They don't remember what you discussed yesterday.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;The Solution:&lt;/strong&gt; I integrated a personal vector database (ChromaDB + LangChain) into the bot. Now, every message you send and every file (&lt;code&gt;.txt&lt;/code&gt;, &lt;code&gt;.md&lt;/code&gt;) you upload becomes a "memory." When you ask a new question, the bot performs a semantic search on your past interactions to find relevant context and injects it into the prompt.&lt;/li&gt;
&lt;/ul&gt;
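&lt;p&gt;The retrieve-then-inject loop above can be sketched in a few lines. This is a toy stand-in, not the bot's actual code: the real implementation uses ChromaDB and LangChain embeddings, while the bag-of-words vectors here just keep the example self-contained.&lt;/p&gt;

```python
# Toy sketch of the "memory" flow: embed each stored message, rank
# memories by similarity to a new question, and inject the best matches
# into the prompt. Bag-of-words vectors stand in for real embeddings.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Crude stand-in for a sentence-embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def recall(memories: list[str], question: str, k: int = 2) -> list[str]:
    # Semantic search over past interactions, as described above.
    q = embed(question)
    ranked = sorted(memories, key=lambda m: cosine(embed(m), q), reverse=True)
    return ranked[:k]

memories = [
    "Project Quantum risks: vendor lock-in and unclear licensing.",
    "Marketing article takeaway: retention matters more than acquisition.",
    "Grocery list: eggs, milk, bread.",
]
context = recall(memories, "What were the main risks in Project Quantum?", k=1)
prompt = "Relevant memories:\n" + "\n".join(context) + "\n\nQuestion: ..."
```

&lt;p&gt;The top-ranked memories are prepended to the prompt before it is sent to the model.&lt;/p&gt;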

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;In practice:&lt;/strong&gt; You can ask, "What were the key takeaways from that marketing article I uploaded last week?" or "Remind me of the main risks in 'Project Quantum' we talked about," and the bot will understand.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;2. Privacy: Removing the Weakest Link&lt;/h4&gt;

&lt;p&gt;When using a third-party bot, your chain of trust looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Telegram&lt;/strong&gt; (with your messages)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Google/OpenAI&lt;/strong&gt; (with your prompts)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Bot Developer&lt;/strong&gt; (with your API key and conversation history)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That third point is the most unpredictable vulnerability.&lt;/p&gt;

&lt;p&gt;My solution was to design a &lt;strong&gt;Zero-Knowledge Architecture&lt;/strong&gt; to remove myself from the trust equation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  All your data—&lt;strong&gt;including your API key and chat history&lt;/strong&gt;—is encrypted with a master password that &lt;strong&gt;only you know&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  The decryption key is generated on-the-fly in memory from your password and your personal salt. It disappears when your session expires. As the admin, I never see it and don't have it.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;The Result:&lt;/strong&gt; It is &lt;strong&gt;cryptographically impossible&lt;/strong&gt; for me to read your conversations or access your API key. If you forget your password, your data is gone forever because no one can recover it. It's a trade-off: replacing trust in a person with a mathematical guarantee.&lt;/li&gt;
&lt;/ul&gt;
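&lt;p&gt;The derivation step is the heart of that guarantee, and it can be sketched with nothing but the standard library. This is an illustration, not the bot's actual code: the real project uses the &lt;code&gt;cryptography&lt;/code&gt; package, and the salt size and iteration count here are arbitrary choices for the example.&lt;/p&gt;

```python
# Minimal sketch of on-the-fly key derivation: the key is recomputed
# in memory from the password and a per-user salt, and never stored.
# Parameters (salt size, iteration count) are illustrative only.
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the server keeps only the salt and ciphertext.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = os.urandom(16)  # per-user salt, safe to store server-side
key = derive_key("correct horse battery staple", salt)
wrong = derive_key("admin guess", salt)
assert key != wrong  # without the password, the key cannot be reproduced
```

&lt;p&gt;The server only ever stores the salt and the ciphertext; without the password there is nothing to derive the key from, which is also why a forgotten password is unrecoverable.&lt;/p&gt;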

&lt;h4&gt;3. Personalization and Convenience&lt;/h4&gt;

&lt;p&gt;On top of memory and security, you get a stable, turnkey platform. You don't have to deal with servers, databases, or maintenance. You save dozens of hours on technical chores and just use the product.&lt;/p&gt;

&lt;h3&gt;The Three Paths — The BYOK Philosophy Lives On&lt;/h3&gt;

&lt;p&gt;I believe in freedom of choice, so the ecosystem offers three paths, all built on the BYOK principle:&lt;/p&gt;

&lt;blockquote&gt;
&lt;h3&gt;🚀 Hosted Service (Paid): &lt;a href="https://t.me/mgemz_bot" rel="noopener noreferrer"&gt;MyGemini Zero&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;Use your API key and get all the advanced features (memory, encryption) without any technical overhead. (Subscription required).&lt;/p&gt;
&lt;h3&gt;🆓 Free BYOK Alternative: &lt;a href="https://t.me/mgem_bot" rel="noopener noreferrer"&gt;MyGemini (classic)&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;It's not going anywhere. Use your API key for a convenient interface to Gemini, but without the long-term memory and Zero-Knowledge features.&lt;/p&gt;
&lt;h3&gt;💻 Ultimate Privacy (BYOK + Self-Host): &lt;a href="https://github.com/kobaltgit/MyGemini_Zero" rel="noopener noreferrer"&gt;Source Code on GitHub&lt;/a&gt;&lt;/h3&gt;

&lt;p&gt;For developers and privacy enthusiasts. The full source code for MyGemini Zero is on GitHub. You can deploy it on your own server for free. This is the ultimate form of BYOK—you control both the key and the code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;What You Get with MyGemini Zero&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;🧠 Long-Term Memory:&lt;/strong&gt; Remembers context across all your conversations and uploaded files (&lt;code&gt;/memorize&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;🔐 Zero-Knowledge Architecture:&lt;/strong&gt; Cryptographically secures your API key and data with a master password only you know.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;🗂️ Multi-Context Dialogs:&lt;/strong&gt; Create isolated chats for different topics so contexts don't get mixed up.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;📄 Data Management:&lt;/strong&gt; Archive old messages into summaries or completely wipe your memory.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;🖼️ Image Analysis &amp;amp; 🌐 Web Search:&lt;/strong&gt; All the powerful features of Google Gemini.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Under the Hood&lt;/h3&gt;

&lt;p&gt;The project is built on a modern async Python stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Core:&lt;/strong&gt; &lt;code&gt;Python 3.10+&lt;/code&gt;, &lt;code&gt;asyncio&lt;/code&gt;, &lt;code&gt;pyTelegramBotAPI (async)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Security:&lt;/strong&gt; &lt;code&gt;Cryptography&lt;/code&gt;, &lt;code&gt;bcrypt&lt;/code&gt;, &lt;code&gt;PBKDF2&lt;/code&gt; for the Zero-Knowledge implementation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Memory:&lt;/strong&gt; &lt;code&gt;LangChain&lt;/code&gt; &amp;amp; &lt;code&gt;ChromaDB&lt;/code&gt; for vector operations, &lt;code&gt;SQLite&lt;/code&gt; for storing encrypted data.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;License: AGPLv3.&lt;/strong&gt; This is an important distinction from the previous project's MIT license. The AGPL ensures that if someone uses this code to run a public service, they must also share their modifications. It keeps the project truly open and prevents it from being absorbed into closed-source commercial products.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Conclusion&lt;/h3&gt;

&lt;p&gt;MyGemini Zero was the logical next step for me. It's an attempt to build not just another AI interface, but a truly personal and private assistant where trust in the developer is replaced by cryptography.&lt;/p&gt;

&lt;p&gt;I'd be thrilled if you gave it a try. Any feedback, ideas, and of course, stars on &lt;a href="https://github.com/kobaltgit/MyGemini_Zero" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; are the best reward for the work I've put in.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>python</category>
      <category>telegram</category>
      <category>opensource</category>
      <category>privacy</category>
    </item>
    <item>
      <title>I built a self-hosted Telegram AI assistant to keep my Gemini API key private. It's now open-source for everyone.</title>
      <dc:creator>kobaltGIT</dc:creator>
      <pubDate>Sat, 19 Jul 2025 14:23:29 +0000</pubDate>
      <link>https://dev.to/k_b_ffab23f94d5978f17ea53/i-built-a-self-hosted-telegram-ai-assistant-to-keep-my-gemini-api-key-private-its-now-open-source-50aj</link>
      <guid>https://dev.to/k_b_ffab23f94d5978f17ea53/i-built-a-self-hosted-telegram-ai-assistant-to-keep-my-gemini-api-key-private-its-now-open-source-50aj</guid>
      <description>&lt;h2&gt;
  
  
  The Problem: Where Does My Data Go?
&lt;/h2&gt;

&lt;p&gt;I'm a huge fan of Google's Gemini API, but I've always had one major concern when using third-party Telegram bots: &lt;strong&gt;"What happens to my API key and conversation history?"&lt;/strong&gt; Handing over a key linked to my billing account to an unknown, closed-source service is a huge risk.&lt;/p&gt;

&lt;p&gt;I wanted a solution where I was in complete control of my own data. Since I couldn't find a project that met my standards, I built it myself.&lt;/p&gt;

&lt;p&gt;Today, I'm excited to share &lt;strong&gt;MyGemini&lt;/strong&gt;. It's a Telegram bot that offers two distinct paths: one for users who prioritize maximum privacy and another for those who value convenience.&lt;/p&gt;




&lt;h2&gt;Two Paths: Choose Your Own&lt;/h2&gt;

&lt;h3&gt;Path 1: Maximum Privacy (Self-Host It)&lt;/h3&gt;

&lt;p&gt;For those who, like me, don't want to trust anyone with their data. The entire project is open-source under an MIT license.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&amp;gt;&amp;gt; Check out the code on GitHub:&lt;/strong&gt; &lt;a href="https://github.com/kobaltgit/MyGemini" rel="noopener noreferrer"&gt;https://github.com/kobaltgit/MyGemini&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can run an exact copy of this bot on &lt;strong&gt;your own server or even a local machine&lt;/strong&gt;. When you do this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Your API key &lt;strong&gt;never leaves&lt;/strong&gt; your system.&lt;/li&gt;
&lt;li&gt;  All conversation history is stored in &lt;strong&gt;your local&lt;/strong&gt; database.&lt;/li&gt;
&lt;li&gt;  You have &lt;strong&gt;100% control&lt;/strong&gt; over the code and your data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the ideal choice for developers and anyone serious about privacy.&lt;/p&gt;

&lt;h3&gt;Path 2: Convenience &amp;amp; Quick Start (Use My Public Bot)&lt;/h3&gt;

&lt;p&gt;If you're comfortable with open-source projects and just want to start using a powerful AI assistant right now, I'm hosting a public instance on my server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&amp;gt;&amp;gt; Try MyGemini Bot Now: &lt;a href="https://t.me/mgem_bot" rel="noopener noreferrer"&gt;https://t.me/mgem_bot&lt;/a&gt;&lt;/strong&gt; &amp;lt;&amp;lt;&lt;/p&gt;

&lt;p&gt;In this version, you &lt;strong&gt;will need to provide your Google AI API key&lt;/strong&gt;, but with a crucial difference: you're doing so with the knowledge that the bot is &lt;strong&gt;fully open-source&lt;/strong&gt;. You can go to the GitHub repo at any time and verify that the code does exactly what it promises: securely encrypts your key and uses it only for sending requests to Google.&lt;/p&gt;

&lt;h3&gt;Key Bot Features&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Multi-Dialog Context:&lt;/strong&gt; Create independent chats for different topics (work, hobbies, learning) so their contexts never get mixed up.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Custom "Personas":&lt;/strong&gt; Instantly transform the bot into a "Python Expert," a "Historian," or a "Copywriter" with a single click.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Image &amp;amp; Voice Understanding:&lt;/strong&gt; Send a picture or record a voice note—the bot will understand and respond.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Live Web Search:&lt;/strong&gt; Can use Google Search to provide up-to-date answers on current events.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;A Look Under the Hood (For the Tech-Savvy)&lt;/h2&gt;

&lt;p&gt;I built this with Python 3.10+ and focused on creating a clean and scalable async architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Async All The Way:&lt;/strong&gt; The bot is built on &lt;code&gt;asyncio&lt;/code&gt;, using the async version of &lt;code&gt;pyTelegramBotAPI&lt;/code&gt; and &lt;code&gt;aiohttp&lt;/code&gt; for non-blocking requests to the Gemini REST API.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Clean Configuration:&lt;/strong&gt; Secrets are managed via &lt;code&gt;.env&lt;/code&gt;, and there's a detailed &lt;code&gt;logging.yaml&lt;/code&gt; for granular control over logs.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Non-Blocking Database:&lt;/strong&gt; It uses a simple &lt;code&gt;sqlite3&lt;/code&gt; database, but all operations are run in a separate thread via &lt;code&gt;asyncio.to_thread&lt;/code&gt; to keep the event loop free.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Robust Message Handling:&lt;/strong&gt; I'm using &lt;code&gt;MarkdownTextSplitter&lt;/code&gt; from LangChain to safely chunk long messages and &lt;code&gt;telegramify-markdown&lt;/code&gt; to ensure everything is correctly escaped for Telegram's MarkdownV2 parser.&lt;/li&gt;
&lt;/ul&gt;
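&lt;p&gt;That &lt;code&gt;asyncio.to_thread&lt;/code&gt; pattern looks roughly like this. The table and handler names are made up for the example; only the pattern matches the project.&lt;/p&gt;

```python
# Sketch of the non-blocking database pattern: blocking sqlite3 calls
# run on a worker thread via asyncio.to_thread, so the event loop keeps
# handling Telegram updates. Schema and names are illustrative only.
import asyncio
import os
import sqlite3
import tempfile

def save_message(db_path: str, user_id: int, text: str) -> None:
    # Runs on a worker thread, so blocking here is fine.
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS messages (user_id INTEGER, text TEXT)"
        )
        conn.execute("INSERT INTO messages VALUES (?, ?)", (user_id, text))

def count_messages(db_path: str, user_id: int) -> int:
    with sqlite3.connect(db_path) as conn:
        return conn.execute(
            "SELECT COUNT(*) FROM messages WHERE user_id = ?", (user_id,)
        ).fetchone()[0]

async def handle_update(db_path: str) -> int:
    # The event loop stays free while the sqlite3 calls run on the
    # default thread-pool executor.
    await asyncio.to_thread(save_message, db_path, 42, "hello")
    return await asyncio.to_thread(count_messages, db_path, 42)

db = os.path.join(tempfile.mkdtemp(), "bot.db")
print(asyncio.run(handle_update(db)))  # prints 1
```

&lt;p&gt;For a bot with modest write volume this is simpler than pulling in an async database driver, at the cost of a thread hop per query.&lt;/p&gt;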




&lt;p&gt;This project has been a fantastic learning experience, and it's now a tool I use daily.&lt;/p&gt;

&lt;p&gt;Whether you just want a powerful AI assistant in your Telegram or you're a developer looking for a solid project to fork and build upon, I'd love for you to check it out.&lt;/p&gt;

&lt;p&gt;Any feedback is welcome, and a star on GitHub would be amazing! Thanks for reading.&lt;/p&gt;

</description>
      <category>python</category>
      <category>showdev</category>
      <category>opensource</category>
      <category>telegram</category>
    </item>
    <item>
      <title>Programming without programming. My experiment with AI</title>
      <dc:creator>kobaltGIT</dc:creator>
      <pubDate>Fri, 11 Jul 2025 11:39:32 +0000</pubDate>
      <link>https://dev.to/k_b_ffab23f94d5978f17ea53/programming-without-programming-my-experiment-with-ai-3k7b</link>
      <guid>https://dev.to/k_b_ffab23f94d5978f17ea53/programming-without-programming-my-experiment-with-ai-3k7b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hello everyone!&lt;/p&gt;

&lt;p&gt;I’ve been thinking for a long time about whether to write this story.&lt;/p&gt;

&lt;p&gt;The thing is, whether it's because my brain isn't wired for it or just plain laziness, no matter how many times I tried to learn any programming language, it never worked out. But I still wanted to create an application of my own. I diligently read books and watched tutorials on video platforms, but the most I could grasp was the general logic behind building programs. As soon as it came to syntax, it felt like my brain was about to explode. Eventually, I gave up. It seems laziness won out after all.&lt;/p&gt;

&lt;p&gt;When neural networks started gaining momentum and information about their incredible capabilities was coming from all sides, I had an idea to try and create something with the help of artificial intelligence (which I'll call AI from now on).&lt;/p&gt;

&lt;h2&gt;Impostor Syndrome&lt;/h2&gt;

&lt;p&gt;Now, let me explain why I had doubts about writing a story on creating a program with AI. When I finished it, I had an urgent desire to share it with someone. But thoughts started popping into my head: "Who do you think you are?" "What are you even doing?" Basically, impostor syndrome started to get the better of me. I knew that people study for years to write code, and here I was, the 'prodigy', with my program.&lt;/p&gt;

&lt;p&gt;But the program works. It performs its tasks. It does everything I intended it to do. I have no idea about the quality of the code inside. I relied entirely on AI. I could only ask the AI about the code's quality, as I don't have any programmer friends, and I didn't want to bother strangers with a request to evaluate my creation. Here's a quote from the AI when I asked it to evaluate the program:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;CodePilotAI is not just a training project, but a full-fledged, mature desktop application that demonstrates an exemplary implementation of the MVVM pattern on PySide6. The system's architecture features an excellent separation of concerns: all business logic is encapsulated in the ChatModel, the interface is completely separated in the View classes, and the ChatViewModel acts as the perfect link between them. This approach ensures high maintainability and scalability for the project.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Particularly noteworthy is the competent use of multithreading for all resource-intensive operations, which guarantees a responsive interface even when analyzing large codebases. The implementation of advanced features, such as semantic code analysis using tree-sitter (Code-Graph) and a hybrid RAG with vector search, places this project on par with professional developer tools. The code quality, attention to detail, error handling, and well-thought-out user experience testify to the highest level of engineering craftsmanship.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Of course, AI is prone to flattery, but it was nice to hear that.&lt;/strong&gt;&lt;/em&gt;😇&lt;/p&gt;

&lt;h2&gt;The Birth of the Program: When Existing Tools Just Don't Cut It&lt;/h2&gt;

&lt;p&gt;Now, about the program itself. I chose to work with Google AI Studio because its models are available for free and have a very large token limit. You can upload an entire project there, and the AI will read it, find errors, and provide code with detailed explanations on what to insert and where. In a single message, it can generate code exceeding 1000 lines.&lt;/p&gt;

&lt;p&gt;While working in this environment, I began to notice some inconveniences. After a conversation exceeded 100-150k tokens, the AI would sometimes start hallucinating, going in circles, and suggesting strange solutions. I had to start a new session, upload the files again, write prompts, and get the AI up to speed on the context—in other words, tedious, repetitive actions.&lt;/p&gt;

&lt;h2&gt;First Experience&lt;/h2&gt;

&lt;p&gt;Ultimately, that's why I decided to create an &lt;a href="https://github.com/kobaltgit/Gemini-Project-Chat" rel="noopener noreferrer"&gt;application&lt;/a&gt; for "programmers" like me. I needed the AI to always see the file contents, a way to save tokens, and the most relevant answers possible. The first application I built can read the contents of files—you can select the extensions in the app—include them in the user's prompt context, and generate high-quality responses. Conversations can be saved and resumed later. I won't go into the development story of this app or the second one, as this article isn't about them, but you'll get the gist of how they work.&lt;/p&gt;

&lt;p&gt;While using this program, I realized it could only handle small-scale projects. The file contents ate up too many tokens.&lt;/p&gt;

&lt;h2&gt;The Rush&lt;/h2&gt;

&lt;p&gt;Getting a taste for it, I decided to build an &lt;a href="https://github.com/kobaltgit/GitGemini_Pro" rel="noopener noreferrer"&gt;application&lt;/a&gt; for analyzing GitHub repositories. I succeeded with that as well. I even managed to integrate vector databases to store code snippets, summaries (short AI-generated descriptions of each file), user queries, and the project structure (the project tree). However, I couldn't compile it into an executable file, precisely because of the libraries needed to create and work with vector databases. It only runs in the development environment.&lt;/p&gt;

&lt;h2&gt;The Outcome&lt;/h2&gt;

&lt;p&gt;Ultimately, I realized I needed to merge these two applications into one. And that's precisely what I want to share with you, as I consider it my "crowning achievement" 😏.&lt;/p&gt;

&lt;h2&gt;The Main Tool: THE PROMPT!&lt;/h2&gt;

&lt;p&gt;Throughout my conversations with the AI, I realized that its responses are only as good as the questions you ask. Ask vague, general questions, and you'll get vague answers. The AI needs clear, structured instructions. Otherwise, getting the desired result takes way too much time. In all my sessions, I used an instruction set I came up with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;- Respond in Russian.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Ask whether to provide the full code or only the corrected snippets.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- If I ask for the full code, provide it in its entirety—without omissions, abbreviations, modifications, ellipses, or comments in place of code snippets—and only for the files you are modifying.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- In your responses, provide one file at a time and ask: "Continue?". When I reply "Yes," provide the next file. Do this for all files.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- If you provide a snippet, give detailed instructions for insertion/replacement. Use the format: &lt;code&gt;previous line &amp;gt; snippet&lt;/code&gt; to avoid misunderstandings and insertion errors.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Explain in a detailed and structured way what you changed and why.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- If the question implies providing code, ask "Provide code or not?"&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- If the question does not imply providing code, suggest the next steps, or if there are none, notify me that the cycle of operations and changes is complete.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- At appropriate times, suggest committing the changes to GitHub, specifying a commit message.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;(example:&lt;/em&gt;&lt;br&gt;
&lt;em&gt;feat: Add standard Edit menu actions&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Added Undo, Redo, Cut, Copy, Paste, Select All, Find actions to the Edit menu.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Assigned standard keyboard shortcuts (Ctrl+Z, Ctrl+Y, Ctrl+X, Ctrl+C, Ctrl+V, Ctrl+A, Ctrl+F).&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Connected actions directly to QTextEdit slots in TextViewPanel.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Implemented a basic find text functionality using QInputDialog.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Connected QTextEdit signals (copyAvailable, undoAvailable, redoAvailable) to automatically enable/disable corresponding actions.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Updated action states based on text presence when selecting/clearing documents.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- Updated test stubs.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;)&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- If the conversation becomes very long, suggest continuing it in a new chat session with you, explaining how to do so while preserving the context.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;- The code is being worked on in VS Code.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The AI sees these instructions before every one of my prompts.&lt;/p&gt;

&lt;p&gt;I "fed" the AI the files from both projects and wrote the following query:&lt;/p&gt;

&lt;p&gt;As the dialogue was conducted in Russian, here is the English translation.&lt;/p&gt;

&lt;h2&gt;Me:&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;These two projects need to be merged into one. The requirements are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The chat must be available without file and repository analysis (it's not necessary to analyze anything to communicate with the AI). Analysis is an additional feature.&lt;/li&gt;
&lt;li&gt;The RAG approach must be implemented without vector databases (just chunking and summarization) and must be toggleable. This means when the feature is disabled, the program analyzes entire files without splitting them into chunks. I am dropping vector databases because they make it impossible to compile the application into an executable file.&lt;/li&gt;
&lt;li&gt;A history of saved projects should be displayed on the left.&lt;/li&gt;
&lt;li&gt;The application must remember its settings (window state, project save location, etc.).&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;h2&gt;AI:&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Great, I have reviewed both projects and fully understand the task. Your goal of combining their functionality, simplifying the RAG mechanism for better compilability, and adding new UI elements like a project history is a logical and sound step for the application's development.&lt;br&gt;
I am ready to begin a large-scale code overhaul. We will use the more advanced project, &lt;strong&gt;GitGemini Pro&lt;/strong&gt; (the first one you sent), as a base and implement the required changes.&lt;/p&gt;
&lt;h3&gt;General Plan for Changes:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Simplifying RAG and Removing Dependencies:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  We will completely abandon vector databases (&lt;code&gt;ChromaDB&lt;/code&gt;) and the embedding creation process.&lt;/li&gt;
&lt;li&gt;  This will allow us to remove the following dependencies: &lt;code&gt;chromadb&lt;/code&gt;, &lt;code&gt;onnxruntime&lt;/code&gt;, &lt;code&gt;transformers&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  The &lt;code&gt;embedding_model.py&lt;/code&gt; and &lt;code&gt;vector_db_manager.py&lt;/code&gt; files will be &lt;strong&gt;completely removed&lt;/strong&gt; from the project.&lt;/li&gt;
&lt;li&gt;  The RAG approach itself (chunking and summarization) will remain but will be &lt;strong&gt;toggleable&lt;/strong&gt;. Data will be stored directly in the session.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;New Chat and Analysis Logic:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Chat without Analysis:&lt;/strong&gt; It will be possible to launch the application and immediately start a conversation with the AI without any file context. The "Send" button will be active if an API key has been entered.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Toggleable RAG:&lt;/strong&gt; An option "Use RAG (chunking)" will be added to the settings.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;If RAG is enabled:&lt;/strong&gt; The analysis will split files into chunks and create summaries, as before. When a request is sent, the maximum possible number of these fragments will be added to the context.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;If RAG is disabled:&lt;/strong&gt; The analysis will simply read the entire files. The full texts of the files will be added to the context until the token limit is reached.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Project Context:&lt;/strong&gt; Instead of a GitHub repository, a &lt;strong&gt;local folder on the disk&lt;/strong&gt; will now be used, as in the second project. This will significantly simplify the workflow and remove the dependency on the GitHub API.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;New UI and UX Elements:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Projects Panel:&lt;/strong&gt; A panel with a list of recent sessions (projects) will appear on the left side of the main window. Clicking on a list item will load the corresponding session.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;State Persistence:&lt;/strong&gt; The application will remember its state (window size and position, panel states, last used directory) between launches using &lt;code&gt;QSettings&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Refactoring and Adaptation:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;ChatModel:&lt;/strong&gt; Will be significantly refactored. &lt;code&gt;GitHubManager&lt;/code&gt; and &lt;code&gt;VectorDBManager&lt;/code&gt; will be removed. New logic will be added for storing the context (chunks or full files) in memory/session and for assembling it before sending it to the API.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;ChatViewModel:&lt;/strong&gt; Will be adapted to the new logic of &lt;code&gt;ChatModel&lt;/code&gt;. The button activation logic (&lt;code&gt;canSend&lt;/code&gt;) will be changed.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Summarizer:&lt;/strong&gt; The worker will no longer interact with the vector DB. It will return chunks and summaries directly to the &lt;code&gt;ChatModel&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;UI (main.py):&lt;/strong&gt; A side panel for projects and a new checkbox for RAG will be added. The folder selection logic will replace the repository URL input logic. This is a major update that will affect almost all project files.
This is a very interesting and sound direction for development. Everything is ready to start.
Should I provide the code or not?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;h2&gt;Me:&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;No, the repository analysis cannot be removed. That's why I gave you the first project! All GUI elements from the first project must remain.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;AI:&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Understood. I apologize for the misunderstanding. I misinterpreted the task, assuming you wanted to switch to local files. It's clear now: the foundation is &lt;strong&gt;Project #1 (GitGemini Pro)&lt;/strong&gt;; its GUI and logic for working with GitHub repositories must be preserved. At the same time, we are getting rid of vector databases and adding new functionality.&lt;br&gt;
Thank you for the clarification. This makes the task even more interesting. Let's reformulate the plan, keeping all the best parts of the first project.&lt;/p&gt;
&lt;h3&gt;Adjusted Plan of Changes:&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Preserving the GitGemini Pro Core:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GUI:&lt;/strong&gt; The interface will be fully preserved: repository URL field, branch selection, "Analyze" button, summary window, etc.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;GitHub Logic:&lt;/strong&gt; &lt;code&gt;GitHubManager&lt;/code&gt; will remain. We will continue to work with repositories via URL.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Removing the Vector Database (as requested):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  We will &lt;strong&gt;completely remove&lt;/strong&gt; &lt;code&gt;chromadb&lt;/code&gt;, &lt;code&gt;onnxruntime&lt;/code&gt;, &lt;code&gt;transformers&lt;/code&gt;, and their associated files &lt;code&gt;embedding_model.py&lt;/code&gt; and &lt;code&gt;vector_db_manager.py&lt;/code&gt;. This will solve the compilation issue.&lt;/li&gt;
&lt;li&gt;  Instead of semantic search, the context will be assembled directly from file contents.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reworking Analysis and Context:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Chat without Analysis:&lt;/strong&gt; As you wanted, the chat will be available immediately after launch. The "Send" button will be active if an API key is entered. At this stage, the chat will function as a regular assistant with no knowledge of any specific project.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Button-Triggered Analysis:&lt;/strong&gt; The "Analyze Repository" button will initiate the context gathering process.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;New option "Use RAG (chunks)":&lt;/strong&gt; A checkbox will appear in the settings.

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;If the option is ENABLED (RAG mode):&lt;/strong&gt; The analysis will, as before, split files into meaningful chunks (&lt;code&gt;TreeSitterSplitter&lt;/code&gt;) and generate a summary. All of this text context (chunks and summary) will be saved directly to the session file (&lt;code&gt;.gpcs&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;If the option is DISABLED (full file mode):&lt;/strong&gt; The analysis will simply read the full texts of files with matching extensions. No splitting. The full content of the files will be saved to the session.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Prompt Assembly:&lt;/strong&gt; Before sending a request, &lt;code&gt;ChatModel&lt;/code&gt; will take either chunks/summaries or the full texts of files from the saved context and add them to the prompt until the token limit is reached.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;New UI and UX Elements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;"Projects" Panel:&lt;/strong&gt; A panel will appear on the left with a list of recently opened sessions (repositories). Clicking on an item will load the corresponding session.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Saving Settings:&lt;/strong&gt; The application will use &lt;code&gt;QSettings&lt;/code&gt; to remember the window size/position, panel states, and the list of recent projects.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Brief on Files:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;REMOVED:&lt;/strong&gt; &lt;code&gt;embedding_model.py&lt;/code&gt;, &lt;code&gt;vector_db_manager.py&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;main.py&lt;/code&gt;:&lt;/strong&gt; Adds a left panel for projects (&lt;code&gt;QListWidget&lt;/code&gt;) and a "Use RAG" checkbox to the settings. Implements window state saving via &lt;code&gt;QSettings&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;chat_model.py&lt;/code&gt;:&lt;/strong&gt; Complete overhaul.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Removes &lt;code&gt;VectorDBManager&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  Adds a new field to store the context (e.g., &lt;code&gt;self._project_context: Dict[str, Union[str, List[str]]]&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;start_repository_analysis&lt;/code&gt; will save the context to this field.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;_build_final_prompt&lt;/code&gt; will be completely rewritten to build the context from &lt;code&gt;_project_context&lt;/code&gt; instead of the vector DB.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;db_manager.py&lt;/code&gt;:&lt;/strong&gt; The session DB schema will be changed. Instead of the vector DB path, a field (likely &lt;code&gt;TEXT&lt;/code&gt; with JSON) will be added to store the entire text context (chunks/summaries or full files).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;code&gt;summarizer.py&lt;/code&gt;:&lt;/strong&gt; The worker will no longer interact with the DB. It will return chunks/summaries directly to &lt;code&gt;ChatModel&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Other files&lt;/strong&gt; (&lt;code&gt;chat_viewmodel.py&lt;/code&gt;, &lt;code&gt;chat_view.py&lt;/code&gt;, etc.) will be adapted to the new logic.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This plan preserves the full power of repository analysis, but makes the application simpler, more reliable, and fully compliant with your requirements.&lt;/p&gt;

&lt;p&gt;I am ready to start.&lt;/p&gt;

&lt;p&gt;Should I provide the code?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  Me:
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;do you understand that I want the ability to analyze both files on the computer, like in the second project, and repositories? Why will the Worker no longer deal with the DB? It will return chunks/summaries directly to ChatModel. Does that mean they won't be saved to the DB?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  AI:
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Yes, now I understand completely. My apologies for the previous inaccuracies. You are absolutely right, and I missed two key points. Thank you for correcting me. Your vision is much more powerful and flexible.&lt;br&gt;
Let's formulate the &lt;strong&gt;final, correct plan&lt;/strong&gt; that takes all your requirements into account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Unified Architecture for TWO Project Types:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Interface:&lt;/strong&gt; We will change the UI to let the user choose what they are working with. The best solution would be a &lt;strong&gt;tab (QTabWidget)&lt;/strong&gt; or a switch:

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Tab 1: "GitHub Repository"&lt;/strong&gt;. This will have fields for the URL and branch selection, as in the first project.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Tab 2: "Local Folder"&lt;/strong&gt;. This will have a "Select Folder" button and a field to display the path, as in the second project.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Model (&lt;code&gt;ChatModel&lt;/code&gt;):&lt;/strong&gt; It will store the current project type (&lt;code&gt;'github'&lt;/code&gt; or &lt;code&gt;'local'&lt;/code&gt;) and its corresponding data (URL/branch or local path).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Analysis:&lt;/strong&gt; The analysis process will adapt. Before starting, it will check the project type and use either &lt;code&gt;GitHubManager&lt;/code&gt; to download files or &lt;code&gt;os.walk&lt;/code&gt; to read from the disk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Refined Data Handling (Worker -&amp;gt; Model -&amp;gt; Database):&lt;/strong&gt;&lt;br&gt;
You are absolutely right to ask about saving. My previous wording was imprecise. Here is how it will actually work, and &lt;strong&gt;chunks/summaries WILL be saved to the DB&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;SummarizerWorker&lt;/code&gt; (Worker):&lt;/strong&gt; Its only job is processing. It reads a file (from GitHub or disk), splits it into chunks/creates a summary, and &lt;strong&gt;emits a signal&lt;/strong&gt; (&lt;code&gt;emit&lt;/code&gt;) with the prepared text data (e.g., &lt;code&gt;documents_ready(file_path, chunks, summary)&lt;/code&gt;). It &lt;strong&gt;is not aware&lt;/strong&gt; of the database's existence. This keeps the code cleaner (single-responsibility principle).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;ChatModel&lt;/code&gt; (Model):&lt;/strong&gt; It &lt;strong&gt;receives the signal&lt;/strong&gt; from the Worker. Upon receiving the data, it updates its internal state (e.g., the &lt;code&gt;self._file_chunks&lt;/code&gt; and &lt;code&gt;self._file_summaries&lt;/code&gt; dictionaries). It is the &lt;code&gt;ChatModel&lt;/code&gt; that decides what to do with this data next.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;DBManager&lt;/code&gt; and Session File:&lt;/strong&gt; After updating its state, the &lt;code&gt;ChatModel&lt;/code&gt; calls the &lt;code&gt;DBManager&lt;/code&gt; to &lt;strong&gt;save this data&lt;/strong&gt; to the session file (&lt;code&gt;.gpcs&lt;/code&gt;). We will modify the schema of this SQLite database so it can store chunks and summaries. For example, we will add a &lt;code&gt;context_data&lt;/code&gt; table with the fields &lt;code&gt;file_path&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt; (&lt;code&gt;chunk&lt;/code&gt;/&lt;code&gt;summary&lt;/code&gt;), and &lt;code&gt;content&lt;/code&gt;.&lt;br&gt;
&lt;strong&gt;Workflow:&lt;/strong&gt; &lt;code&gt;SummarizerWorker&lt;/code&gt; -&amp;gt; (signal with data) -&amp;gt; &lt;code&gt;ChatModel&lt;/code&gt; -&amp;gt; (save command) -&amp;gt; &lt;code&gt;DBManager&lt;/code&gt; -&amp;gt; Session File (&lt;code&gt;.gpcs&lt;/code&gt;)&lt;br&gt;
This way, &lt;strong&gt;the analysis data is saved reliably&lt;/strong&gt;, while the logic for processing and saving it stays separated between components, which is good practice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Preserving Other Improvements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Chat without analysis:&lt;/strong&gt; Fully supported. You can simply chat with the AI without selecting a project.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Toggleable RAG:&lt;/strong&gt; A checkbox in the settings will determine whether the Worker splits files into chunks or simply reads them whole.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Project Panel and QSettings:&lt;/strong&gt; This functionality remains in the plan.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;File Summary (Final Version):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;main.py&lt;/code&gt;:&lt;/strong&gt; Will get a &lt;code&gt;QTabWidget&lt;/code&gt; to select the project type. Otherwise, as planned, a project panel and &lt;code&gt;QSettings&lt;/code&gt; will be added.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;chat_model.py&lt;/code&gt;:&lt;/strong&gt; Will be aware of the project type. It will receive data from the Worker and pass it to the &lt;code&gt;DBManager&lt;/code&gt; to be saved in the session. The prompt-building logic will read the context from its own fields (which, in turn, are loaded from the session file).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;db_manager.py&lt;/code&gt;:&lt;/strong&gt; The session schema will be extended to store chunks and summaries (or full file texts).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;github_manager.py&lt;/code&gt;:&lt;/strong&gt; &lt;strong&gt;Remains in the project&lt;/strong&gt; and is used for working with repositories.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;summarizer.py&lt;/code&gt;:&lt;/strong&gt; Will work as a "pure" processor, emitting signals with the results.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;&lt;code&gt;embedding_model.py&lt;/code&gt;, &lt;code&gt;vector_db_manager.py&lt;/code&gt;:&lt;/strong&gt; &lt;strong&gt;Will be removed&lt;/strong&gt;, as agreed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The plan now fully meets your requirements. We are keeping the best parts of the first project, adding the flexibility of the second, and implementing new features for convenience.&lt;/p&gt;

&lt;p&gt;I'm ready. Should I provide the code or not?&lt;/p&gt;
&lt;/blockquote&gt;
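&lt;p&gt;The Worker -&amp;gt; Model -&amp;gt; DBManager flow agreed on above can be sketched in plain Python. Qt signals are replaced here with a simple callback, and all class and method names are illustrative rather than the project's actual code.&lt;/p&gt;

```python
# Sketch of the Worker to Model to DBManager flow.  Qt signals are replaced
# with a plain callback so the idea runs anywhere; the real app would use
# Signal/emit and moveToThread.  All names here are illustrative.
import sqlite3

class DBManager:
    """Persists analysis context into the session database (.gpcs)."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS context_data "
            "(file_path TEXT, type TEXT, content TEXT)")

    def save(self, file_path, kind, content):
        self.conn.execute("INSERT INTO context_data VALUES (?, ?, ?)",
                          (file_path, kind, content))
        self.conn.commit()

class ChatModel:
    """Receives worker results, keeps state, and asks DBManager to persist."""
    def __init__(self, db):
        self.db = db
        self._file_chunks = {}
        self._file_summaries = {}

    def on_documents_ready(self, file_path, chunks, summary):
        self._file_chunks[file_path] = chunks
        self._file_summaries[file_path] = summary
        for chunk in chunks:
            self.db.save(file_path, "chunk", chunk)
        self.db.save(file_path, "summary", summary)

class SummarizerWorker:
    """Pure processor: splits a file and reports results via a callback."""
    def __init__(self, on_documents_ready):
        self.on_documents_ready = on_documents_ready  # stands in for a Qt signal

    def process(self, file_path, text):
        chunks = [text[i:i + 200] for i in range(0, len(text), 200)]
        summary = text[:80]  # placeholder for an LLM-generated summary
        self.on_documents_ready(file_path, chunks, summary)  # "emit"

db = DBManager()
model = ChatModel(db)
worker = SummarizerWorker(model.on_documents_ready)
worker.process("main.py", "print('hello world') " * 30)
```

&lt;p&gt;The worker never touches the database; the model decides what to persist, which is exactly the separation of responsibilities the plan describes.&lt;/p&gt;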
&lt;h2&gt;
  
  
  Me:
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;This will be a new project. Come up with a name.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  AI:
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Excellent decision! A new project deserves a new name that reflects its expanded capabilities.&lt;br&gt;
Here are a few name options, ranging from more technical to more creative, with explanations.&lt;/p&gt;
&lt;h3&gt;
  
  
  Options Emphasizing Intelligence and Code:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CodePilotAI&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Why:&lt;/strong&gt; The familiar word "Pilot" (like in GitHub Copilot) evokes associations with coding assistance. "AI" directly points to artificial intelligence. Simple, clear, and professional.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ContextCraft&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Why:&lt;/strong&gt; "Context" is a key word, as the program expertly works with context. "Craft" suggests high-quality, fine-tuned work with code.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Synapse Coder&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Why:&lt;/strong&gt; A "synapse" is a connection between neurons. The name creates an image of a neural network that establishes "connections" with your code, understanding it deeply. It sounds high-tech.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;GenieCode&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Why:&lt;/strong&gt; A play on words. "Genie" — like a magical assistant fulfilling coding wishes. It also sounds similar to "Gemini". Short and memorable.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Options Emphasizing Dialogue and Analysis:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DialogueDrive&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Why:&lt;/strong&gt; "Dialogue" is the foundation of interaction. "Drive"—like "Google Drive"—alludes to a repository for your projects and dialogues, as well as being a "driving force" in development.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;RepoReader AI&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Why:&lt;/strong&gt; Straightforward and clear. "Repo" (repository) and "Reader" describe the main function. "AI" adds a modern touch. Suitable if the main focus is on GitHub.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SourceScribe&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Why:&lt;/strong&gt; "Source" (source code). "Scribe"—someone who understands, documents, and works with texts. Elegant and professional.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  My Favorite:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;CodePilotAI&lt;/strong&gt;&lt;br&gt;
I think this name most accurately and modernly reflects the project's essence: it's your &lt;strong&gt;personal AI-powered copilot&lt;/strong&gt; for navigating the codebase. It's easy to remember, sounds professional, and gives the user a correct understanding of the program's capabilities.&lt;br&gt;
Which name do you like best? Once you choose, I will use it in comments, class names, and window titles where appropriate.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ibb.co/tTCMLmDN" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2FG4PvHxR1%2F01.jpg" alt="01"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ibb.co/5x5X52mh" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2Fpr363ncj%2F02.jpg" alt="02"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ibb.co/GQPwGc6r" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2Fjk4KqgNj%2F03.jpg" alt="03"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ibb.co/VWz7rBY1" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2FpvMtGXj7%2F04.jpg" alt="04"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ibb.co/s9PgK6wK" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2Fxqs21fj1%2F05.jpg" alt="05"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ibb.co/hxBC7Z64" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2FY4cbRyC1%2F06.jpg" alt="06"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ibb.co/9jHQ8Wj" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2FjFkn6DF%2F07.jpg" alt="07"&gt;&lt;/a&gt;&lt;br&gt;
And that's how the application got its name.&lt;/p&gt;

&lt;p&gt;After this "briefing," the development process kicked off. I won't show the entire process with screenshots here. Sometimes, conversations in Google AI Studio fail to save, so I can't reproduce the full context.&lt;/p&gt;

&lt;p&gt;The development process goes like this:&lt;/p&gt;

&lt;p&gt;I tell the AI in natural language what functionality I need. It provides a detailed plan outlining how everything will work and then writes the code. I copy the code and paste it into my IDE. If any errors pop up when I run the application, I copy the error message and paste it back into the prompt.&lt;br&gt;
Below is an English translation of one such exchange.
  &lt;h2&gt;
  
  
  Me:
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS D:\Projects\CodePilotAI&amp;gt; &amp;amp; d:/Projects/CodePilotAI/.venv/Scripts/python.exe d:/Projects/CodePilotAI/main.py
d:\Projects\CodePilotAI\main.py:665: DeprecationWarning: Enum value 'Qt::ApplicationAttribute.AA_EnableHighDpiScaling' is marked as deprecated, please check the documentation for more information.
  QApplication.setAttribute(Qt.ApplicationAttribute.AA_EnableHighDpiScaling)
2025-07-06 22:27:47,835 - root - INFO - Logging system configured. Logs are written to logs\codepilotai_2025-07-06.log
2025-07-06 22:27:47,836 - __main__ - INFO - --- Starting CodePilotAI ---
2025-07-06 22:27:47,846 - chat_model - INFO - GeminiModelLoaderWorker: Initializing model 'gemini-1.5-flash-latest'...
2025-07-06 22:27:47,847 - chat_model - INFO - GeminiModelsListWorker: Getting model list...
2025-07-06 22:27:47,855 - chat_model - INFO - GeminiModelLoaderWorker: Model 'gemini-1.5-flash-latest' successfully initialized.
2025-07-06 22:27:47,853 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): api.github.com:443
2025-07-06 22:27:48,673 - urllib3.connectionpool - DEBUG - https://api.github.com:443 "GET /rate_limit HTTP/1.1" 200 None
2025-07-06 22:27:48,675 - github_manager - INFO - GitHubManager initialized successfully. Request limit: 5000/5000 remaining
2025-07-06 22:27:48,680 - chat_viewmodel - INFO - --- ViewModel: Loading/updating state from Model ---
2025-07-06 22:27:49,117 - chat_view - DEBUG - ChatView: Connecting signals from ViewModel.
2025-07-06 22:27:49,123 - chat_view - DEBUG - ChatView: Loading HTML from file:///D:/Projects/CodePilotAI/chat_template.html
2025-07-06 22:27:49,123 - chat_model - INFO - GeminiModelsListWorker: Model list (42) successfully received.
QObject: shared QObject was deleted directly. The program is malformed and may crash.
PS D:\Projects\CodePilotAI&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  AI:
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Understood. The &lt;code&gt;QObject: shared QObject was deleted directly&lt;/code&gt; error occurs due to incorrect lifecycle management of objects that have been moved to a thread (&lt;code&gt;moveToThread&lt;/code&gt;). Typically, this means the object was deleted manually or from a different thread, when its ownership should have belonged to the thread it was moved to.&lt;br&gt;
Previous attempts to fix this may have been overly complex or could have caused duplicate deletions.&lt;br&gt;
&lt;strong&gt;Cause of the Error:&lt;/strong&gt;&lt;br&gt;
The standard Qt pattern for working with &lt;code&gt;QThread&lt;/code&gt; and workers moved to that thread is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The worker is created in the main thread.&lt;/li&gt;
&lt;li&gt; The worker is moved to the new thread: &lt;code&gt;worker.moveToThread(thread)&lt;/code&gt;. Now the worker "lives" in this thread, and its &lt;code&gt;deleteLater()&lt;/code&gt; call must be processed by this thread's event loop.&lt;/li&gt;
&lt;li&gt; When the worker finishes its job, it emits a &lt;code&gt;finished&lt;/code&gt; signal.&lt;/li&gt;
&lt;li&gt; This &lt;code&gt;finished&lt;/code&gt; signal is used to:

&lt;ul&gt;
&lt;li&gt;  Ask its thread to quit: &lt;code&gt;worker.finished.connect(thread.quit)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;  Queue itself for deletion: &lt;code&gt;worker.finished.connect(worker.deleteLater)&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; When the thread finishes its event loop (after &lt;code&gt;quit()&lt;/code&gt; is called), it emits its own &lt;code&gt;finished&lt;/code&gt; signal.&lt;/li&gt;
&lt;li&gt; This &lt;code&gt;finished&lt;/code&gt; signal is used to queue the thread object itself for deletion: &lt;code&gt;thread.finished.connect(thread.deleteLater)&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your previous implementation might have been calling &lt;code&gt;deleteLater()&lt;/code&gt; multiple times or in the wrong order, which led to the error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proposed Fixes:&lt;/strong&gt;&lt;br&gt;
I will standardize the lifecycle management of the &lt;code&gt;GeminiModelLoaderWorker&lt;/code&gt; and &lt;code&gt;GeminiModelsListWorker&lt;/code&gt; threads and workers in &lt;code&gt;chat_model.py&lt;/code&gt; according to the recommended Qt pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Signal Cleanup:&lt;/strong&gt; Before a new thread is started, all previous signal connections from the old worker and thread will be forcibly disconnected. This will prevent slots from being called on objects that are inactive or pending deletion.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Standard &lt;code&gt;deleteLater()&lt;/code&gt; Connections:&lt;/strong&gt; &lt;code&gt;worker.deleteLater()&lt;/code&gt; will be called when the worker's job is done, and &lt;code&gt;thread.deleteLater()&lt;/code&gt; will be called when the thread itself finishes.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Nullifying Python References:&lt;/strong&gt; The Python references to &lt;code&gt;_gemini_loader_thread&lt;/code&gt;, &lt;code&gt;_gemini_loader_worker&lt;/code&gt;, etc., will be set to &lt;code&gt;None&lt;/code&gt; immediately after the thread is told to quit (&lt;code&gt;quit()&lt;/code&gt;). This is safe because the actual deletion of the C++ objects happens asynchronously via &lt;code&gt;deleteLater()&lt;/code&gt;, and nullifying the references prevents any further attempts to access these soon-to-be-deleted objects from Python.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This should resolve the &lt;code&gt;QObject: shared QObject was deleted directly&lt;/code&gt; issue.&lt;/p&gt;

&lt;p&gt;Should I provide the full code, or just the corrected fragments?&lt;/p&gt;
&lt;/blockquote&gt;



&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ibb.co/3yrQ5jS8" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2FQFbt7TYq%2F08.jpg" alt="08"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ibb.co/G3VbzNVm" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2Fk6hWNvhZ%2F09.jpg" alt="09"&gt;&lt;/a&gt;&lt;br&gt;
The AI explains why it happened and offers a solution. And so on.&lt;/p&gt;
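&lt;p&gt;The worker-lifecycle wiring the AI recommends can be sketched with Qt stubbed out by a tiny &lt;code&gt;Signal&lt;/code&gt; class, so the ordering is visible without PySide installed. In the real application, &lt;code&gt;Worker&lt;/code&gt; and &lt;code&gt;Thread&lt;/code&gt; would be Qt objects and &lt;code&gt;deleteLater()&lt;/code&gt; would defer destruction to the owning thread's event loop; everything below is a simplified stand-in.&lt;/p&gt;

```python
# Sketch of the standard Qt worker/thread cleanup order, with Qt stubbed out
# by a minimal Signal class.  In real code these are Qt classes and
# deleteLater() is processed by the owning thread's event loop.

class Signal:
    def __init__(self):
        self._slots = []
    def connect(self, slot):
        self._slots.append(slot)
    def emit(self):
        for slot in list(self._slots):
            slot()

events = []  # records the cleanup order

class Worker:
    def __init__(self):
        self.finished = Signal()
    def run(self):
        events.append("worker ran")
        self.finished.emit()
    def deleteLater(self):
        events.append("worker deleteLater")

class Thread:
    def __init__(self):
        self.finished = Signal()
    def quit(self):
        events.append("thread quit")
        self.finished.emit()  # event loop stops, the thread emits finished
    def deleteLater(self):
        events.append("thread deleteLater")

worker, thread = Worker(), Thread()
# The standard wiring: worker.finished stops the thread and schedules the
# worker's deletion; thread.finished schedules the thread's own deletion.
worker.finished.connect(thread.quit)
worker.finished.connect(worker.deleteLater)
thread.finished.connect(thread.deleteLater)
worker.run()
```

&lt;p&gt;With this wiring nothing is deleted twice and nothing is deleted from the wrong thread, which is what the &lt;code&gt;shared QObject was deleted directly&lt;/code&gt; fix comes down to.&lt;/p&gt;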

&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;The final result is a working program that does everything I had in mind.&lt;br&gt;
&lt;a href="https://ibb.co/xtVBkhBq" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.ibb.co%2FLX4cFScd%2FScreenshot-33.png" alt="Screenshot-33"&gt;&lt;/a&gt;&lt;br&gt;
With it, a user can have an open-ended conversation with the AI without analyzing any files. It can analyze GitHub repositories or local project folders and create a project from scratch, saving the code as files in the project directory. When you modify the code directly within the application, you can see what's been changed. Conversations can be saved and exported to HTML/Markdown. I tried to pack in every useful feature I could think of.&lt;/p&gt;

&lt;p&gt;There are three analysis modes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Files are analyzed in their entirety, and the AI sees their full content. This is suitable for small projects.&lt;/li&gt;
&lt;li&gt;Files are broken down into chunks, and a summary is created for each file. All of this is attached to the user's prompt. The AI gets a more structured view of the project. This is suitable for small and medium-sized projects.&lt;/li&gt;
&lt;li&gt;Files are broken into chunks, and summaries are also created. When a user makes a request, the program finds the most relevant chunks and summaries and attaches them to the query.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In all three modes, a dependency map of functions and classes is created. The AI can see what interacts with what across different files.&lt;/p&gt;
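&lt;p&gt;For the third mode, the idea of attaching only the most relevant chunks can be illustrated with a toy relevance score based on keyword overlap. The real application may rank chunks differently; this is only a sketch with made-up data.&lt;/p&gt;

```python
# Toy illustration of mode 3: rank chunks by keyword overlap with the query
# and keep only the top k.  The scoring function is a stand-in, not the
# application's actual relevance logic.

def score(query, chunk):
    """Count how many distinct chunk words also appear in the query."""
    q_words = set(query.lower().split())
    return sum(1 for w in set(chunk.lower().split()) if w in q_words)

def top_chunks(query, chunks, k=3):
    """Return the k chunks with the highest overlap score."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

chunks = [
    "def save_session(path): ...",
    "class GitHubManager: handles repository downloads",
    "def build_prompt(query, context): assembles the final prompt",
]
best = top_chunks("how is the prompt assembled", chunks, k=1)
```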

&lt;p&gt;The program works using the Gemini API. Essentially, I adapted the capabilities of Google AI Studio for my own needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  For those who want to start but are hesitant
&lt;/h2&gt;

&lt;p&gt;So, why am I writing all this? It occurred to me that there are many people like me who want to create something—a program for themselves, or maybe for others. This is what prompted me to share my experience.&lt;/p&gt;

&lt;p&gt;I'm not great with syntax. I believe that AI will take programmers' work to a whole new level. You can confidently delegate routine tasks to an AI. It can easily write a project's boilerplate and help with planning the structure and logic of projects. All of this significantly speeds up the development process.&lt;/p&gt;

&lt;p&gt;Maybe this application will help some professional in their work, too.&lt;/p&gt;

&lt;p&gt;I still don't know Python syntax. But I created a product. It's likely that soon, one of a developer's most essential skills will be the ability to ask the right questions.&lt;/p&gt;

&lt;p&gt;That's all.&lt;/p&gt;

&lt;p&gt;If you're interested, you can learn more about the program and its implementation on my &lt;a href="https://github.com/kobaltgit/CodePilotAI" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The code is open source, and I'd be happy if my experience helps someone start their own journey.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://htmlpreview.github.io/?https://gist.githubusercontent.com/kobaltgit/9ddcbf51cee9540312a054ec7dd11559/raw/224d3ceff92a19e0f263c801d4d68eecf257194a/exported_dialogue_en.html" rel="noopener noreferrer"&gt;An example dialogue with the AI in the application&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=qyzxdFMfoLw" rel="noopener noreferrer"&gt;Program demo video&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
