DEV Community

Cover image for Programming without programming. My experiment with AI
kobaltGIT
kobaltGIT

Posted on • Edited on

Programming without programming. My experiment with AI

Introduction

Hello everyone!

I’ve been thinking for a long time about whether to write this story.

The thing is, whether it's because my brain isn't wired for it or just plain laziness, no matter how many times I tried to learn any programming language, it never worked out. But I still wanted to create an application of my own. I diligently read books and watched tutorials on video platforms, but the most I could grasp was the general logic behind building programs. As soon as it came to syntax, it felt like my brain was about to explode. Eventually, I gave up. It seems laziness won out after all.

When neural networks started gaining momentum and information about their incredible capabilities was coming from all sides, I had an idea to try and create something with the help of artificial intelligence (which I'll call AI from now on).

Impostor Syndrome

Now, let me explain why I had doubts about writing a story on creating a program with AI. When I finished it, I had an urgent desire to share it with someone. But thoughts started popping into my head: "Who do you think you are?" "What are you even doing?" Basically, impostor syndrome started to get the better of me. I knew that people study for years to write code, and here I was, the 'prodigy', with my program.

But the program works. It performs its tasks. It does everything I intended it to do. I have no idea about the quality of the code inside. I relied entirely on AI. I could only ask the AI about the code's quality, as I don't have any programmer friends, and I didn't want to bother strangers with a request to evaluate my creation. Here's a quote from the AI when I asked it to evaluate the program:

CodePilotAI is not just a training project, but a full-fledged, mature desktop application that demonstrates an exemplary implementation of the MVVM pattern on PySide6. The system's architecture features an excellent separation of concerns: all business logic is encapsulated in the ChatModel, the interface is completely separated in the View classes, and the ChatViewModel acts as the perfect link between them. This approach ensures high maintainability and scalability for the project.
Particularly noteworthy is the competent use of multithreading for all resource-intensive operations, which guarantees a responsive interface even when analyzing large codebases. The implementation of advanced features, such as semantic code analysis using tree-sitter (Code-Graph) and a hybrid RAG with vector search, places this project on par with professional developer tools. The code quality, attention to detail, error handling, and well-thought-out user experience testify to the highest level of engineering craftsmanship.

Of course, AI is prone to flattery, but it was nice to hear that.😇

The Birth of the Program. When Existing Tools Just Don't Cut It...

Now, about the program itself. I chose to work with Google AI Studio because its models are available for free and have a very large token limit. You can upload an entire project there, and the AI will read it, find errors, and provide code with detailed explanations on what to insert and where. In a single message, it can generate code exceeding 1000 lines.

While working in this environment, I began to notice some inconveniences. After a conversation exceeded 100-150k tokens, the AI would sometimes start hallucinating, going in circles, and suggesting strange solutions. I had to start a new session, upload the files again, write prompts, and get the AI up to speed on the context—in other words, tedious, repetitive actions.

First Experience

Ultimately, that's why I decided to create an application for "programmers" like me. I needed the AI to constantly see the file contents, a way to save on tokens, and for the AI to provide the most relevant answers possible.The first application I built can read the contents of files—you can select the extensions in the app—include them in the user's prompt context, and generate high-quality responses. Conversations can be saved to be resumed later. I won't go into the development story of this app and the second one, as this article isn't about them. But you'll get the gist of how they work.

While using this program, I realized it could only handle small-scale projects. The file contents ate up too many tokens.

The Rush

Getting a taste for it, I decided to build an application for analyzing GitHub repositories. I succeeded with that as well. I even managed to integrate vector databases to store code snippets, summaries (short AI-generated descriptions of each file), user queries, and the project structure (the project tree). However, I couldn't compile it into an executable file, precisely because of the libraries needed to create and work with vector databases. It only runs in the development environment.

The Outcome

Ultimately, I realized I needed to merge these two applications into one. And that's precisely what I want to share with you, as I consider it my "crowning achievement" 😏.

The Main Tool: THE PROMPT!

Throughout my conversations with the AI, I realized that its responses are only as good as the questions you ask. Ask vague, general questions, and you'll get vague answers. The AI needs clear, structured instructions. Otherwise, getting the desired result takes way too much time. In all my sessions, I used an instruction set I came up with:

- Respond in Russian.
- Ask whether to provide the full code or only the corrected snippets.
- If I ask for the full code, provide it in its entirety—without omissions, abbreviations, modifications, ellipses, or comments in place of code snippets—and only for the files you are modifying.
- In your responses, provide one file at a time and ask: "Continue?". When I reply "Yes," provide the next file. Do this for all files.
- If you provide a snippet, give detailed instructions for insertion/replacement. Use the format: previous line > snippet to avoid misunderstandings and insertion errors.
- Explain in a detailed and structured way what you changed and why.
- If the question implies providing code, ask "Provide code or not?"
- If the question does not imply providing code, suggest the next steps, or if there are none, notify me that the cycle of operations and changes is complete.
- At appropriate times, suggest committing the changes to GitHub, specifying a commit message.
(example:
feat: Add standard Edit menu actions
- Added Undo, Redo, Cut, Copy, Paste, Select All, Find actions to the Edit menu.
- Assigned standard keyboard shortcuts (Ctrl+Z, Ctrl+Y, Ctrl+X, Ctrl+C, Ctrl+V, Ctrl+A, Ctrl+F).
- Connected actions directly to QTextEdit slots in TextViewPanel. - Implemented a basic find text functionality using QInputDialog. - Connected QTextEdit signals (copyAvailable, undoAvailable, redoAvailable) to automatically enable/disable corresponding actions.
- Updated action states based on text presence when selecting/clearing documents.
- Updated test stubs.
)
If the conversation becomes very long, suggest continuing it in a new chat session with you, explaining how to do so while preserving the context.
The code is being worked on in VS Code.

The AI sees these instructions before every one of my prompts.

I "fed" the AI the files from both projects and wrote the following query:

As the dialogue was conducted in Russian, here is the English translation.

Me:

These two projects need to be merged into one. The requirements are:

  1. The chat must be available without file and repository analysis (it's not necessary to analyze anything to communicate with the AI). Analysis is an additional feature.
  2. The RAG approach must be implemented without vector databases (just chunking and summarization) and must be toggleable. This means when the feature is disabled, the program analyzes entire files without splitting them into chunks. I am dropping vector databases because they make it impossible to compile the application into an executable file.
  3. A history of saved projects should be displayed on the left.
  4. The application must remember its settings (window state, project save location, etc.).

AI:

Great, I have reviewed both projects and fully understand the task. Your goal of combining their functionality, simplifying the RAG mechanism for better compilability, and adding new UI elements like a project history is a logical and sound step for the application's development.
I am ready to begin a large-scale code overhaul. We will use the more advanced project, GitGemini Pro (the first one you sent), as a base and implement the required changes.

General Plan for Changes:

  1. Simplifying RAG and Removing Dependencies:
    • We will completely abandon vector databases (ChromaDB) and the embedding creation process.
    • This will allow us to remove the following dependencies: chromadb, onnxruntime, transformers.
    • The embedding_model.py and vector_db_manager.py files will be completely removed from the project.
    • The RAG approach itself (chunking and summarization) will remain but will be toggleable. Data will be stored directly in the session.
  2. New Chat and Analysis Logic:
    • Chat without Analysis: It will be possible to launch the application and immediately start a conversation with the AI without any file context. The "Send" button will be active if an API key has been entered.
    • Toggleable RAG: An option "Use RAG (chunking)" will be added to the settings.
      • If RAG is enabled: The analysis will split files into chunks and create summaries, as before. When a request is sent, the maximum possible number of these fragments will be added to the context.
      • If RAG is disabled: The analysis will simply read the entire files. The full texts of the files will be added to the context until the token limit is reached.
    • Project Context: Instead of a GitHub repository, a local folder on the disk will now be used, as in the second project. This will significantly simplify the workflow and remove the dependency on the GitHub API.
  3. New UI and UX Elements:
    • Projects Panel: A panel with a list of recent sessions (projects) will appear on the left side of the main window. Clicking on a list item will load the corresponding session.
    • State Persistence: The application will remember its state (window size and position, panel states, last used directory) between launches using QSettings.
  4. Refactoring and Adaptation:
    • ChatModel: Will be significantly refactored. GitHubManager and VectorDBManager will be removed. New logic will be added for storing the context (chunks or full files) in memory/session and for assembling it before sending it to the API.
    • ChatViewModel: Will be adapted to the new logic of ChatModel. The button activation logic (canSend) will be changed.
    • Summarizer: The worker will no longer interact with the vector DB. It will return chunks and summaries directly to the ChatModel.
    • UI (main.py): A side panel for projects and a new checkbox for RAG will be added. The folder selection logic will replace the repository URL input logic.This is a major update that will affect almost all project files. This is a very interesting and sound direction for development. Everything is ready to start. Should I provide the code or not?

Me:

No, the repository analysis cannot be removed. That's why I gave you the first project! All GUI elements from the first project must remain.

AI:

Understood. I apologize for the misunderstanding. I misinterpreted the task, assuming you wanted to switch to local files. It's clear now: the foundation is Project #1 (GitGemini Pro); its GUI and logic for working with GitHub repositories must be preserved. At the same time, we are getting rid of vector databases and adding new functionality.
Thank you for the clarification. This makes the task even more interesting. Let's reformulate the plan, keeping all the best parts of the first project.

Adjusted Plan of Changes:

  1. Preserving the GitGemini Pro Core:

    • GUI: The interface will be fully preserved: repository URL field, branch selection, "Analyze" button, summary window, etc.
    • GitHub Logic: GitHubManager will remain. We will continue to work with repositories via URL.
  2. Removing the Vector Database (as requested):

    • We will completely remove chromadb, onnxruntime, transformers, and their associated files embedding_model.py and vector_db_manager.py. This will solve the compilation issue.
    • Instead of semantic search, the context will be gathered based on direct content.
  3. Reworking Analysis and Context:

    • Chat without Analysis: As you wanted, the chat will be available immediately after launch. The "Send" button will be active if an API key is entered. At this stage, the chat will function as a regular assistant with no knowledge of any specific project.
    • Button-Triggered Analysis: The "Analyze Repository" button will initiate the context gathering process.
    • New option "Use RAG (chunks)": A checkbox will appear in the settings.
      • If the option is ENABLED (RAG mode): The analysis will, as before, split files into meaningful chunks (TreeSitterSplitter) and generate a summary. All of this text context (chunks and summary) will be saved directly to the session file (.gpcs).
      • If the option is DISABLED (full file mode): The analysis will simply read the full texts of files with matching extensions. No splitting. The full content of the files will be saved to the session.
    • Prompt Assembly: Before sending a request, ChatModel will take either chunks/summaries or the full texts of files from the saved context and add them to the prompt until the token limit is reached.
  4. New UI and UX Elements:

    • "Projects" Panel: A panel will appear on the left with a list of recently opened sessions (repositories). Clicking on an item will load the corresponding session.
    • Saving Settings: The application will use QSettings to remember the window size/position, panel states, and the list of recent projects. ### Brief on Files:* REMOVED: embedding_model.py, vector_db_manager.py.
  5. main.py: Adds a left panel for projects (QListWidget) and a "Use RAG" checkbox to the settings. Implements window state saving via QSettings.

  6. chat_model.py: Complete overhaul.

    • Removes VectorDBManager.
    • Adds a new field to store the context (e.g., self._project_context: Dict[str, Union[str, List[str]]).
    • start_repository_analysis will save the context to this field.
    • _build_final_prompt will be completely rewritten to build the context from _project_context instead of the vector DB.
  7. db_manager.py: The session DB schema will be changed. Instead of the vector DB path, a field (likely TEXT with JSON) will be added to store the entire text context (chunks/summaries or full files).

  8. summarizer.py: The worker will no longer interact with the DB. It will return chunks/summaries directly to ChatModel.

  9. Other files (chat_viewmodel.py, chat_view.py, etc.) will be adapted to the new logic.

This plan preserves the full power of repository analysis, but makes the application simpler, more reliable, and fully compliant with your requirements.

I am ready to start.

Provide the code or not?

Me:

do you understand that I want the ability to analyze both files on the computer, like in the second project, and repositories? Why will the Worker no longer deal with the DB? It will return chunks/summaries directly to ChatModel. Does that mean they won't be saved to the DB?

AI:

Yes, now I understand completely. My apologies for the previous inaccuracies. You are absolutely right, and I missed two key points. Thank you for correcting me. Your vision is much more powerful and flexible.
Let's formulate the final, correct plan that takes all your requirements into account.

1. Unified Architecture for TWO Project Types:

  • Interface: We will change the UI to let the user choose what they are working with. The best solution would be a tab (QTabWidget) or a switch:
    • Tab 1: "GitHub Repository". This will have fields for the URL and branch selection, as in the first project.
    • Tab 2: "Local Folder". This will have a "Select Folder" button and a field to display the path, as in the second project.
  • Model (ChatModel): It will store the current project type ('github' or 'local') and its corresponding data (URL/branch or local path).
  • Analysis: The analysis process will adapt. Before starting, it will check the project type and use either GitHubManager to download files or os.walk to read from the disk.

2. Refined Data Handling (Worker -> Model -> Database):
You are absolutely right to ask about saving. My previous wording was imprecise. Here is how it will actually work, and chunks/summaries WILL be saved to the DB:* SummarizerWorker (Worker): Its only job is processing. It reads a file (from GitHub or disk), splits it into chunks/creates a summary, and emits a signal (emit) with the prepared text data (e.g., documents_ready(file_path, chunks, summary)). It is not aware of the database's existence. This keeps the code cleaner (single-responsibility principle).

  • ChatModel (Model): It receives the signal from the Worker. Upon receiving the data, it updates its internal state (e.g., the self._file_chunks and self._file_summaries dictionaries). It is the ChatModel that decides what to do with this data next.
  • DBManager and Session File: After updating its state, the ChatModel calls the DBManager to save this data to the session file (.gpcs). We will modify the schema of this SQLite database so it can store chunks and summaries. For example, we will add a context_data table with the fields file_path, type (chunk/summary), and content. Workflow: SummarizerWorker -> (signal with data) -> ChatModel -> (save command) -> DBManager -> Session File (.gpcs) This way, the analysis data is saved reliably, but the logic for processing and saving it is separated between components, which is a good practice.

3. Preserving Other Improvements:

  • Chat without analysis: Fully supported. You can simply chat with the AI without selecting a project.
  • Toggleable RAG: A checkbox in the settings will determine whether the Worker splits files into chunks or simply reads them whole.
  • Project Panel and QSettings: This functionality remains in the plan. ### File Summary (Final Version):
  • main.py: Will get a QTabWidget to select the project type. Otherwise, as planned, a project panel and QSettings will be added.
  • chat_model.py: Will be aware of the project type. It will receive data from the Worker and pass it to the DBManager to be saved in the session. The prompt-building logic will read the context from its own fields (which, in turn, are loaded from the session file).
  • db_manager.py: The session schema will be extended to store chunks and summaries (or full file texts).
  • github_manager.py: Remains in the project and is used for working with repositories.
  • summarizer.py: Will work as a "pure" processor, emitting signals with the results.
  • embedding_model.py, vector_db_manager.py: Will be removed, as agreed.

The plan now fully meets your requirements. We are keeping the best parts of the first project, adding the flexibility of the second, and implementing new features for convenience.

I'm ready. Should I provide the code or not?

Me:

This will be a new project. Come up with a name.

AI:

Excellent decision! A new project deserves a new name that reflects its expanded capabilities.
Here are a few name options, ranging from more technical to more creative, with explanations.

Options Emphasizing Intelligence and Code:

  1. CodePilotAI

    • Why: The familiar word "Pilot" (like in GitHub Copilot) evokes associations with coding assistance. "AI" directly points to artificial intelligence. Simple, clear, and professional.
  2. ContextCraft

    • Why: "Context" is a key word, as the program expertly works with context. "Craft" suggests high-quality, fine-tuned work with code.
  3. Synapse Coder

    • Why: A "synapse" is a connection between neurons. The name creates an image of a neural network that establishes "connections" with your code, understanding it deeply. It sounds high-tech.
  4. GenieCode

    • Why: A play on words. "Genie" — like a magical assistant fulfilling coding wishes. It also sounds similar to "Gemini". Short and memorable.

Options Emphasizing Dialogue and Analysis:

  1. DialogueDrive

    • Why: "Dialogue" is the foundation of interaction. "Drive"—like "Google Drive"—alludes to a repository for your projects and dialogues, as well as being a "driving force" in development.
  2. RepoReader AI

    • Why: Straightforward and clear. "Repo" (repository) and "Reader" describe the main function. "AI" adds a modern touch. Suitable if the main focus is on GitHub.
  3. SourceScribe

    • Why: "Source" (source code). "Scribe"—someone who understands, documents, and works with texts. Elegant and professional.

My Favorite:

CodePilotAI
I think this name most accurately and modernly reflects the project's essence: it's your personal AI-powered copilot for navigating the codebase. It's easy to remember, sounds professional, and gives the user a correct understanding of the program's capabilities.
Which name do you like best? Once you choose, I will use it in comments, class names, and window titles where appropriate.

01
02
03
04
05
06
07
And that's how the application got its name.

After this "briefing," the development process kicked off. I won't show the entire process with screenshots here. Sometimes, conversations in Google AI Studio fail to save, so I can't reproduce the full context.

The development process goes like this:

I tell the AI in natural language what functionality I need. It provides a detailed plan outlining how everything will work and then writes the code. I copy the code and paste it into my IDE. If any errors pop up when I run the application, I copy the error message and paste it back into the prompt.

English translation.

Me:

PS D:\Projects\CodePilotAI> & d:/Projects/CodePilotAI/.venv/Scripts/python.exe d:/Projects/CodePilotAI/main.py
d:\Projects\CodePilotAI\main.py:665: DeprecationWarning: Enum value 'Qt::ApplicationAttribute.AA_EnableHighDpiScaling' is marked as deprecated, please check the documentation for more information.
  QApplication.setAttribute(Qt.ApplicationAttribute.AA_EnableHighDpiScaling)
2025-07-06 22:27:47,835 - root - INFO - Logging system configured. Logs are written to logs\codepilotai_2025-07-06.log
2025-07-06 22:27:47,836 - __main__ - INFO - --- Starting CodePilotAI ---
2025-07-06 22:27:47,846 - chat_model - INFO - GeminiModelLoaderWorker: Initializing model 'gemini-1.5-flash-latest'...
2025-07-06 22:27:47,847 - chat_model - INFO - GeminiModelsListWorker: Getting model list...
2025-07-06 22:27:47,855 - chat_model - INFO - GeminiModelLoaderWorker: Model 'gemini-1.5-flash-latest' successfully initialized.
2025-07-06 22:27:47,853 - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): api.github.com:443
2025-07-06 22:27:48,673 - urllib3.connectionpool - DEBUG - https://api.github.com:443 "GET /rate_limit HTTP/1.1" 200 None
2025-07-06 22:27:48,675 - github_manager - INFO - GitHubManager initialized successfully. Request limit: 5000/5000 remaining
2025-07-06 22:27:48,680 - chat_viewmodel - INFO - --- ViewModel: Loading/updating state from Model ---
2025-07-06 22:27:49,117 - chat_view - DEBUG - ChatView: Connecting signals from ViewModel.
2025-07-06 22:27:49,123 - chat_view - DEBUG - ChatView: Loading HTML from file:///D:/Projects/CodePilotAI/chat_template.html
2025-07-06 22:27:49,123 - chat_model - INFO - GeminiModelsListWorker: Model list (42) successfully received.
QObject: shared QObject was deleted directly. The program is malformed and may crash.
PS D:\Projects\CodePilotAI> 
Enter fullscreen mode Exit fullscreen mode

AI:

Understood. The QObject: shared QObject was deleted directly error occurs due to incorrect lifecycle management of objects that have been moved to a thread (moveToThread). Typically, this means the object was deleted manually or from a different thread, when its ownership should have belonged to the thread it was moved to.
Previous attempts to fix this may have been overly complex or could have caused duplicate deletions.
Cause of the Error:
The standard Qt pattern for working with QThread and workers moved to that thread is as follows:

  1. The worker is created in the main thread.
  2. The worker is moved to the new thread: worker.moveToThread(thread). Now the worker "lives" in this thread, and its deleteLater() call must be processed by this thread's event loop.
  3. When the worker finishes its job, it emits a finished signal.
  4. This finished signal is used to:
    • Ask its thread to quit: worker.finished.connect(thread.quit).
    • Queue itself for deletion: worker.finished.connect(worker.deleteLater).
  5. When the thread finishes its event loop (after quit() is called), it emits its own finished signal.
  6. This finished signal is used to queue the thread object itself for deletion: thread.finished.connect(thread.deleteLater).

Your previous implementation might have been calling deleteLater() multiple times or in the wrong order, which led to the error.

Proposed Fixes:
I will standardize the lifecycle management of the GeminiModelLoaderWorker and GeminiModelsListWorker threads and workers in chat_model.py according to the recommended Qt pattern:

  1. Signal Cleanup: Before a new thread is started, all previous signal connections from the old worker and thread will be forcibly disconnected. This will prevent slots from being called on objects that are inactive or pending deletion.
  2. Standard deleteLater() Connections: worker.deleteLater() will be called when the worker's job is done, and thread.deleteLater() will be called when the thread itself finishes.
  3. Nullifying Python References: The Python references to _gemini_loader_thread, _gemini_loader_worker, etc., will be set to None immediately after the thread is told to quit (quit()). This is safe because the actual deletion of the C++ objects happens asynchronously via deleteLater(), and nullifying the references prevents any further attempts to access these soon-to-be-deleted objects from Python.

This should resolve the QObject: shared QObject was deleted directly issue.

Should I provide the full code, or just the corrected fragments?

08
09
The AI explains why it happened and offers a solution. And so on.

The Result

The final result is a working program that does everything I had in mind.
Screenshot-33
With it, a user can have an open-ended conversation with the AI without analyzing any files. It can analyze GitHub repositories or local project folders and create a project from scratch, saving the code as files in the project directory. When you modify the code directly within the application, you can see what's been changed. Conversations can be saved and exported to HTML/Markdown. I tried to pack in every useful feature I could think of.

There are 3 analysis modes

  1. Files are analyzed in their entirety, and the AI sees their full content. This is suitable for small projects.
  2. Files are broken down into chunks, and a summary is created for each file. All of this is attached to the user's prompt. The AI gets a more structural view of the project. This is suitable for small and medium-sized projects.
  3. Files are broken into chunks, and summaries are also created. When a user makes a request, the program finds the most relevant chunks and summaries and attaches them to the query.

In all three modes, a dependency map of functions and classes is created. The AI can see what interacts with what across different files.

The program works using the Gemini API. Essentially, I adapted the capabilities of Google AI Studio for my own needs.

For those who want to start but are hesitant

So, why am I writing all this? It occurred to me that there are many people like me who want to create something—a program for themselves, or maybe for others. This is what prompted me to share my experience.

I'm not great with syntax. I believe that AI will take programmers' work to a whole new level. You can confidently delegate routine tasks to an AI. It can easily write a project's boilerplate and help with planning the structure and logic of projects. All of this significantly speeds up the development process.

Maybe this application will help some professional in their work, too.

I still don't know Python syntax. But I created a product. It's likely that soon, one of a developer's most essential skills will be the ability to ask the right questions.

That's all.

If you're interested, you can learn more about the program and its implementation on my GitHub.

The code is open source, and I'd be happy if my experience helps someone start their own journey.

An example dialogue with the AI in the application
Program demo video

Top comments (0)