I didn't plan to build a sentiment analysis project.
The idea came up casually after a professor mentioned Manus AI, a tool capable of generating code from natural language descriptions. Since I was already familiar with other AI tools like ChatGPT, Claude AI, and Gemini, I decided to run a comparison between them.
For this, I chose an example Python project from a project-ideas website: a sentiment analysis app. The idea was simple: send the same prompt to all four AIs and see how each one performed.
What was supposed to be just a quick test ended up becoming a full experience that taught me much more than I expected, not so much about sentiment analysis or language models, but about documentation, best practices, and responsible use of artificial intelligence.
Building the Project with Multiple AIs
After receiving code from all four AIs, I started analyzing the differences. None of the results were perfect: there were bugs, different structures, and distinct approaches to the same task.
I fixed bugs, tested snippets, and gradually built a functional version of the project based on what worked best from each result.
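To give a sense of the task itself, here's a minimal sketch of the kind of sentiment logic the prompt asked for. I'm using TextBlob purely as an illustration; this isn't the actual code from the repository, and each AI took a different approach:

```python
# Illustrative sketch, not the project's actual code.
# TextBlob's default analyzer returns a polarity score from -1.0 to 1.0.
from textblob import TextBlob

def classify_sentiment(text: str) -> str:
    polarity = TextBlob(text).sentiment.polarity
    if polarity > 0.1:
        return "positive"
    if polarity < -0.1:
        return "negative"
    return "neutral"

print(classify_sentiment("I loved how easy this was!"))   # positive
print(classify_sentiment("This is confusing and slow."))  # negative
```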
At this point, the project was already in a GitHub repository. And that's when I started thinking:
How can I document this process in the clearest, most ethical, and useful way possible?
Exploring Documentation as Part of the Project
I started organizing the documentation in separate files:
- A `README.md` to present the project
- A `CONTRIBUTING.md` with collaboration guidelines
- Project configuration files
- Extra Markdown files explaining the motivations and the comparisons between the AIs
That's when I discovered MkDocs, a tool that transforms `.md` files into a navigable documentation website. Since I was already writing everything in Markdown, I decided to experiment, and the result surprised me.
With little effort, I managed to publish all the documentation as a website on GitHub Pages. I was proud of how this small experiment gained a professional structure.
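For anyone curious, the setup really is small. A minimal `mkdocs.yml` looks something like this (the page names below are hypothetical, not the project's actual structure):

```yaml
# mkdocs.yml - minimal, illustrative config (page names are hypothetical)
# By default, MkDocs expects the Markdown pages to live in a docs/ folder.
site_name: Sentiment Radar APP
nav:
  - Home: index.md
  - Contributing: contributing.md
  - AI Comparison: ai-comparison.md
```

And publishing to GitHub Pages takes just a couple of commands:

```bash
pip install mkdocs
mkdocs serve      # preview the site locally at http://127.0.0.1:8000
mkdocs gh-deploy  # build the site and push it to the gh-pages branch
```

The `gh-deploy` command builds and publishes in a single step, which is what made this part feel almost effortless.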
What I Actually Learned
In the end, I learned very little about sentiment analysis — and that's okay.
My biggest takeaways were:
- How different AIs respond to the same technical prompt
- The importance of documenting AI usage with clarity and responsibility
- The practice of organizing project files and adopting real documentation standards
- The experience of using tools like MkDocs and Streamlit in a practical context (see the sketch after this list)
- The satisfaction of publishing a complete project, with deployment and everything
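Since I mentioned Streamlit above, here's a minimal sketch of the kind of app this project is, reusing the same illustrative TextBlob logic from earlier. Again, this is not the real code from the repository:

```python
# app.py - illustrative sketch, not the project's actual code
# Run with: streamlit run app.py
import streamlit as st
from textblob import TextBlob

st.title("Sentiment Radar (sketch)")
text = st.text_area("Type some text to analyze:")

if st.button("Analyze") and text.strip():
    # Polarity ranges from -1.0 (negative) to 1.0 (positive)
    polarity = TextBlob(text).sentiment.polarity
    if polarity > 0.1:
        label = "Positive"
    elif polarity < -0.1:
        label = "Negative"
    else:
        label = "Neutral"
    st.metric("Sentiment", label, delta=round(polarity, 2))
```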
This project became a personal laboratory, where I explored things that truly interest me — even though that wasn't the initial plan.
Final Reflections
Before, I used to put a lot of pressure on myself to follow schedules, meet goals perfectly, and do projects "the right way." But I realized that I learn best when I follow my curiosity.
That's how this project was born: from a simple test with AIs, without pressure, without rigid planning — but with freedom to explore, make mistakes, and document.
If you also feel that your projects "don't have enough value," maybe you're measuring them with the wrong yardstick. Sometimes the learning lies more in the process than in the technical result.
And sharing this journey can help more people than you imagine.
🔗 Check out the complete project: github.com/Alan-oliveir/Sentiment_Radar_APP
💬 Did you enjoy this experience? Tell me what you think or share your own way of learning independently in the comments!