This is a submission for the Built with Google Gemini: Writing Challenge
What I Built with Google Gemini
I first worked with Google Gemini while building my Stock Price Predictor, a project that used LSTM networks to forecast stock trends. I wanted entering company data to feel simple, so I used Gemini to interpret user input: it could take a phrase like “Apple”, “Tesla shares”, or a direct ticker like “AAPL”, work out what the user meant, and return the corresponding ticker. It handled ambiguous inputs far better than conventional parsing, which made the tool more intuitive to use. This really impressed me, as that sort of functionality is very hard to implement with traditional methods.
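A minimal sketch of how this kind of lookup can be structured. The prompt wording and the helper names (`build_prompt`, `resolve_ticker`) are illustrative, not the project's actual code; any Gemini client call can fill the `ask_model` slot.

```python
# Sketch: resolving free-text company references to tickers with an LLM.
# The prompt wording and function names are assumptions for illustration.

def build_prompt(user_input: str) -> str:
    """Ask the model to reply with a bare ticker symbol only."""
    return (
        "Return only the stock ticker symbol for the company the user means.\n"
        "If the input is already a ticker, return it unchanged.\n"
        f"User input: {user_input!r}\n"
        "Ticker:"
    )

def parse_ticker(raw_reply: str) -> str:
    """Normalise the model's reply to a clean uppercase ticker."""
    return raw_reply.strip().strip(".").upper()

def resolve_ticker(user_input: str, ask_model) -> str:
    """ask_model is any callable prompt -> reply string (e.g. a Gemini call)."""
    return parse_ticker(ask_model(build_prompt(user_input)))

# Demo with a stubbed model reply instead of a live API call:
fake_model = lambda prompt: " aapl \n"
print(resolve_ticker("Apple shares", fake_model))  # AAPL
```

Keeping the prompt-building and reply-parsing separate from the API call also makes the logic easy to test without network access.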
After that, I extended Gemini’s role in an ETL pipeline, where it helped correlate company financial data from CSV files with text-based news reports. Gemini analysed this combined data and helped estimate confidence intervals for whether companies were likely to grow. It tied data and context together in a way that made the analysis both clearer and faster to update, and it offered an easy way to analyse structured financial data alongside news reports — something that previously required unreliable NLP methods.
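A small sketch of the joining step described above: merging per-company CSV financials with news snippets into one prompt before any model is involved. The CSV layout, field names, and sample data are assumptions for illustration.

```python
# Sketch: joining per-company CSV financials with news text before handing
# both to a model. Column names and sample values are illustrative only.
import csv
import io

FINANCIALS_CSV = """ticker,revenue_m,yoy_growth
AAPL,383285,0.02
TSLA,96773,0.19
"""

NEWS = {
    "AAPL": "Apple announces record services revenue.",
    "TSLA": "Tesla expands factory capacity in two regions.",
}

def load_financials(text: str) -> dict:
    """Index CSV rows by ticker for easy correlation with news."""
    return {row["ticker"]: row for row in csv.DictReader(io.StringIO(text))}

def build_analysis_prompt(ticker: str, financials: dict, news: dict) -> str:
    """Combine structured numbers and unstructured text into one prompt."""
    row = financials[ticker]
    return (
        f"Company {ticker}: revenue ${row['revenue_m']}M, "
        f"YoY growth {row['yoy_growth']}.\n"
        f"Recent news: {news.get(ticker, 'none')}\n"
        "Estimate a confidence interval for growth next quarter."
    )

fin = load_financials(FINANCIALS_CSV)
print(build_analysis_prompt("TSLA", fin, NEWS))
```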
Demo
You can find the stock predictor project here:
GitHub: sudo-gg/LSTMStockPredictor
It’s in the name, really. Built for the Data Hackathon by MLH. Ironically, we don’t use an LSTM; we opted for a simpler model, namely an RNN.
To use this you need:
- your own MongoDB connection: create a .env file with your MongoDB connection string, then uncomment the region that creates your database with the details you want (re-comment it when done to avoid duplicates)
- a Gemini API key
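A quick sketch of how the two secrets above might be loaded and validated at startup. The variable names `MONGODB_URI` and `GEMINI_API_KEY` are assumptions; match whatever the repo's .env template actually uses.

```python
# Sketch: loading required secrets from the environment (e.g. populated
# from a .env file). The variable names here are assumptions.
import os

REQUIRED_KEYS = ("MONGODB_URI", "GEMINI_API_KEY")

def load_config() -> dict:
    """Fail fast with a clear message if any required secret is missing."""
    missing = [k for k in REQUIRED_KEYS if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {k: os.environ[k] for k in REQUIRED_KEYS}
```

Failing fast at startup beats discovering a missing key halfway through a pipeline run.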
(The ETL pipeline project isn’t publicly hosted, but it built on similar principles with heavier use of financial text analysis.)
What I Learned
I learned a lot about combining data processing and text interpretation. Getting both numerical and textual data to work smoothly together took more trial and error than I expected. It also taught me how to design cleaner code that lets AI tools assist without changing the core model logic.
There were a few real challenges. I had to deal with Gemini’s API rate limits, manage privacy when working with data, and handle large datasets that sometimes behaved unpredictably.
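For the rate-limit problem, a standard mitigation is exponential backoff around each API call. This is a generic sketch, not the project's code; `RateLimitError` is a stand-in for whatever exception the client library actually raises.

```python
# Sketch: exponential backoff around a rate-limited API call.
# RateLimitError is a placeholder for the client library's real exception.
import time

class RateLimitError(Exception):
    pass

def call_with_backoff(fn, retries=4, base_delay=1.0):
    """Retry fn() on rate-limit errors, doubling the wait each attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a function that fails twice, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # ok
```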
Gemini also improved my workflow considerably. I often switch between programming languages, and remembering syntax can slow me down. Using Gemini made it easier to stay productive and focus on the main structure of the code instead of small details.
Furthermore, when learning new programming languages, Gemini was unmatched by traditional learning methods: it let me learn syntax and build a project simultaneously!
Using these tools, I realised I didn’t want to get left behind: rejecting AI felt comparable to my parents rejecting their devices in favour of books! (Not that I think computers can replace the experience of a real book, but my point still stands 😄)
Since then, I have used Gemini almost every day, not just for coding but for any monotonous task, freeing up my time for more meaningful things.
Google Gemini Feedback
Gemini integrated well with the rest of my setup, and I found it effective at handling imperfect or unclear text. Its general understanding of context made it great for early data interpretation or cleaning tasks.
Some aspects could still be improved. When handling long text, Gemini sometimes missed finer details or repeated itself. In multi-step workflows, keeping context consistent between calls sometimes broke down, and larger tasks could get expensive quickly. This was especially acute when working with larger code bases, although luckily, as someone who is not a professional software engineer but a humble maths student, it didn’t affect me as much.
Even with these limitations, the experience was genuinely positive. It helped me work faster and think differently about how to connect language and data. My next step is to experiment with similar techniques for parsing cybersecurity event logs, where text interpretation and data structure again play a key role. I also hope to use the Raspberry Pi GenAI kit to apply AI to physical tasks; for instance, I want to test how AI could be combined with computer vision to build a home-made security system that alerts my devices!
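As a taste of the log-parsing direction, here is a tiny sketch of structuring a raw syslog-style line before any model sees it; the log format and field names are assumptions for illustration.

```python
# Sketch: structuring a raw syslog-style line into fields before handing
# it to a model. The log format and field names are assumptions.
import re

LOG_PATTERN = re.compile(
    r"(?P<timestamp>\w{3} +\d+ [\d:]+) "
    r"(?P<host>\S+) (?P<process>[\w\-/]+): (?P<message>.*)"
)

def parse_log_line(line: str) -> dict:
    """Return named fields, or fall back to the raw message if no match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else {"message": line}

sample = "Oct  3 14:02:11 pi-gateway sshd: Failed password for root from 10.0.0.5"
print(parse_log_line(sample)["host"])  # pi-gateway
```

Pre-structuring like this keeps the model's job small: interpreting the `message` field rather than the whole raw line.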
Thank you for considering my submission!