Five months.
Five months of relentless learning, five months of coding and debugging, and five months of learning a completely different language just for five days of added frustration...
What follows is my journey to building my first ever piece of professional code. TL;DR: this blog is going to be very long, so pace yourself.
Let me go back to where it began:
July 2025:
It was an ordinary summer day, with temperatures consistently reaching 35 degrees Celsius. My friend and I were discussing adding a chatbot to our then-recent project, one that would help users navigate the interface and act as a mini helper of sorts.
Following up on this discussion, I started looking into how I could build a responsive chatbot without any prior knowledge.
(This came after a 'loss' at a hackathon, caused in part by an overdependence on LLMs while learning a new package, which had me reconsidering my approach to LLM-assisted learning.)
During my search, I came across multiple Python libraries, complete packages, and sometimes even pre-existing code to copy and paste.
Again, this was supposed to be a self-imposed learning challenge. As I did not have any experience with how chatbots work or interact with other tech, I was flying blind.
I ended up choosing the following for consideration when I began building a model:
- LangChain
- LlamaIndex
- RASA NLU
- spaCy
- SciSpacy
I also considered BERT variants like BioBERT, PubMedBERT, etc.
Now, given my inexperience and my urge to learn something that would point me in the right direction, I got stuck between using BioBERT or PubMedBERT and building my own with RASA NLU. I stuck with RASA...
The problem with this approach was that I actually needed to learn how the NLU worked, starting from the syntax.
YML
I'll admit I initially had no idea what YML stood for; it turns out YML is just an alternate file extension for YAML, which (recursively) stands for "YAML Ain't Markup Language".
Strictly speaking, YAML is a data serialization language rather than a programming language, but it works in a syntactical sense similar to Python.
Unlike Python, which requires four spaces for an indentation, YAML marks list items with '-' and nests using indentation, typically two spaces per level.
nlu:
- intent: greet
  examples: |
    - hey
    - hello
    - hi
Though it looks complicated, YAML is probably the easiest language I have ever picked up, even compared to HTML. As you can see in the block above, it really is that easy to form example pairs for the model to use in conversations. Under nlu (Natural Language Understanding), we are defining an intent 'greet' and correlating it with examples like 'hey', 'hello' and 'hi'.
For those unaware, what exactly is an Intent?
Unlike your large-scale LLMs, which require blocks of text for model training, RASA functions quite differently.
RASA's NLU module handles things like intents, entities, sentence parsing and information conversion.
(Oversimplification)
Basically, an intent can be seen as a class with defining characteristics: every time a message like 'hey' is sent, it is consistently flagged by the processor as type greet, i.e. the user is greeting you!
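Entities work the same way in the training data: you annotate spans of the example text so the NLU can extract them alongside the intent. A minimal sketch (this intent and its entities are illustrative examples of Rasa's annotation format, not from the actual project):

```yaml
nlu:
- intent: book_appointment
  examples: |
    - I want to see [Dr. Rao](doctor) on [Monday](day)
    - book me with [Dr. Mehta](doctor) for [Friday](day)
```

Here the [text](entity) markup flags 'Monday' as an entity of type day, which the bot can later reuse in a response or a custom action.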
Now, to respond to this: unlike LLMs, where we would have a bot trained on information covering what to say and when, RASA just lets you write out any response you might want the bot to speak. This is done under utterances like:
utter_greet:
- text: "Hey! How are you?"
- text: "Hey! How are you doing today?"
- text: "Hi there! What can I do for you?"
This allows you to have complete autonomy over what the response should be or should not be.
Finally, how do you connect the dots, i.e. tell the bot that
for - intent: greet -> utter_greet ?
That is where stories come in. Stories are basically paths that a bot can expect during a conversation.
- story: greetings
  steps:
  - intent: greet
  - action: utter_greet
This tells the bot: if the user provides the intent 'greet', respond with one of the responses from utter_greet. This is a particularly great feature, as it lets you build routes that go arbitrarily deep, controlling every single case, or keep things open-ended and work off overlapping paths and controlled fallbacks.
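To make the "controlled fallbacks" part concrete: Rasa lets you add a FallbackClassifier to the NLU pipeline so that low-confidence messages get routed to a safe default response via a rule. A rough sketch (the threshold value and the utter_default response name here are illustrative choices, not the project's actual config):

```yaml
# config.yml (pipeline excerpt)
pipeline:
- name: FallbackClassifier
  threshold: 0.4

# rules.yml
rules:
- rule: Fall back when the bot is unsure
  steps:
  - intent: nlu_fallback
  - action: utter_default
```

Whenever no intent clears the confidence threshold, the special nlu_fallback intent fires and the bot replies with utter_default instead of guessing.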
This helped me build my first ever chatbot, one that actually turned out to be genuinely useful as both an information tool and a user guide. When you start incorporating actions*, it becomes one of the most beginner-friendly ways to create a functional chatbot.
*This isn’t a Rasa-focused blog, though, so I’ll move on.
August 2025:
When our studies resumed, we were told to “build something for the community.” The problem inherent in that statement was the sheer vagueness of it, and the fact that almost everything imaginable has already been done, either independently or via some coding agent. So, I took some time, went around my community, asked what people felt was lacking, and a query of quite some importance came into view.
In a time where people are using AI to replace almost everything, clinics remain strangely untouched. The same pattern held true even in larger hospitals. And yet, these are the exact places where AI would be incredibly useful, especially for reducing the endless piles of paperwork that eat up so much time and attention. Oh, seems like we found our coding idea then… though it’s not completely acceptable as one crucial problem still remains.
LLMs are quite prone to hallucination; they can provide incorrect answers to even the simplest medical questions if the prompt isn't grounded properly. This issue becomes even more significant when you consider the existing distrust professionals already have toward AI, given its current reputation with the general public.
So, this required a very specific type of model, one whose outputs were reliable in the sense that they could be chosen or constrained by the programmer, hopefully. And beyond that, the degree of control, the coverage of responses, and the predictability of those responses were also major concerns. This led to a peculiar problem: if only we knew the solution…
Oh wait, don’t we already?
So, for nearly three months, from August to November, I was working on this:
Introducing CLIN-BOT/Aarogya AI
Inspired by my little epiphany about our findings, and having studied the required language just a month before, I got to work. For our little endeavor, I took on the job of creating the backend model for the bot's responses while also working on the database required to hold login credentials, with the other members picking up the front-end, UI and UX work, as well as helping me with model builds.
Working on this project all alone wouldn't have been possible; check out the rest of the devs as well:
Front-end: Parth-LinkedIn
UI/UX testing: Subham-LinkedIn
Backend testing/Intent-response pairing: Ram-LinkedIn
All the while, I was also working on another tangential, but equally important, academic project that wrapped up only recently. So, the workload was already heavy. And with the added responsibility of being the project lead, these turned out to be some of the most stressful months I’ve had as a student.
The premise behind CLIN-BOT was to create an information bot to reduce the load of informational conversations on counsellors. But as development progressed, we concluded that the bot was made for more: more reliable, more impactful work. That's when it clicked, and we decided the perfect use case was a patient-wise database that converses with the user.
It reduces the work a doctor needs to do to find the files, read up, and understand the case.
All inputs can be directly provided by previous doctors, since the system can support patient-wise intent and response updates through a simple front-end UI.
For now, we stuck with helping newer clinical staff by turning it into an educational but clinic-oriented chatbot.
Given this motivation and the drive to build something on my own, we ended up creating the first edu-clinically oriented conversational bot directed at my immediate local community. After finishing it, we submitted the project to a global hackathon just to see whether people would find it usable. Surprisingly, we placed 9th out of 180+ projects and 300+ participants. That was a nice bonus, especially considering we were only three spots away from winning.
(Top 6 were taken as winners.)
Now that post-production is finally done, the next step is to actually see how helpful this thing is in a real clinical setting. And if you do end up using the bot, I’d really appreciate any suggestions or feedback you might have, anything that helps me make this even slightly better is more than welcome.
For all computer-based users, try out the website for yourself:

For all mobile-first users, a demo video is available below:
As for the five days of frustration mentioned earlier: that's an experience worthy of a blog of its own, and given how long this one already is, I think this'll do for today.
Thank you for trying out the bot, and if you couldn't, thanks for reading this devlog/journal entry.
Until next time covering another major project.
Peace!

