jimamuto

MY JOURNEY AS A DEVELOPER

My Developer Journey: From Dependency Hell to Developing Production-Grade Applications

Level 1: Lunartech

The Beginning: The Experimentation Mindset

Hello readers, my name is Jim Amuto, and this is my story. One thing about me is that I love experimenting. In every project I do, I don't follow the norm. I ask myself: What am I doing? What is the end goal? How much faster can I ship it to actual users or increase my workflow speed?

That's how I developed a passion for software development, mostly driven by my love for DevOps. I am grateful for the position I am currently in because I was able to identify loopholes and the need to ship code faster.


The Challenge: Full Stack in an AI Startup

I remember being on a deadline with so many things to do. By the nature of an AI startup, you are required to perform full-stack roles rather than specializing, which is good: you become a jack of all trades first and then specialize in one area as you grow.


Shipping Code Under Pressure

I thought quickly about how to make sure I could still ship. My big project at the time was an AI translator that had to handle hours-long content, a requirement my boss insisted on. It was difficult to implement because my device was a bottleneck; AI/ML workloads are very computationally intensive, but no excuses. I tried again and again, and it kept failing.

So I reached out to a friend who supposedly had a better device. We got on a call, he cloned the repo, and I told him what to install first. Then he ran the dependency install script and all hell broke loose: literal "dependency hell." He fixed what he could and continued with what I requested, but it still failed because there were so many version-specific packages to install. I eventually gave up and told my boss that my PC wasn't enough.

I did a lot of research on cloud computing and found plenty of options for renting GPUs, CPUs, and so on, but they all cost money I didn't have. So I pitched it to my boss and waited patiently for feedback, which took a long time.


The Revelation: Why Docker?

Why am I telling you this? I had a revelation: if my code couldn't run on my friend's setup, what made me think it could run in a production environment?

Time was running out. I had multiple Jira tickets to complete, and two months in, I still couldn't finish them fully. Some tasks looked easy on paper, like building a simple progress tracker, but the pieces of my stack simply refused to cooperate.

At that point I accepted that frontend can be harder than backend; why else would such a small thing take weeks? Skipping it wasn't an option, because no user wants to stare at a broken progress tracker. I almost gave up under the pressure to finish my tasks.

Then I told myself, maybe things aren't working because my approach is wrong. So I did more research and decided to try using Docker to package my code.

Basically, Docker packages everything your code needs to run, the runtime, dependencies, and configuration, into an image, and from that image you launch containers: ephemeral, isolated instances of your application.
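As an illustration, a minimal Dockerfile for a FastAPI-style backend might look like this (the file names and paths here are assumptions for the sketch, not my actual setup):

```dockerfile
# Base image with Python preinstalled
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Run the FastAPI app with uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Anyone with Docker installed can build and run this the same way, which is exactly what fixes the "works on my machine" problem I kept hitting.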


The Docker Gamble

Docker was a big gamble for me. It took hours to implement and came with plenty of reality checks: builds were slow, and my AI coding agent wasn't helping because it kept taking actions I would never attempt now. It erased my progress, and I ended up backtracking without realizing it at the time.

So I sat down, planned with my agent, and asked questions: Why rebuild this and not that? When should I clear the cache and volumes?

This saved me so much time, because I had been spending hours on the wrong fixes, restarting containers when what I actually needed was to rebuild the image so new dependencies were baked in. Restarting only bounces the running process; rebuilding produces a fresh image.

I enabled hot reload by bind-mounting my code into the backend and frontend services, so the containers picked up my changes instantly without a rebuild. This was a game changer: I could iterate quickly while still ensuring my code ran in an isolated environment.
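A rough docker-compose sketch of that bind-mount setup (service names, paths, and commands are illustrative, not my exact files):

```yaml
services:
  backend:
    build: ./backend
    volumes:
      - ./backend:/app        # bind mount: edits on the host appear in the container
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
  frontend:
    build: ./frontend
    volumes:
      - ./frontend:/app
      - /app/node_modules     # keep the container's node_modules, don't shadow it
    command: npm run dev
```

`--reload` (uvicorn) and `npm run dev` (Next.js) watch the mounted code, so changes show up without rebuilding the image.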


The Persistence Challenge

Everything was perfect until I had to meet an objective that required the application to survive restarts. One thing in my favor was that each container kept its main process running continuously, so I didn't have to restart a terminal for changes to take effect, but the changes still had to show up on the frontend.

My stack includes:

  • Next.js for the frontend
  • FastAPI for the backend
  • Countless model dependencies
  • Ollama for my local models

The Ollama Experiment

Why Ollama? I got on the hype train: local models safeguard your data, have no rate limits, and cost nothing per request. This sounded amazing.

As I mentioned earlier, I was building a translator, and choosing models outside of hosted APIs was a terrible experience. The NMT models I tried produced literal, context-free translations, which was bad. I spent a lot of time hunting for free models and went through a lot of trouble, but I suppose that's the price you pay for free.

Then my boss sent me a link to TranslateGemma, an open-source Google model. It was perfect at the time. Even better, a prebuilt Ollama image was available, so all I had to do was run:

```shell
ollama pull translategemma:4b
```

My system couldn't handle the 12B and 27B parameter versions, but I was committed. I set up a container for Ollama, and everything worked perfectly for a few days.
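For context, talking to a model served by Ollama is just an HTTP request to its local API. Here is a standard-library-only sketch; the `/api/generate` endpoint and the `model`/`prompt`/`stream` fields come from Ollama's documented API, while the translation prompt wording and function names are my own illustration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port


def build_generate_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def translate(text: str, target_lang: str, model: str = "translategemma:4b") -> str:
    """Send a translation prompt to a locally running Ollama server."""
    payload = build_generate_payload(
        model, f"Translate the following text to {target_lang}:\n{text}"
    )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In the container setup, the backend service would simply call `translate(...)` against the Ollama service's address instead of `localhost`.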

Then I tested long-running content, and everything went wrong. Ollama was consuming all the memory allocated to my containers, and resources ran out. I decided to run Ollama locally instead.

But then I asked myself: if it consumes that much memory, is it even viable in production? That would mean high costs. I kept that thought at the back of my mind.


Enter Redis: The Swiss Army Knife

My project has two login methods: a demo login and an auth login. I built the demo login for fast prototyping because my boss initially insisted on it; it kept everything in in-memory local storage. I set the auth login aside for later and spent most of my time in demo mode, so that whenever my boss asked for progress I could show it immediately.

However, storing outputs in local memory contradicted the requirement that the app survive restarts; anything held only in memory disappears the moment the process dies, which wouldn't fly in production. I had to find a solution.

That's when I discovered Redis: a lightweight in-memory data store with many uses, from caching to persistence, and it is very fast. At first I used it as storage for my demo jobs, since I wasn't running a database yet, and it worked perfectly.

Unlocking Redis' Potential

Later, I realized I wasn't using Redis to its full potential. I had the backend query Redis for task progress while the frontend polled the backend for it, but that approach failed.

I tried using WebSockets to push progress updates instead, and it partially worked. I was excited, especially since I needed progress bars for five different tasks.
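The updates pushed over a WebSocket can be as simple as small JSON messages. A sketch of the shape I have in mind (field names are illustrative), plus a helper that folds the five task bars into one overall figure:

```python
import json


def progress_message(task: str, done: int, total: int) -> str:
    """One WebSocket frame: which task moved and how far along it is."""
    pct = int(done * 100 / total) if total else 0
    return json.dumps({"type": "progress", "task": task, "percent": pct})


def overall_percent(task_percents: dict[str, int]) -> int:
    """Average the per-task bars (e.g. five of them) into a single overall bar."""
    if not task_percents:
        return 0
    return sum(task_percents.values()) // len(task_percents)
```

The frontend just parses each frame and moves the matching bar, so a broken tracker becomes a data problem instead of a UI problem.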


The Final Piece: Celery

I had often heard that Redis and Celery go hand in hand. I used to wonder what RabbitMQ and Kafka were; it turns out they are message brokers. And once again my superhero Redis could step in, acting as both the message broker and the task queue. That makes four roles.

Celery, as I understand it, provides the workers: they pull tasks off the queue, process them, and report completion back through the broker so the frontend knows to fetch the results.
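Celery itself needs a running broker, so as a stand-in here is a tiny in-process analogue of that queue → worker → result flow, just to make the moving parts concrete. All names are illustrative, and `text.upper()` stands in for the real translation work; Celery with Redis as the broker replaces every piece of this plumbing:

```python
import queue
import threading

task_queue: "queue.Queue[tuple[str, str]]" = queue.Queue()  # broker: holds pending tasks
results: dict[str, str] = {}                                # result backend: finished work


def worker() -> None:
    """Like a Celery worker: pull a task, process it, store the result."""
    while True:
        job_id, text = task_queue.get()
        results[job_id] = text.upper()  # stand-in for the real processing
        task_queue.task_done()          # "completion" signal back to the broker


threading.Thread(target=worker, daemon=True).start()

task_queue.put(("job-1", "hello"))  # the API enqueues instead of doing the work itself
task_queue.join()                   # with Celery, the frontend polls/fetches instead
```

The key design point is the same as in the real setup: the API's only job is to enqueue, so no request handler ever blocks on heavy work.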

It took hours to implement, but this was genuinely one of the best things I have ever done, and it fixed so many issues. I realized FastAPI was part of the problem: running the heavy tasks inside it meant it couldn't handle the concurrent work the way I needed. That's why I never got proper updates before.

Now all my progress trackers work, and all I have left to do is make the progress bars look uniform.


To be continued...
