From Non-Profit Ops Manager to Building Neural Networks: Week 1
Six months ago, I was managing operations for a basketball association. Scheduling, budgets, membership data, spreadsheets. Good work, meaningful work — but I kept looking sideways at what was happening in AI and feeling like I was watching the most important technological shift in history from the wrong side of the fence.
So I made a decision. I was going to get to the other side.
Where I Started
I'm not starting from zero. I completed a HyperionDev Data Science Bootcamp earlier this year, graduating first in my class. I can wrangle data, build basic ML models, and deploy them. But data analysis and working deeply in AI are two very different things.
My goal isn't to become a data analyst who dabbles in machine learning. I want to be working at the frontier of AI development within the next 5-6 years. Building training environments. Working on agent systems. The kind of work that actually shapes where this technology goes.
That's the target. It's ambitious. It's probably going to take everything I have. I'm okay with that.
Week 1: What I Actually Built
I didn't spend Week 1 watching tutorials and feeling inspired. I built things.
House Price Predictor — a full end-to-end ML web app. Real dataset (545 housing records), data cleaning pipeline, model selection (tested Linear Regression vs Random Forest, documented why Linear won on this dataset), and deployed live on Streamlit Cloud. You can actually use it right now.
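For a sense of what that model comparison looks like, here is a minimal sketch (my reconstruction, not the project's actual code, and using synthetic stand-in data rather than the real 545-record housing dataset):

```python
# Hypothetical sketch of a Linear Regression vs Random Forest comparison.
# Synthetic data stands in for the real housing records.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Fake features: e.g. area (sq ft), bedrooms, age — 545 rows like the real set
X = rng.uniform(1, 500, size=(545, 3))
y = X @ np.array([300.0, 5000.0, -200.0]) + rng.normal(0, 1e4, 545)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("Linear", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_train, y_train)
    print(name, round(r2_score(y_test, model.predict(X_test)), 3))
```

On mostly linear data like this, the simpler model tends to win on held-out R², which is the kind of evidence that would justify picking Linear Regression over the more flexible Random Forest.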
Sales Analytics Dashboard — a Streamlit web app built with production-grade architecture. Real dataset (Superstore, 545 records), a robust data-validation pipeline (multi-delimiter and multi-encoding support), 8 interactive visualization types, and dynamic filtering. 94.9% test coverage across unit, integration, and UI tests. Deployed live on Streamlit Cloud — upload your own CSV or explore the sample Superstore dataset in real time.
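"Multi-delimiter/encoding support" can be done in a few ways; one common pattern (a sketch of the general technique, not the dashboard's actual pipeline) is to try a short list of encodings and then sniff the delimiter before parsing:

```python
# Hedged sketch: robust CSV loading via encoding fallback + delimiter sniffing.
import csv
import io
import pandas as pd

def load_csv_robust(raw: bytes) -> pd.DataFrame:
    """Decode with the first encoding that works, sniff the delimiter, parse."""
    # latin-1 decodes any byte sequence, so it acts as a last-resort fallback
    for encoding in ("utf-8", "latin-1"):
        try:
            text = raw.decode(encoding)
            break
        except UnicodeDecodeError:
            continue
    # Guess the delimiter from the raw text, restricted to common candidates
    dialect = csv.Sniffer().sniff(text, delimiters=",;\t|")
    return pd.read_csv(io.StringIO(text), sep=dialect.delimiter)

df = load_csv_robust(b"order;region;sales\nA;East;100\nB;West;250")
print(df.shape)
```

The function name and structure here are my own illustration; the point is that an upload-your-own-CSV feature has to defend against inputs that plain `pd.read_csv` defaults would choke on.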
Neural Network from Scratch — not using PyTorch, not using scikit-learn. Pure NumPy. I implemented a perceptron with sigmoid activation and gradient descent by hand, tested it on AND, OR, and NAND logic gates, and then threw it at a digit recognition task. Watching loss curves drop as the weights update is genuinely one of the most satisfying things I've experienced learning anything.
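The core of that perceptron fits in a few lines of NumPy. This is a minimal sketch of the idea (my reconstruction, not the post's exact code), training a single sigmoid unit on the AND gate by gradient descent:

```python
# Minimal single-perceptron sketch: sigmoid activation, gradient descent, AND gate.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)  # AND gate targets

rng = np.random.default_rng(42)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias
lr = 1.0                 # learning rate

for epoch in range(5000):
    pred = sigmoid(X @ w + b)
    # For binary cross-entropy with a sigmoid output, the gradient of the loss
    # w.r.t. the pre-activation simplifies to (pred - y)
    error = pred - y
    w -= lr * X.T @ error / len(y)
    b -= lr * error.mean()

print(np.round(sigmoid(X @ w + b)).astype(int))  # learned AND truth table
```

Swap in the OR or NAND targets and the same loop learns those too, since all three gates are linearly separable. XOR is not, which is exactly the limitation that motivates multi-layer networks and backpropagation.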
I also started Fast.ai's Practical Deep Learning course — which, by the way, is completely free and genuinely excellent — and got through Lesson 1 including training my first image classifier.
What Struck Me This Week
I knew AI was big. I didn't fully appreciate how big until I started pulling on the threads.
Deep learning alone is a rabbit hole with no visible bottom. Then there's reinforcement learning, multi-agent systems, distributed training infrastructure, interpretability research, alignment work — and these aren't shallow topics. Each one is a career's worth of depth. The field isn't just large. It's larger than any single field I've encountered, and it's expanding faster than anyone can fully track.
That's not intimidating to me. It's electric.
Six months ago I was building financial models in Excel. This week I implemented backpropagation by hand. The pace of what's possible when you commit fully to learning something is genuinely surprising, even to me.
The Honest Part
I'm at the tip of the iceberg. I know that.
I can build and deploy ML models, but I don't yet have the depth in reinforcement learning, distributed systems, or research methodology that serious AI work requires. I haven't published anything. I don't have a computer science degree. The people working at the frontier of this field are extraordinary, and I'm not there yet.
But I'm not trying to be there yet. I'm trying to be there in six years. And right now, Week 1 of that journey, I'm exactly where I need to be — building foundations, staying consistent, and moving forward every single day.
What's Next
Week 2 kicks off with finishing Fast.ai Part 1, going deeper on CNNs, and implementing a Transformer architecture from scratch. From there the plan moves into reinforcement learning fundamentals — the area I'm most excited about, and the direction I want to ultimately specialise in.
I'll be documenting the journey here as I go. The wins, the confusion, the errors I couldn't figure out for two hours, and the moments where something finally clicks.
If you want to follow along or check out what I've been building, my GitHub is here: github.com/JemHRice
The house price predictor and the sales dashboard are both live if you want to poke around.
Once the fundamentals are locked in, I genuinely cannot wait to see what this skillset opens up. The projects that feel out of reach right now won't always be. That's what keeps me going.
Week 1 complete. See you next week.