<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Erick Mwangi Muguchia </title>
    <description>The latest articles on DEV Community by Erick Mwangi Muguchia  (@muguchiaerickmwangi).</description>
    <link>https://dev.to/muguchiaerickmwangi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3610630%2F6ad407ec-048d-47dd-acc9-1d742265a0fe.jpg</url>
      <title>DEV Community: Erick Mwangi Muguchia </title>
      <link>https://dev.to/muguchiaerickmwangi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/muguchiaerickmwangi"/>
    <language>en</language>
    <item>
      <title>Building a Budget Home Lab for Local LLMs</title>
      <dc:creator>Erick Mwangi Muguchia </dc:creator>
      <pubDate>Fri, 24 Apr 2026 04:42:05 +0000</pubDate>
      <link>https://dev.to/muguchiaerickmwangi/building-a-budget-home-lab-for-local-llms-4l9l</link>
      <guid>https://dev.to/muguchiaerickmwangi/building-a-budget-home-lab-for-local-llms-4l9l</guid>
      <description>&lt;p&gt;Yesterday—April 23rd I sat down with a blank note and a messy spreadsheet. Not for a client project or a work task. Just for something that’s been gnawing at me for weeks: &lt;strong&gt;I want to run large language models on my own hardware, at home, on my terms.&lt;/strong&gt; Not in a cloud notebook that spins down after an hour. Not behind a metered API key. Something I can experiment with at 2 a.m. without worrying about surprise bills.&lt;/p&gt;

&lt;p&gt;It started as a casual “what if,” and by the end of the evening I had a parts list, a stack of tradeoffs, and a very real budget scrawled on a piece of paper. I’m sharing that blueprint here—not as a polished build guide, but as a developer’s logbook. The point isn’t to show off a flawless system. It’s to think through the decisions, admit the unknowns, and maybe get some advice from folks who’ve walked this path before.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Core Idea
&lt;/h2&gt;

&lt;p&gt;Host some websites, tinker with local LLMs (think Llama, Mistral, Phi), and eventually offer a small hosted inference service to friends or local devs—something modest that might grow over time. The long-term dream: a self-contained AI lab where I control the hardware, the models, and the data.&lt;/p&gt;

&lt;p&gt;But first, reality: I have a limited budget and I’m piecing this together part by part in a region where used enterprise gear isn’t as easy to come by. I’m in Kenya, so prices here are in Kenyan Shillings (KSh), but I’ll include rough USD equivalents for global context.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Hardware Blueprint (v0.1)
&lt;/h2&gt;

&lt;p&gt;Here’s what I’m planning to put in the case. It’s not exotic. It’s not a server rack full of A100s. It’s a quiet desktop that I hope will punch above its weight for quantized models.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Choice&lt;/th&gt;
&lt;th&gt;Actual KSh Range&lt;/th&gt;
&lt;th&gt;Real-World USD&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GPU&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;RTX 3060 12GB (new/used, local market)&lt;/td&gt;
&lt;td&gt;45,000 – 63,000&lt;/td&gt;
&lt;td&gt;$350 – $490&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CPU&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Ryzen 5 3600 (tray/used)&lt;/td&gt;
&lt;td&gt;~13,000&lt;/td&gt;
&lt;td&gt;$100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Motherboard&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;B450 / B550 (decent VRMs)&lt;/td&gt;
&lt;td&gt;7,700 – 14,200&lt;/td&gt;
&lt;td&gt;$60 – $110&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;RAM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;32GB DDR4 (single 32GB stick)&lt;/td&gt;
&lt;td&gt;16,500 – 18,950&lt;/td&gt;
&lt;td&gt;$128 – $147&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;PSU&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;650W – 750W 80+ Bronze/Gold&lt;/td&gt;
&lt;td&gt;8,000 – 10,000&lt;/td&gt;
&lt;td&gt;$62 – $77&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Case + Cooling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Budget airflow case, 3 fans&lt;/td&gt;
&lt;td&gt;3,000 – 6,000&lt;/td&gt;
&lt;td&gt;$23 – $46&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~93,000 – 125,000&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$720 – $970&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That’s a wide range, I know. The final number will depend heavily on whether I snag a decent used GPU and how fussy I get about the motherboard and PSU. But even at the upper end, it’s cheaper than a mid-range gaming laptop.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Each Part Earned Its Spot
&lt;/h2&gt;

&lt;h3&gt;
  
  
  GPU: RTX 3060 12GB over the 3090
&lt;/h3&gt;

&lt;p&gt;I spent a long time staring at RTX 3090 listings. 24 GB of VRAM is seductive—it can run 13B models at 8-bit, maybe even a quantized 30B, without breaking a sweat. But at $700–$1000 used, it would eat up my entire budget. I’d be left with a beast of a GPU and no money for a CPU, RAM, or even a case to put it in.&lt;/p&gt;

&lt;p&gt;So I pivoted. The RTX 3060 12GB. Not the 8GB variant—that 4 GB matters a lot for LLMs. 12 GB of VRAM will comfortably hold a 4-bit quantized Llama-2-13B or Mistral, and leaves room for the KV cache during inference. For smaller models like Phi-2 or a q4 Llama-7B, I’ll even have VRAM to spare for batching. Yes, I’ll buy used. Yes, I’m nervous about mining cards. But the local market has some ex-gaming units, and I’ll stress-test before committing.&lt;/p&gt;
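
&lt;p&gt;To sanity-check that, here's the back-of-envelope math I'm leaning on (a rough heuristic, not a benchmark; 4-bit quantization tends to land around 4.5 bits per weight in practice once group scales are counted, and the overhead figure is my own allowance):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Back-of-envelope VRAM estimate for a quantized model (rough heuristic only).
def vram_estimate_gb(params_billion, bits_per_weight=4.5, overhead_gb=1.5):
    # weights + a flat allowance for KV cache, activations, and CUDA context
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

for name, params in [("Llama-7B (q4)", 7), ("Llama-2-13B (q4)", 13)]:
    print(f"{name}: ~{vram_estimate_gb(params):.1f} GB")
# Llama-7B (q4): ~5.4 GB; Llama-2-13B (q4): ~8.8 GB -- both inside 12 GB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;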

&lt;p&gt;The 3060’s power draw is also a plus: ~170W TDP means I won’t need a 1000W PSU, keeping the build more efficient and quieter.&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU: Ryzen 5 3600 (used/refurb)
&lt;/h3&gt;

&lt;p&gt;For pure LLM inference, the CPU isn’t the star—the GPU does the heavy lifting. But I still need enough muscle to handle API serving, web hosting, and any CPU-bound pre/post processing. A Ryzen 5 3600 is a 6-core/12-thread workhorse that’s dirt cheap on the used market. If I ever decide to experiment with CPU offloading for huge models or run a local vector database, it won’t be embarrassingly slow. And the AM4 platform gives me an upgrade path to a 5000-series chip later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Motherboard: B450 or B550 with spare PCIe
&lt;/h3&gt;

&lt;p&gt;I don’t need bleeding edge. I need one x16 slot for the GPU, maybe a spare x4 slot for a future NVMe adapter or a second NIC. Both B450 and B550 boards handle that fine. I’ll pick whichever has decent VRM cooling and is available at a reasonable price. The ability to drop in a faster Ryzen CPU later is a bonus.&lt;/p&gt;

&lt;h3&gt;
  
  
  RAM: 32 GB DDR4, no compromises
&lt;/h3&gt;

&lt;p&gt;I'm buying 32GB upfront. For LLM serving, system RAM acts as a safety net—even when the GPU carries the model, the CPU still needs space for the OS, background services, context buffers, and any offloaded layers. A single 32GB DDR4 stick (around KSh 16,500–19,000) gives me all the headroom I need right now and leaves one slot open for a future jump to 64GB if I ever start running heavier multi-model setups or large vector databases. Waiting on RAM is a false economy when I'm this close to a balanced build.&lt;/p&gt;

&lt;h3&gt;
  
  
  Power Supply: 650W–750W, with tomorrow in mind
&lt;/h3&gt;

&lt;p&gt;I see a lot of online advice shouting “800W minimum!” for high-end GPUs, but they’re talking about RTX 3090s and 4090s. For a 3060 + Ryzen 5 setup, a quality 650W unit is more than enough. I’m leaning toward 750W for a bit of headroom—if I ever upgrade to a hungrier GPU (used 3090 prices might drop), I won’t need to replace the PSU. I’m sticking to a reputable brand, though, because a cheap PSU is a false economy.&lt;/p&gt;
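
&lt;p&gt;Here's the quick power math behind that claim, using typical TDP figures from the spec sheets (the board/RAM/fans allowance is my own guess, and transient spikes aren't modeled, which is exactly why the headroom matters):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Rough sustained power budget (typical TDPs; transient spikes not modeled).
parts = {
    "RTX 3060": 170,                     # W, board power
    "Ryzen 5 3600": 65,                  # W, stock TDP
    "Board + RAM + storage + fans": 60,  # W, generous allowance
}
load = sum(parts.values())
print(f"Estimated sustained load: ~{load} W")       # ~295 W
print(f"Headroom on a 650 W PSU: ~{650 - load} W")  # plenty, even with spikes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;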

&lt;h3&gt;
  
  
  Case and Cooling: Airflow over aesthetics
&lt;/h3&gt;

&lt;p&gt;No RGB glass panels here—just a mesh-front case with three fans. Good airflow is important because the GPU will be running flat-out during inference sessions, and I want to keep thermals low enough that the card’s fans aren’t screaming. I’ll probably undervolt the GPU slightly for efficiency.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Build Should Actually &lt;em&gt;Do&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;I’m setting realistic expectations. With 12 GB VRAM, this machine will shine with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Llama-7B and 13B (4-bit quantized)&lt;/strong&gt; – via GGUF/MLC formats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mistral-7B&lt;/strong&gt; variants – tiny, fast, excellent for code and chat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phi-3-mini&lt;/strong&gt; – small but surprisingly capable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ollama&lt;/strong&gt; for dead-simple local model serving (see the sketch just after this list)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LM Studio&lt;/strong&gt; for a GUI-based playground when I just want to test prompts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vLLM&lt;/strong&gt; later on, once I’m comfortable with the setup and want higher throughput for an API endpoint.&lt;/li&gt;
&lt;/ul&gt;
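
&lt;p&gt;To make the eventual API-endpoint idea concrete, here's a minimal sketch of how I picture querying a local Ollama server from Python. Ollama exposes an HTTP API on port 11434 by default; the model name below is a placeholder for whatever I've pulled:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import urllib.request

# Minimal client for a local Ollama server (default port 11434).
# Assumes the model has already been pulled, e.g.: ollama pull mistral
payload = {
    "model": "mistral",  # placeholder model name
    "prompt": "Explain the KV cache in one paragraph.",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;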

&lt;p&gt;I can actually fine-tune on this box—QLoRA makes it practical. With 12 GB of VRAM, I can run a 4-bit quantized 7B model and apply low-rank adaptation with standard settings. It won't be fast, but it'll work. Full fine-tuning is still out of reach, but for my experiments, QLoRA on a 7B model is more than enough.&lt;/p&gt;
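
&lt;p&gt;For reference, this is the rough shape of the QLoRA setup I have in mind, using Hugging Face transformers, bitsandbytes, and peft. The base model and hyperparameters are placeholders, not settings I've validated on this hardware:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit NF4 so the frozen weights fit in 12 GB of VRAM.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder: any 7B base model
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small trainable low-rank adapters; only these receive gradients.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;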




&lt;h2&gt;
  
  
  The Uncomfortable Part: What I’m Still Unsure About
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Used GPU reliability.&lt;/strong&gt; I’ve bought used electronics before, but a GPU that’s been run 24/7 in a mining rig or a dusty gaming tower is a gamble. I’ll run FurMark and a VRAM stress test, but there’s always a chance of early failure. I’d love to hear how others vet used cards for AI workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VRAM vs model ambitions.&lt;/strong&gt; 12 GB puts a hard ceiling on model size. Right now I’m happy with 7B–13B quantized, but I can feel the itch to run larger open models (like Command R or a future Llama-3-70B-ish thing). I keep asking myself: &lt;em&gt;will I regret not saving up for a 3090?&lt;/em&gt; The honest answer is: maybe. But learning on a 3060 is better than dreaming about a 3090 that never arrives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Power stability at home.&lt;/strong&gt; Brownouts and voltage swings aren’t unheard of in my area. I’ll need at least a basic UPS with AVR (automatic voltage regulation) to protect the hardware, and I haven’t priced that in yet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Noise in a living space.&lt;/strong&gt; This will sit in my apartment, not a dedicated server room. Inference won’t max out the GPU like gaming, but continuous serving might. I need to test fan curves and maybe swap in quieter case fans.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monetization reality check.&lt;/strong&gt; I mentioned “selling as a service.” I mean something humble: maybe offer a private API endpoint to local developers who want to experiment without paying for cloud tokens. It’s not a startup; it’s a side experiment that might offset some electricity costs. I genuinely don’t know if anyone will pay, or how to price it fairly.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Where This Is All Heading
&lt;/h2&gt;

&lt;p&gt;The first goal is simple: &lt;strong&gt;get the machine booting, install Ollama, and make it work.&lt;/strong&gt; I’ll spend a week or two just playing with models, measuring token/s speeds, and learning the quirks of managing a local inference stack.&lt;/p&gt;

&lt;p&gt;From there, I plan to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run a persistent LM Studio server or Ollama for personal use—coding assistants, document summarization, etc.&lt;/li&gt;
&lt;li&gt;Set up a Dockerized environment where I can spin up different model backends and test frameworks (LangChain, LlamaIndex, maybe a local RAG pipeline).&lt;/li&gt;
&lt;li&gt;Explore &lt;strong&gt;vLLM&lt;/strong&gt; to understand high-throughput serving, including batching and PagedAttention.&lt;/li&gt;
&lt;li&gt;Host a few lightweight web apps alongside the inference service—personal projects that benefit from local AI.&lt;/li&gt;
&lt;li&gt;Eventually, try to monetize: sell API credits in bulk, or offer a “bring your own model” endpoint for students/hobbyists in my network. If it gains any traction, I’d reinvest every cent into upgrades (more RAM, better GPU, maybe a second identical machine for redundancy).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No grand roadmap. No “disrupt the industry” rhetoric. Just a developer growing a system piece by piece, learning along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Quick Ask to the Dev Community
&lt;/h2&gt;

&lt;p&gt;I’d genuinely appreciate your wisdom:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Used GPUs for AI:&lt;/strong&gt; What tests do you run before buying? Any telltale signs of a card that’s been thermally tortured?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAM vs VRAM:&lt;/strong&gt; For a local LLM server, would you prioritize 32 GB system RAM first, or save every penny toward a bigger GPU?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PSU headroom:&lt;/strong&gt; Is 750W a reasonable ceiling for a single high-end future card, or should I just bite the bullet and buy an 850W unit now?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monetization:&lt;/strong&gt; Have any of you offered a local LLM API to others? How did you handle billing, rate limiting, or uptime?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’ve been down this road, or if you’re staring at a similar blueprint right now, maybe drop a comment. I’d love to hear what you’d do differently, what you’d keep the same, and what you wish someone had told you before you powered on that first home-built AI box.&lt;/p&gt;

&lt;p&gt;This is just the beginning. The next post will hopefully have photos of a real, built machine, not just a parts list and a prayer. Until then, I’ll be refreshing listings for used RTX 3060s and trying to figure out if that “too-good-to-be-true” deal is a trap.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What’s your home lab running?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>llm</category>
      <category>programming</category>
    </item>
    <item>
      <title>Monte Carlo simulation engine for Carbon and Climate-economic risk. It is a model for how cities can visualize their climate futures.</title>
      <dc:creator>Erick Mwangi Muguchia </dc:creator>
      <pubDate>Sat, 18 Apr 2026 13:21:28 +0000</pubDate>
      <link>https://dev.to/muguchiaerickmwangi/monte-carlo-simulation-engine-for-carbon-and-climate-economic-risk-its-is-a-model-for-how-cities-38cp</link>
      <guid>https://dev.to/muguchiaerickmwangi/monte-carlo-simulation-engine-for-carbon-and-climate-economic-risk-its-is-a-model-for-how-cities-38cp</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/weekend-2026-04-16"&gt;Weekend Challenge: Earth Day Edition&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;Instead of building a basic personal carbon footprint tracker, I wanted to tackle the macro problem: convincing governments to act. This project directly honors Earth Day by providing a localized, mathematical engine to prove that climate action is an economic necessity.&lt;br&gt;
This project models climate change as a risk problem, not just an environmental issue. By simulating emissions and their economic consequences, it reframes sustainability as something governments and institutions cannot afford to ignore.&lt;/p&gt;

&lt;p&gt;For a city like Nairobi, the question becomes clear: reducing emissions isn’t just good for the planet, it stabilizes the future economy. That’s the argument this tool is built to prove.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;The concept applies actuarial risk math (Monte Carlo simulations) to climate science. Instead of just proving that emissions drop, it proves that economic volatility and extreme risk drop. Piping that hard math directly into an AI (Google Gemini) to instantly generate human-readable policy briefs bridges the gap between data scientists and politicians.&lt;/p&gt;

&lt;p&gt;The Math &lt;code&gt;models.py&lt;/code&gt;: Inside this file, the code literally flips a weighted coin every single year for 20 years to add random surprises (stochastic shocks). Sometimes emissions randomly spike, sometimes they crash.&lt;/p&gt;

&lt;p&gt;The Engine &lt;code&gt;simulator.py&lt;/code&gt;: It plays out these 20 years, 10,000 separate times. It runs it once doing nothing ("Baseline") and once where the city cuts emissions ("Intervention").&lt;/p&gt;

&lt;p&gt;The Proof &lt;code&gt;visualization.py&lt;/code&gt;: It takes all 10,000 parallel universes and draws a funnel (the graphs). It proves that climate action doesn't just reduce carbon, it drastically reduces risk and volatility. The resulting graph visually proves that by acting now, we stop the worst-case, most expensive catastrophic futures from happening.&lt;/p&gt;

&lt;p&gt;The AI &lt;code&gt;policy_brief.py&lt;/code&gt;: Raw statistical variance is boring to politicians. So you take the billions of dollars saved in those 10,000 universes and feed them into Google Gemini, asking it to write an inspiring, plain-English speech for the local city council based solely on your math.&lt;/p&gt;
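
&lt;p&gt;To make that concrete, here's a stripped-down sketch of the core loop (not the actual repo code; the drift and shock numbers are purely illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

rng = np.random.default_rng(42)  # fixed seed so every run is reproducible
YEARS, RUNS = 20, 10_000

def simulate(annual_drift, shock_sigma=0.05, start=100.0):
    """Return a RUNS x YEARS matrix of emission paths with random yearly shocks."""
    shocks = rng.normal(loc=annual_drift, scale=shock_sigma, size=(RUNS, YEARS))
    return start * np.cumprod(1.0 + shocks, axis=1)

baseline = simulate(annual_drift=0.02)       # "do nothing": emissions drift up
intervention = simulate(annual_drift=-0.03)  # policy scenario: emissions decline

# The "funnel": compare the spread of final-year outcomes, not just the mean.
for name, paths in (("Baseline", baseline), ("Intervention", intervention)):
    p5, p95 = np.percentile(paths[:, -1], [5, 95])
    print(f"{name}: year-20 range (5th-95th pct) = {p5:.0f} to {p95:.0f}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;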

&lt;p&gt;This project is intentionally built as a lightweight CLI tool. It runs entirely on a CPU, with nothing heavier than NumPy and Matplotlib, making it usable in low-resource environments.&lt;br&gt;
Plus, I automated the demo with the &lt;strong&gt;&lt;code&gt;run_demo.sh&lt;/code&gt;&lt;/strong&gt; file.&lt;br&gt;
  &lt;iframe src="https://www.youtube.com/embed/9VaCjz3XECg"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This clip shows the simulator running — baseline emissions rise while intervention drives them toward zero.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations &amp;amp; Context
&lt;/h2&gt;

&lt;p&gt;This simulator is designed as a conceptual actuarial engine, not a direct measurement of Nairobi’s emissions. The numbers are stylized to demonstrate how intervention changes risk trajectories, using Nairobi as a case study.&lt;/p&gt;

&lt;p&gt;Several external factors are not yet modeled, including population growth, energy mix, transport electrification, and economic shocks. These would all influence real‑world emissions. For now, the model focuses on the risk dynamics: baseline emissions rise with widening uncertainty, while intervention collapses both emissions and variance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;You can find the code on GitHub:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ricsmwangi" rel="noopener noreferrer"&gt;
        ricsmwangi
      &lt;/a&gt; / &lt;a href="https://github.com/ricsmwangi/carbon-risk-simulator" rel="noopener noreferrer"&gt;
        carbon-risk-simulator
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Carbon Risk Simulator is a lightweight, actuarial-inspired simulation tool that models climate change as a risk problem rather than just an environmental trend.  Instead of producing a single forecast, the system uses Monte Carlo simulation to generate thousands of possible future emission paths.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Nairobi Carbon Risk Simulator&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;A lightweight, pure-Python Monte Carlo simulation engine designed to project carbon emissions pathways and their associated economic damages. Built specifically for Earth Day risk assessments and localized climate action policy.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What it is&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Unlike linear carbon calculators, the &lt;strong&gt;Nairobi Carbon Risk Simulator&lt;/strong&gt; leverages actuarial mathematics to model deep uncertainty. It applies stochastic random shocks (normal and lognormal distributions) across 10,000+ simulated futures to measure not just how much emissions might be reduced, but how much &lt;em&gt;volatility and economic risk&lt;/em&gt; is eliminated by taking climate action.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ultra-Lightweight CLI:&lt;/strong&gt; Stripped of heavy dataframe dependencies. Relies entirely on &lt;code&gt;numpy&lt;/code&gt; for blazing-fast, vectorized Monte Carlo matrices and native Python &lt;code&gt;csv&lt;/code&gt; for exports.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stochastic Risk Modeling:&lt;/strong&gt; Compare "Business as Usual" against intervention scenarios across 20-year horizons.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Visualization:&lt;/strong&gt; Autogenerates high-resolution probabilistic funnels (5th-95th percentiles) and final-outcome histograms using &lt;code&gt;matplotlib&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Policy Translation:&lt;/strong&gt; Integrates &lt;strong&gt;Google Gemini&lt;/strong&gt; to…&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ricsmwangi/carbon-risk-simulator" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;I approached this project from an actuarial mindset — focusing on risk and uncertainty instead of just prediction.&lt;/p&gt;

&lt;p&gt;At the core, I used Monte Carlo simulation to model thousands of possible future emission paths over time. Instead of a single outcome, the system generates a range of scenarios, showing not only expected emissions but also variability and extreme cases.&lt;/p&gt;

&lt;p&gt;I built the simulation engine in Python using NumPy for fast computations. The model is parameter-based, so values like emission rates, reduction percentages, and economic impact can easily be adjusted.&lt;/p&gt;

&lt;p&gt;To make the results more meaningful, I added scenario comparison:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a baseline (no changes)&lt;/li&gt;
&lt;li&gt;and a reduced-emission scenario (e.g. 20%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Our base model targets a 20% reduction, but the Monte Carlo engine can instantly simulate any alternative policy percentage, like 50% or 80%, for sensitivity analysis.&lt;/p&gt;

&lt;p&gt;This allows the system to show how policy decisions affect both emissions and economic risk over time.&lt;/p&gt;

&lt;p&gt;Random seeds are fixed so results can be replicated exactly.&lt;/p&gt;

&lt;p&gt;I also used GitHub Copilot to help expand and structure the idea. It assisted with refining the simulation logic and organizing the code, while I focused on the modeling approach and assumptions.&lt;/p&gt;

&lt;p&gt;Finally, I kept everything as a simple CLI tool. This keeps the project lightweight, fast, and able to run on any machine without special requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prize Categories
&lt;/h2&gt;

&lt;p&gt;I am submitting this project under two categories:&lt;/p&gt;

&lt;p&gt;Best Use of GitHub Copilot&lt;br&gt;
Copilot was essential in scaffolding the actuarial engine and Monte Carlo simulation logic. It accelerated development by suggesting reproducible code structures, statistical functions, and console formatting for the Earth Day–themed demo output. Copilot helped keep the workflow modular and clear, ensuring the simulator could scale to multiple scenarios.&lt;/p&gt;

&lt;p&gt;Key emphasis:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Guided the modular design of the simulator, from models.py to policy_brief.py, ensuring reproducibility and clarity.&lt;/li&gt;
&lt;li&gt;Polished console outputs, turning raw math into a clean demo flow.&lt;/li&gt;
&lt;li&gt;Accelerated iteration speed, letting me focus on actuarial modeling instead of boilerplate code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Best Use of Google Gemini&lt;br&gt;
Gemini transformed raw simulation results into formal policy briefs. By piping CSV summaries and scenario outcomes into Gemini, the simulator automatically generated council‑ready narratives that reframed actuarial math into persuasive, human‑readable language for decision makers. This bridged the gap between technical rigor and policy communication.&lt;/p&gt;

&lt;p&gt;Key emphasis:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reframed statistical variance collapse into persuasive council speeches.&lt;/li&gt;
&lt;li&gt;Bridged actuarial math with civic communication, making the simulator resonate beyond the data.&lt;/li&gt;
&lt;li&gt;Aligned perfectly with the Earth Day theme by translating technical outputs into accessible calls for urgent climate action.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
    </item>
    <item>
      <title>No GPU? No problem! Running local AI efficiently on my CPU.</title>
      <dc:creator>Erick Mwangi Muguchia </dc:creator>
      <pubDate>Tue, 14 Apr 2026 20:21:12 +0000</pubDate>
      <link>https://dev.to/muguchiaerickmwangi/no-gpu-no-problem-running-local-ai-efficiently-on-my-cpu-1fhf</link>
      <guid>https://dev.to/muguchiaerickmwangi/no-gpu-no-problem-running-local-ai-efficiently-on-my-cpu-1fhf</guid>
      <description>&lt;h2&gt;
  
  
  1. Why I tried this.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  2. My setup.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  3. The problems I faced.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  4. The tweaks I discovered.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  5. Results.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  6. Lessons.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  THE WHY:
&lt;/h2&gt;

&lt;p&gt;I’ve always wanted to explore deep conversations with AI and understand how these systems work. For a long time, that dream was limited by the lack of a GPU and the high cost of apps that allow meaningful interaction with AI models. Now, I’m determined to overcome those barriers and build my own path into this world.&lt;/p&gt;

&lt;p&gt;So the idea of running one locally always kept bugging me, and I finally got to do it.&lt;br&gt;
It was a long and educational journey, so let me walk you through it.&lt;/p&gt;
&lt;h2&gt;
  
  
  My setup
&lt;/h2&gt;

&lt;p&gt;OS: Arch Linux x86_64&lt;br&gt;
Workflow: tmux + i3 (just because I like using key-bindings), starship + wezterm&lt;br&gt;
CPU: Intel(R) Core(TM) i5-7200U (4) @ 3.10 GHz&lt;br&gt;
GPU: Intel HD Graphics 620 @ 1.00 GHz [Integrated]&lt;br&gt;
Memory: 2.75 GiB / 7.61 GiB (36%)&lt;br&gt;
Swap: 666.95 MiB / 3.81 GiB (17%)&lt;/p&gt;

&lt;p&gt;Storage: Disk (/): 28.64 GiB / 31.20 GiB (92%) - ext4&lt;br&gt;
Disk (/home): 39.15 GiB / 84.33 GiB (46%) - ext4&lt;br&gt;
Disk (/run/media/shinigami/Vault): 349.96 GiB / 457.38 GiB (77%) - ext4&lt;/p&gt;

&lt;p&gt;Ollama is the engine I used to run models locally. By default, it stores models in ~/.ollama, which quickly filled my root partition.&lt;br&gt;
Install it with:&lt;br&gt;
&lt;code&gt;curl -fsSL https://ollama.com/install.sh | sh&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To fix this, I redirected the models to secondary storage.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;mv ~/.ollama /run/media/shinigami/Vault/ollama&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ln -s /run/media/shinigami/Vault/ollama ~/.ollama&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This symlink makes Ollama store everything on the hard drive, solving the disk-full error.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Now pulling and managing models.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I started by pulling the smallest model, TinyLlama, which is about 637 MB:&lt;br&gt;
&lt;code&gt;ollama pull tinyllama&lt;/code&gt;&lt;br&gt;
After testing it (fast, yes, but not that smart), I went looking for slightly smarter models.&lt;/p&gt;

&lt;p&gt;So I went ahead and pulled heavier models: llama3.2:3b (2.0 GB) and llama3.2:1b (1.3 GB).&lt;br&gt;
&lt;code&gt;ollama pull llama3.2:3b&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ollama pull llama3.2:1b&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Building custom variants of them&lt;/strong&gt;&lt;br&gt;
I wanted to get the most out of the models, so I created custom models using Modelfiles.&lt;br&gt;
For example, I wanted one to help me learn basic networking concepts; its Modelfile looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; llama3.2:3b # Adjust accordingly&lt;/span&gt;

&lt;span class="c"&gt;# ---------- PARAMETERS ----------&lt;/span&gt;
&lt;span class="c"&gt;# Lower temperature for accuracy; moderate top_p for variety without drifting.&lt;/span&gt;
PARAMETER temperature 0.25
PARAMETER top_p 0.9

&lt;span class="c"&gt;# More room for code + explanations.&lt;/span&gt;
PARAMETER num_ctx 4096

&lt;span class="c"&gt;# Safer repetition control (prevents looping).&lt;/span&gt;
PARAMETER repeat_penalty 1.12

&lt;span class="c"&gt;# If your setup supports it and you want more deterministic answers, you can also try:&lt;/span&gt;
&lt;span class="c"&gt;# PARAMETER seed 42&lt;/span&gt;

&lt;span class="c"&gt;# ---------- SYSTEM BEHAVIOR ----------&lt;/span&gt;
SYSTEM """
You are My Mentor: a concise, practical networking fundamentals tutor and coding assistant.

Primary goal:
- Teach networking fundamentals clearly and correctly (OSI/TCP-IP, IP addressing &amp;amp; subnetting, ARP, DNS, DHCP, TCP vs UDP, ports/sockets, routing, NAT, HTTP/TLS basics, troubleshooting with ping/traceroute/nslookup/curl/tcpdump, basic firewalls).

Secondary goal:
- Produce meaningful, runnable code examples when useful; prefer (enter preferred language).

Style rules:
- Explain concepts with short definitions + one concrete example.
- When writing code, include: what it does, how to run it, expected output, and common pitfalls.
- If the user’s question is ambiguous, ask up to 2 clarifying questions before answering.

Output format (default):
1) Concept (2–5 sentences)
2) Why it matters (1–2 bullets)
3) Example (diagram, packet flow, or command)
4) Code (only if it adds value; keep it minimal)
5) Quick check (2–3 questions to self-test)
""" 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That snippet explains much of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the models
&lt;/h2&gt;

&lt;p&gt;After creating the Modelfile (&lt;code&gt;vim Modelfile1&lt;/code&gt;), we bind it to any of the models we downloaded.&lt;br&gt;
A Modelfile is the blueprint that shapes an AI model’s personality, rules, and behavior on top of its base intelligence.&lt;/p&gt;

&lt;p&gt;Command for creating a model from the Modelfile:&lt;br&gt;
&lt;code&gt;ollama create My-model -f Modelfile1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gathering model components 
using existing layer sha256:dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff 
using existing layer sha256:966de95ca8a62200913e3f8bfbf84c8494536f1b94b49166851e76644e966396 
using existing layer sha256:fcc5a6bec9daf9b561a68827b67ab6088e1dba9d1fa2a50d7bbcc8384e0a265d 
using existing layer sha256:a70ff7e570d97baaf4e62ac6e6ad9975e04caa6d900d3742d37698494479e0cd 
creating new layer sha256:7f89bd8bf6ef609a9aefeab288cde09db6c1ef97f649691f25b29e0f85a8c91c 
creating new layer sha256:446b3a23f7599dc79a11cfb03c670091c9fe265aba28fa3316e9e46dc86365db 
writing manifest 
success 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;"My-model" &amp;gt;  you can name it anything.&lt;br&gt;
Plus you can create as many &lt;strong&gt;Modelfile&lt;/strong&gt; as you like giving them different task, and of course you can add more rules and examples in the &lt;strong&gt;Modelfile&lt;/strong&gt; as you like.&lt;/p&gt;

&lt;p&gt;After successfully creating 'My-model', run it:&lt;br&gt;
&lt;code&gt;ollama run My-model&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;➜ ollama run My-model
&lt;/span&gt;&lt;span class="gp"&gt;&amp;gt;&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; Send a message &lt;span class="o"&gt;(&lt;/span&gt;/? &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;help&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="go"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That was the fun part. The real problem came once the models were running: the CPU screams at 100% utilization and overheats, which ends up slowing the model down.&lt;/p&gt;

&lt;p&gt;So I had to optimize my setup to better handle the models.&lt;br&gt;
Using htop to monitor the CPU and lm_sensors to watch the heat, I could see exactly where things were going wrong.&lt;br&gt;
To run the models efficiently I had to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Maximize CPU performance when running the models.&lt;br&gt;
Reduce latency bottlenecks.&lt;br&gt;
Stabilize thermal behavior.&lt;br&gt;
Prioritize compute-heavy processes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Running local AI models on CPU introduces:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lower parallelism.&lt;br&gt;
Thermal throttling.&lt;br&gt;
OS scheduling inefficiencies.&lt;br&gt;
Power-saving defaults limiting performance.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So instead of forcing the model, you optimize around it.&lt;br&gt;
&lt;em&gt;Unlock CPU performance&lt;/em&gt;&lt;br&gt;
&lt;em&gt;On Arch&lt;/em&gt;&lt;br&gt;
&lt;code&gt;sudo pacman -S cpupower&lt;/code&gt;&lt;br&gt;
Then install tuned for system-wide performance profiles:&lt;br&gt;
&lt;code&gt;sudo pacman -S tuned&lt;/code&gt;&lt;br&gt;
Then enable it:&lt;br&gt;
&lt;code&gt;sudo systemctl enable --now tuned&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then set the CPU governor to performance:&lt;br&gt;
&lt;code&gt;sudo cpupower frequency-set -g performance&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This switches the CPU governor. By default, Linux CPUs often run in “ondemand” or “powersave” mode, scaling frequency up and down depending on load.&lt;br&gt;
Performance mode locks the CPU at its maximum frequency, ensuring consistent speed.&lt;/p&gt;

&lt;p&gt;Impact:&lt;br&gt;
Faster response times for heavy workloads (like tokenization, AI inference, or compiling).&lt;br&gt;
Reduced latency spikes since the CPU doesn’t waste time ramping up.&lt;br&gt;
More predictable benchmarking results.&lt;/p&gt;

&lt;p&gt;Cons:&lt;br&gt;
Higher power draw, more heat, fans spin up, and the battery drains faster on laptops.&lt;/p&gt;

&lt;p&gt;Then confirm the configuration:&lt;br&gt;
&lt;code&gt;cpupower frequency-info&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;➜ cpupower frequency-info
driver: acpi-cpufreq
hardware limits: 400 MHz - 2.60 GHz
available cpufreq governors: conservative ondemand userspace powersave performance schedutil
current policy: governor "performance" within 400 MHz - 2.50 GHz
current CPU frequency: 3.10 GHz (kernel reported)
boost state: Supported, Active

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then apply the throughput-performance profile, using the tuned we installed earlier:&lt;br&gt;
&lt;code&gt;sudo tuned-adm profile throughput-performance&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Why this matters:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Optimizes CPU behaviors.&lt;br&gt;
Improves disk I/O.&lt;br&gt;
Adjusts system scheduling.&lt;br&gt;
Reduces unnecessary power-saving interruptions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Results: Smoother, sustained compute performance. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Models I'm using (and why)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;llama3.2:3b&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Balanced size and capability.&lt;br&gt;
Noticeably smart.&lt;br&gt;
Good for deeper prompts and reasoning.&lt;br&gt;
This felt like middle ground between speed and intelligence.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;em&gt;phi3:mini&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Very efficient for its size.&lt;br&gt;
Strong reasoning compared to other small models.&lt;br&gt;
Optimized for lower-resource environments.&lt;br&gt;
This one stood out as surprisingly powerful for CPU use.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This concludes my first phase: setting up, tuning performance, and confirming that local AI models run smoothly. In the next phase, I'll dive into measuring tokenization speed, using verbose logs and custom C scripts to compare how these models perform under different workloads.&lt;br&gt;
&lt;em&gt;"Turns out you don't need powerful hardware to explore AI, just curiosity and a stubborn CPU."&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Running local AI.</title>
      <dc:creator>Erick Mwangi Muguchia </dc:creator>
      <pubDate>Sun, 12 Apr 2026 15:14:03 +0000</pubDate>
      <link>https://dev.to/muguchiaerickmwangi/running-local-ai-14pb</link>
      <guid>https://dev.to/muguchiaerickmwangi/running-local-ai-14pb</guid>
      <description>&lt;p&gt;I had this idea to run AI locally on my own laptop. Just to see if I could. Ended up going with Ollama.&lt;/p&gt;

&lt;p&gt;At first it was brutal — all CPU, no GPU, super slow. But I messed around, tweaked some stuff, and finally got it to actually run okay. Not fast, but okay.&lt;/p&gt;

&lt;p&gt;Then I went down a rabbit hole. I wanted to know what the models were doing. Like, how hot is my CPU getting? How fast is it spitting out tokens? So I started building my own little monitoring setup. Used C for some low-level stuff, Dash for a live dashboard, Python to glue it all together. Oh and lm-sensors to watch the temps because this thing makes my laptop sweat.&lt;/p&gt;
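
&lt;p&gt;The Python glue is basically this (a simplified sketch, assuming a recent lm-sensors that supports JSON output via &lt;code&gt;sensors -j&lt;/code&gt;; chip and label names vary by machine):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import subprocess
import time

# Poll `sensors -j` (lm-sensors JSON output) and print core temps every 2 s.
# "coretemp" is typical for Intel chips; your chip name may differ.
while True:
    raw = subprocess.run(["sensors", "-j"], capture_output=True, text=True).stdout
    data = json.loads(raw)
    for chip, readings in data.items():
        if "coretemp" not in chip:
            continue
        for label, fields in readings.items():
            if not isinstance(fields, dict):
                continue  # skip metadata entries like "Adapter"
            for key, value in fields.items():
                if key.endswith("_input"):
                    print(f"{label}: {value:.0f}°C")
    time.sleep(2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;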

&lt;p&gt;Now I can sit there and watch my models run in real time. Token rate, memory, core temps — all on a dashboard.&lt;/p&gt;

&lt;p&gt;Feels good having AI running offline. No cloud, no weird latency, just my machine. And a bunch of scripts I broke and fixed along the way.&lt;/p&gt;

&lt;p&gt;If you're thinking about trying local AI, just go for it. Just know you'll end up tinkering way more than you expect. Worth it though.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>monitoring</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Web apps.
I like making web apps, for celebrations or for fun.
So it's Christmas and I made a small web app.
And I didn't use HTML or JS or CSS...
I used C to make it.
It was stressful, but it's good and I'm happy about it.</title>
      <dc:creator>Erick Mwangi Muguchia </dc:creator>
      <pubDate>Tue, 23 Dec 2025 04:30:38 +0000</pubDate>
      <link>https://dev.to/muguchiaerickmwangi/web-apps-i-like-making-web-apps-for-celebrations-or-for-fun-so-its-christmas-and-i-made-a-small-25i2</link>
      <guid>https://dev.to/muguchiaerickmwangi/web-apps-i-like-making-web-apps-for-celebrations-or-for-fun-so-its-christmas-and-i-made-a-small-25i2</guid>
      <description></description>
    </item>
    <item>
      <title>I made a promise to myself that I am not leaving Meru University without Python skills.</title>
      <dc:creator>Erick Mwangi Muguchia </dc:creator>
      <pubDate>Fri, 12 Dec 2025 09:36:29 +0000</pubDate>
      <link>https://dev.to/muguchiaerickmwangi/i-made-a-promise-to-myself-that-am-leaving-meru-university-without-python-skills-2ag6</link>
      <guid>https://dev.to/muguchiaerickmwangi/i-made-a-promise-to-myself-that-am-leaving-meru-university-without-python-skills-2ag6</guid>
      <description>&lt;p&gt;When I arrived at Meru University, I made myself a deal:&lt;br&gt;
"I will not leave this place without learning Python."&lt;/p&gt;

&lt;p&gt;The first thing I did was relocate. I needed to minimize distractions and move to an environment conducive to focused learning.&lt;/p&gt;

&lt;p&gt;Why Python?&lt;/p&gt;

&lt;p&gt;I'd heard so much about it—web development, data science, AI, endless possibilities. I was determined to master it and open doors in tech. But I had zero programming knowledge. I knew it would be challenging, but I was willing to put in the effort.&lt;/p&gt;

&lt;p&gt;Month 1: Building Foundations&lt;/p&gt;

&lt;p&gt;I downloaded tutorials, read documentation, binged YouTube. I learned Python syntax, data types, control structures. Then I practiced—a lot. Small programs. Number games. These games weren't just practice; they made learning fun. That mattered more than I expected.&lt;/p&gt;

&lt;p&gt;Month 2–3: Going Deeper&lt;/p&gt;

&lt;p&gt;After a month, I decided to add complexity. I wanted to understand how programming actually works, not just write code. So I added C to my learning path.&lt;/p&gt;

&lt;p&gt;This wasn't random. Python was my safety net. C forced me to understand memory, pointers, how computers actually think. It made Python click in a new way.&lt;/p&gt;

&lt;p&gt;Learning both simultaneously was hard—but it worked.&lt;/p&gt;

&lt;p&gt;Month 4: The Full Picture&lt;/p&gt;

&lt;p&gt;As weeks turned into months, I got proficient in both. I signed up for GitHub's student pack (more resources, better tools). I learned version control—essential for any real programmer.&lt;/p&gt;

&lt;p&gt;Then came R for statistical programming and data visualization. Each language opened new doors.&lt;/p&gt;

&lt;p&gt;The Progress&lt;/p&gt;

&lt;p&gt;Now, as the semester ends, I can say this honestly: I've made significant progress.&lt;/p&gt;

&lt;p&gt;I have:&lt;br&gt;
✅ Multiple projects built and on GitHub&lt;br&gt;
✅ Mastery of Python, C, and R&lt;br&gt;
✅ Understanding of version control and collaborative development&lt;br&gt;
✅ A journaling habit that tracked every step&lt;/p&gt;

&lt;p&gt;The Real Win&lt;/p&gt;

&lt;p&gt;Learning these languages wasn't just about syntax. It boosted my confidence. It showed me I can learn anything if I commit to it.&lt;/p&gt;

&lt;p&gt;And I developed a habit of documenting everything—journaling my process, reflecting on struggles. That's been invaluable. Future me can look back and see exactly how I got here.&lt;/p&gt;

&lt;p&gt;What's Next&lt;/p&gt;

&lt;p&gt;I'm leaving Meru with a promise kept. But this isn't the end—it's the beginning. I'm excited to explore data science, build real applications, and help others learn like I did.&lt;/p&gt;

&lt;p&gt;The promise was simple. The journey changed everything.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>beginners</category>
      <category>learning</category>
      <category>100daysofcode</category>
    </item>
  </channel>
</rss>
