<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ronnie Kakunguwo</title>
    <description>The latest articles on DEV Community by Ronnie Kakunguwo (@zyenova).</description>
    <link>https://dev.to/zyenova</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/zyenova"/>
    <language>en</language>
    <item>
      <title>Introduction to LLM Engineering: Building Agentic AI in Low-Resource Settings</title>
      <dc:creator>Ronnie Kakunguwo</dc:creator>
      <pubDate>Thu, 30 Oct 2025 11:11:59 +0000</pubDate>
      <link>https://dev.to/zyenova/introduction-to-llm-engineering-building-agentic-ai-in-low-resource-settings-4c34</link>
      <guid>https://dev.to/zyenova/introduction-to-llm-engineering-building-agentic-ai-in-low-resource-settings-4c34</guid>
      <description>&lt;p&gt;Hello everyone,&lt;/p&gt;

&lt;p&gt;My name is &lt;strong&gt;Ronnie Kakunguwo&lt;/strong&gt;, a &lt;strong&gt;Biomedical Engineer&lt;/strong&gt; and &lt;strong&gt;AI researcher in training&lt;/strong&gt; from Zimbabwe.&lt;br&gt;&lt;br&gt;
I am currently exploring how &lt;strong&gt;Generative AI (GenAI)&lt;/strong&gt; and &lt;strong&gt;Large Language Models (LLMs)&lt;/strong&gt; can be applied in &lt;strong&gt;healthcare for low-resource settings&lt;/strong&gt;, particularly across Africa.  &lt;/p&gt;

&lt;p&gt;Over the past few weeks, I have been studying &lt;strong&gt;LLM Engineering&lt;/strong&gt;, a growing field that sits at the intersection of &lt;strong&gt;machine learning, systems design, and prompt engineering&lt;/strong&gt;. Through this short series, I will be sharing what I’m learning and experimenting with.  &lt;/p&gt;

&lt;p&gt;This first post introduces how these models work under the hood and how developers in low-resource environments can still build &lt;strong&gt;Agentic AI applications&lt;/strong&gt; without large computational budgets.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Are Large Language Models (LLMs)?
&lt;/h2&gt;

&lt;p&gt;At their core, &lt;strong&gt;LLMs&lt;/strong&gt; are large neural networks trained on vast amounts of text from the internet to &lt;strong&gt;predict the next word&lt;/strong&gt; in a sentence.  &lt;/p&gt;

&lt;p&gt;You can think of them as &lt;strong&gt;compressed knowledge systems&lt;/strong&gt; as they have read billions of pages and learned patterns in language, facts, reasoning, and even creativity.  &lt;/p&gt;

&lt;p&gt;A typical LLM, such as &lt;strong&gt;LLaMA 2&lt;/strong&gt; or &lt;strong&gt;GPT-4&lt;/strong&gt;, is stored as two main components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Parameters file&lt;/strong&gt;: where all the learned knowledge (weights) is stored.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run file&lt;/strong&gt;: a small piece of code (in C or Python) that defines how those parameters are used during inference.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqgjjbzlmfrnpx86c2jp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqgjjbzlmfrnpx86c2jp.png" alt="A diagram showing the makeup of an LLM" width="506" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While training these models can cost millions, &lt;strong&gt;inference (using them)&lt;/strong&gt; is relatively cheap and that’s where developers like us can contribute meaningfully.&lt;/p&gt;




&lt;h2&gt;
  
  
  The LLM Lifecycle: From Base Model to Assistant
&lt;/h2&gt;

&lt;p&gt;LLMs typically go through three stages before becoming the assistants we interact with today:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Stage&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Data Used&lt;/th&gt;
&lt;th&gt;Output&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;1. Pre-training&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Model learns from billions of web pages to understand language patterns&lt;/td&gt;
&lt;td&gt;Large web crawl (~10TB)&lt;/td&gt;
&lt;td&gt;Base model (raw knowledge)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;2. Fine-tuning&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Model learns to respond helpfully and safely through curated Q&amp;amp;A datasets&lt;/td&gt;
&lt;td&gt;~100k examples&lt;/td&gt;
&lt;td&gt;Assistant model&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;3. RLHF (optional)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Reinforcement Learning from Human Feedback - humans compare model responses to improve quality&lt;/td&gt;
&lt;td&gt;Human rankings&lt;/td&gt;
&lt;td&gt;Refined assistant model&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is where the “assistant” aspect of AI is formed which is transforming a text generator into a system that aligns better with human intentions and provides more reliable, safe responses.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Developers Can Work With LLMs in Low-Resource Settings
&lt;/h2&gt;

&lt;p&gt;In countries like Zimbabwe and across much of Africa, we may not have the resources to train large models from scratch. However, we can still &lt;strong&gt;leverage open-source models&lt;/strong&gt; and &lt;strong&gt;build locally relevant applications&lt;/strong&gt; on top of them.&lt;/p&gt;

&lt;p&gt;Here are a few approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use open-source base models&lt;/strong&gt; like &lt;strong&gt;LLaMA 2&lt;/strong&gt;, &lt;strong&gt;Mistral&lt;/strong&gt;, or &lt;strong&gt;Gemma (Google)&lt;/strong&gt; - powerful and freely available.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apply lightweight fine-tuning techniques&lt;/strong&gt; such as &lt;strong&gt;LoRA&lt;/strong&gt; or &lt;strong&gt;QLoRA&lt;/strong&gt; - these allow model customization even on limited hardware (for instance, Google Colab).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrate RAG (Retrieval-Augmented Generation)&lt;/strong&gt; to connect your model to local data sources such as medical PDFs, hospital records, or reports.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Develop Agentic AI systems&lt;/strong&gt; using frameworks like &lt;strong&gt;LangChain&lt;/strong&gt; or &lt;strong&gt;LlamaIndex&lt;/strong&gt;, enabling models to reason, plan, and use external tools.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy efficiently&lt;/strong&gt; on platforms like &lt;strong&gt;Google Gemini API&lt;/strong&gt;, &lt;strong&gt;Ollama&lt;/strong&gt;, or &lt;strong&gt;Hugging Face Spaces&lt;/strong&gt;, which are affordable and easy to set up.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This bottom-up approach makes AI development more inclusive and practical - precisely what we need to strengthen African innovation ecosystems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why LLMs Matter for Low-Resource Healthcare
&lt;/h2&gt;

&lt;p&gt;In regions where healthcare systems face resource constraints, &lt;strong&gt;Generative AI can augment healthcare professionals&lt;/strong&gt; in meaningful ways.&lt;br&gt;&lt;br&gt;
For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;clinic assistant&lt;/strong&gt; that summarizes patient data from SMS or WhatsApp messages.
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;medication adherence system&lt;/strong&gt; that sends reminders via USSD in local languages.
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;radiology support tool&lt;/strong&gt; that triages scans using open AI models before human review.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not distant ideas - they are achievable today with creative design, smaller models, and hybrid architectures that combine &lt;strong&gt;RAG and Agentic reasoning&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Challenge: Safety, Security, and Bias
&lt;/h2&gt;

&lt;p&gt;As powerful as they are, LLMs come with challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Jailbreaks&lt;/strong&gt; and &lt;strong&gt;prompt injection attacks&lt;/strong&gt; that can bypass safety filters.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data poisoning&lt;/strong&gt; and embedded &lt;strong&gt;biases&lt;/strong&gt; from low-quality datasets.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy risks&lt;/strong&gt;, especially in sensitive domains like healthcare.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers and researchers must therefore emphasize &lt;strong&gt;ethical design, data protection, and responsible deployment&lt;/strong&gt;, especially in clinical or community-focused AI systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Learning Journey and Invitation
&lt;/h2&gt;

&lt;p&gt;I am currently learning through open materials, practical projects, and community collaborations, focusing on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LLM Engineering&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agentic AI Systems&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI in Healthcare for Low-Resource Settings&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I plan to continue sharing my findings, including both successes and mistakes as I explore this field further.&lt;br&gt;&lt;br&gt;
If you are a researcher, developer, or enthusiast working on similar topics, I would love to &lt;strong&gt;connect, exchange ideas, or collaborate&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Building the Future Together
&lt;/h3&gt;

&lt;p&gt;LLMs are no longer just chat tools; they are becoming the &lt;strong&gt;core operating systems&lt;/strong&gt; for the next generation of intelligent applications.&lt;br&gt;&lt;br&gt;
In low-resource settings, they present a unique opportunity to &lt;strong&gt;leapfrog traditional infrastructure&lt;/strong&gt; and create tools that directly serve local needs.&lt;/p&gt;

&lt;p&gt;If you have worked with &lt;strong&gt;RAG&lt;/strong&gt;, &lt;strong&gt;fine-tuning&lt;/strong&gt;, or &lt;strong&gt;Agentic AI&lt;/strong&gt; in similar environments, please share your thoughts or feedback below.&lt;br&gt;&lt;br&gt;
I am still learning, and your insights will help refine my understanding and future explorations.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Author:&lt;/strong&gt; Ronnie Kakunguwo&lt;br&gt;&lt;br&gt;
&lt;em&gt;Biomedical Engineer | AI Research Explorer | Focused on Accessible Innovation in Africa&lt;/em&gt;&lt;/p&gt;

</description>
      <category>llmengineering</category>
      <category>genai</category>
      <category>aiinhealthcare</category>
      <category>africatech</category>
    </item>
  </channel>
</rss>
