<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Elisha</title>
    <description>The latest articles on DEV Community by Elisha (@loktioncode).</description>
    <link>https://dev.to/loktioncode</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F376044%2Fabdf204d-03b2-44b0-afdb-711210dbe74f.jpg</url>
      <title>DEV Community: Elisha</title>
      <link>https://dev.to/loktioncode</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/loktioncode"/>
    <language>en</language>
    <item>
      <title>What Gemma 4 Means for Africa</title>
      <dc:creator>Elisha</dc:creator>
      <pubDate>Thu, 14 May 2026 10:53:50 +0000</pubDate>
      <link>https://dev.to/loktioncode/what-gemma-4-means-for-africa-12el</link>
      <guid>https://dev.to/loktioncode/what-gemma-4-means-for-africa-12el</guid>
      <description>&lt;p&gt;Most conversations about AI assume the internet is always available.&lt;/p&gt;

&lt;p&gt;But in many parts of Africa, that assumption breaks immediately.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A farmer in a remote area may not have stable connectivity.&lt;/li&gt;
&lt;li&gt;A rural clinic may experience internet outages for days.&lt;/li&gt;
&lt;li&gt;A school may only have access to low-power devices and limited bandwidth.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yet these are exactly the places where AI could have the biggest impact. That is why local AI matters and why Gemma 4 feels important. Not just because it is powerful, but because it represents something the AI industry desperately needs more of:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI designed to run closer to the real world, rather than exclusively in the cloud.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Real Systems, Real Constraints&lt;/h2&gt;

&lt;p&gt;Over the past few years, I have been building systems across agriculture, healthcare, UAVs, IoT, and offline AI infrastructure. One thing becomes obvious very quickly: &lt;strong&gt;Cloud-only AI does not always work for African environments.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Internet costs are too high.&lt;/li&gt;
&lt;li&gt;Latency is too unreliable.&lt;/li&gt;
&lt;li&gt;Infrastructure simply is not available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of asking, &lt;em&gt;"How large can the model become?"&lt;/em&gt; we must start asking, &lt;em&gt;"Can this still help someone when connectivity disappears?"&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Why Gemma 4 Feels Different&lt;/h3&gt;

&lt;p&gt;What stands out to me about Gemma 4 is the balance between capability and deployability. The model family spans multiple sizes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Lightweight edge-friendly models&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Dense models&lt;/strong&gt; for stronger reasoning&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mixture-of-Experts (MoE)&lt;/strong&gt; architectures for scalable performance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This flexibility matters because not every AI problem should require a server rack. Sometimes you need a model that can run on a Raspberry Pi, a mobile phone, a drone, or a low-power clinic workstation.&lt;/p&gt;
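&lt;p&gt;As a rough back-of-the-envelope check, you can estimate whether a quantized model fits a given device by multiplying parameter count by bytes per weight, plus some runtime overhead. The parameter counts and the 1.25x overhead factor below are illustrative assumptions, not published Gemma 4 figures:&lt;/p&gt;

```python
def model_footprint_gb(params_billion, bits_per_weight, overhead=1.25):
    """Rough RAM estimate for running a quantized model.

    The 1.25x overhead (KV cache, activations, runtime buffers) is an
    illustrative assumption, not a measured figure.
    """
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# Illustrative sizes, not official Gemma 4 configurations:
print(round(model_footprint_gb(4, 4), 1))   # a 4B model at 4-bit: 2.5 GB
print(round(model_footprint_gb(27, 4), 1))  # a 27B model at 4-bit: 16.9 GB
```

&lt;p&gt;By this estimate, a small 4-bit model sits comfortably inside an 8 GB single-board computer, while a larger dense model already wants a workstation-class machine.&lt;/p&gt;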


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/google" rel="noopener noreferrer"&gt;
        google
      &lt;/a&gt; / &lt;a href="https://github.com/google/gemma_pytorch" rel="noopener noreferrer"&gt;
        gemma_pytorch
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      The official PyTorch implementation of Google's Gemma models
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Gemma in PyTorch&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Gemma&lt;/strong&gt; is a family of lightweight, state-of-the-art open models built from research and technology used to create Google Gemini models. They include both text-only and multimodal decoder-only large language models, with open weights, pre-trained variants, and instruction-tuned variants. For more details, please check out the following links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ai.google.dev/gemma" rel="nofollow noopener noreferrer"&gt;Gemma on Google AI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.kaggle.com/models/google/gemma-3" rel="nofollow noopener noreferrer"&gt;Gemma on Kaggle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/gemma3" rel="nofollow noopener noreferrer"&gt;Gemma on Vertex AI Model Garden&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the official PyTorch implementation of Gemma models. We provide model and inference implementations using both PyTorch and PyTorch/XLA, and support running inference on CPU, GPU and TPU.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Updates&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;[March 12th, 2025 🔥] Support Gemma v3. You can find the checkpoints &lt;a href="https://www.kaggle.com/models/google/gemma-3/pytorch" rel="nofollow noopener noreferrer"&gt;on Kaggle&lt;/a&gt; and &lt;a href="https://huggingface.co/models?other=gemma_torch" rel="nofollow noopener noreferrer"&gt;Hugging Face&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[June 26th, 2024] Support Gemma v2. You can find the checkpoints &lt;a href="https://www.kaggle.com/models/google/gemma-2/pytorch" rel="nofollow noopener noreferrer"&gt;on Kaggle&lt;/a&gt; and Hugging Face&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[April 9th, 2024] Support CodeGemma. You can find the checkpoints &lt;a href="https://www.kaggle.com/models/google/codegemma/pytorch" rel="nofollow noopener noreferrer"&gt;on Kaggle&lt;/a&gt; and &lt;a href="https://huggingface.co/collections/google/codegemma-release-66152ac7b683e2667abdee11" rel="nofollow noopener noreferrer"&gt;Hugging Face&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;[April 5, 2024] Support Gemma v1.1. You can…&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/google/gemma_pytorch" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
 


&lt;h2&gt;Agriculture Is One of the Biggest Opportunities&lt;/h2&gt;

&lt;p&gt;Agriculture remains a pillar of many African economies, yet most farmers lack access to advanced decision support. This is where edge AI becomes powerful.&lt;/p&gt;

&lt;p&gt;I have worked on concepts involving:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Crop disease detection&lt;/li&gt;
&lt;li&gt;Smart farm monitoring&lt;/li&gt;
&lt;li&gt;UAV-based crop analysis&lt;/li&gt;
&lt;li&gt;AI-assisted agricultural guidance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Imagine a farmer capturing an image of a diseased crop and receiving offline analysis with natural language follow-up questions immediately. By moving intelligence directly into the field, we remove the dependency on stable internet.&lt;/p&gt;
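&lt;p&gt;One practical pattern for such field devices is store-and-forward: analyze locally, queue the results, and sync whenever connectivity returns. A minimal sketch, where &lt;code&gt;analyze&lt;/code&gt; and &lt;code&gt;upload&lt;/code&gt; are hypothetical stand-ins rather than a real Gemma API:&lt;/p&gt;

```python
from collections import deque

class FieldDevice:
    """Store-and-forward sketch: analyze offline, sync when a link appears."""

    def __init__(self, analyze, upload):
        self.analyze = analyze   # local model call (hypothetical stub)
        self.upload = upload     # network call (hypothetical stub)
        self.outbox = deque()    # results awaiting connectivity

    def capture(self, image_id):
        # Runs entirely on-device, so it works with no internet at all.
        result = self.analyze(image_id)
        self.outbox.append({"image": image_id, "result": result})
        return result

    def sync(self):
        # Flush queued results; stop quietly if the network drops mid-way.
        sent = 0
        while self.outbox:
            try:
                self.upload(self.outbox[0])
            except ConnectionError:
                break            # keep the remainder queued for next time
            self.outbox.popleft()
            sent += 1
        return sent
```

&lt;p&gt;The farmer gets an answer immediately; the wider system still benefits from aggregated data once a connection eventually appears.&lt;/p&gt;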




&lt;h2&gt;Rural Healthcare Could Benefit Massively&lt;/h2&gt;

&lt;p&gt;In rural clinics where staffing is limited and connectivity is unreliable, a lightweight Gemma 4 deployment could assist with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Patient note summarization&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multilingual communication&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Symptom intake&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Medical documentation&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Privacy and resilience are key here. If a clinic loses internet access, the system should not stop functioning. Local inference keeps support tools available around the clock.&lt;/p&gt;
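&lt;p&gt;That resilience requirement can be expressed as a simple fallback: prefer the cloud model when the link is up, but never let an outage take the tool down. A sketch with hypothetical caller-supplied callables, not a real Gemma or cloud API:&lt;/p&gt;

```python
def answer(prompt, cloud_model, local_model, timeout_s=2.0):
    """Prefer cloud inference, but fall back to on-device inference.

    cloud_model and local_model are caller-supplied callables; in a real
    clinic deployment, local_model would wrap an on-device runtime.
    """
    try:
        return {"source": "cloud", "text": cloud_model(prompt, timeout_s)}
    except (ConnectionError, TimeoutError):
        # Outage or slow link: the clinic keeps working on local inference.
        return {"source": "local", "text": local_model(prompt)}
```

&lt;p&gt;Tagging each answer with its source also lets staff know whether they got the larger cloud model or the on-device one.&lt;/p&gt;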




&lt;h2&gt;Education and STEM Learning&lt;/h2&gt;

&lt;p&gt;Many students in underserved communities face limited internet and outdated hardware. A lightweight model running on low-cost Android devices could help students learn mathematics and science through natural interaction without needing a data plan. This levels the playing field for developers building educational tools tailored for local environments.&lt;/p&gt;




&lt;h2&gt;Why Open Models Matter for Africa&lt;/h2&gt;

&lt;p&gt;Open models allow developers to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Deploy locally&lt;/strong&gt; to reduce operational costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preserve privacy&lt;/strong&gt; by keeping data on-device.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build resilience&lt;/strong&gt; against connectivity issues.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Openness means innovation is no longer limited to organizations with massive cloud budgets: smaller teams, startups, and independent researchers in Africa can solve local problems directly.&lt;/p&gt;

&lt;h2&gt;The Future of AI Cannot Be Cloud-Only&lt;/h2&gt;

&lt;p&gt;The future of AI will not only be built inside datacenters. It will be built on farms, in classrooms, in drones, and in community clinics. Models like Gemma 4 lower the barrier between AI capability and real-world deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The future of AI should work everywhere, not just where the internet is strongest.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;What do you think?&lt;/h3&gt;

&lt;p&gt;I'm curious to hear from other developers building for resource-constrained environments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  What is the biggest hurdle you face when deploying AI models offline?&lt;/li&gt;
&lt;li&gt;  Do you think the "MoE" (Mixture of Experts) approach is viable for edge devices, or should we stick to smaller, dense models?&lt;/li&gt;
&lt;li&gt;  Have you experimented with Gemma 4 in a non-cloud environment yet?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Let’s discuss in the comments!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>gemmachallenge</category>
      <category>gemma</category>
      <category>ai</category>
    </item>
    <item>
      <title>Code Reviews using Github Team</title>
      <dc:creator>Elisha</dc:creator>
      <pubDate>Fri, 04 Sep 2020 11:14:38 +0000</pubDate>
      <link>https://dev.to/loktioncode/code-reviews-using-github-team-3bln</link>
      <guid>https://dev.to/loktioncode/code-reviews-using-github-team-3bln</guid>
      <description>&lt;p&gt;Before we even get started I know someone is thinking, " Why should I bother with doing code reviews?". Well, it is good practice to do code reviews, it helps give a fresh set of eyes to identify bugs and simple coding errors before your product gets to the next step, making the process for getting the software to the customer more efficient. Simply reviewing someone's code and identifying errors is great. To do this we use branches on Github.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I have an article that can help developers use GitHub Team for code reviews&lt;/strong&gt;. &lt;a href="https://loktioncode.hashnode.dev/code-reviews-using-github"&gt;You can find it here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>git</category>
      <category>github</category>
      <category>codequality</category>
    </item>
  </channel>
</rss>
