<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Awa Destiny Aghangu</title>
    <description>The latest articles on DEV Community by Awa Destiny Aghangu (@awa_destiny).</description>
    <link>https://dev.to/awa_destiny</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3841799%2F13b267aa-d3e5-4023-a46b-968fb6cf7040.png</url>
      <title>DEV Community: Awa Destiny Aghangu</title>
      <link>https://dev.to/awa_destiny</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/awa_destiny"/>
    <language>en</language>
    <item>
      <title>How to Get Your First Accepted Contribution in Outreachy</title>
      <dc:creator>Awa Destiny Aghangu</dc:creator>
      <pubDate>Tue, 14 Apr 2026 12:31:47 +0000</pubDate>
      <link>https://dev.to/awa_destiny/how-to-get-your-first-accepted-contribution-in-outreachy-na2</link>
      <guid>https://dev.to/awa_destiny/how-to-get-your-first-accepted-contribution-in-outreachy-na2</guid>
      <description>&lt;p&gt;If you are reading this, you are probably in the same place I was a few weeks ago, staring at the Outreachy website and trying to figure out where to actually start. The program page is helpful, but there is a gap between reading the docs and knowing what to do on a Tuesday afternoon when you are trying to get a contribution in.&lt;/p&gt;

&lt;p&gt;This guide covers what I wish I had known earlier.&lt;/p&gt;




&lt;h2&gt;
  
  
  Before the Application Phase Opens
&lt;/h2&gt;

&lt;p&gt;Most people start thinking about Outreachy only after the application window opens. That is a mistake. By the time the portal is live, applicants who did their homework earlier are already moving faster.&lt;/p&gt;

&lt;p&gt;Here is what you should do in the weeks before:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read the eligibility rules carefully.&lt;/strong&gt; The Outreachy eligibility requirements are specific about work hours, student status, and residency. Do not assume you qualify. Check the &lt;a href="https://www.outreachy.org/docs/applicant/#eligibility" rel="noopener noreferrer"&gt;eligibility page&lt;/a&gt; before anything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browse the previous round's projects.&lt;/strong&gt; Outreachy publishes the list of past participating organizations. Go through them. This gives you a realistic picture of what projects show up, what skills are needed, and which communities you could actually contribute to. You will also start recognizing org names when the new round opens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up accounts early.&lt;/strong&gt; Several contributing communities require accounts that take time to activate. Fedora, for example, uses FAS (Fedora Accounts System). Create it before you need it. Same applies to Matrix, IRC, mailing lists, or anything else a community uses for communication. These are not the kinds of things you want to be troubleshooting the same day you are trying to submit a contribution.&lt;/p&gt;




&lt;h2&gt;
  
  
  When the Application Phase Opens
&lt;/h2&gt;

&lt;p&gt;The initial application has two parts: the eligibility check and the essay questions. Get the eligibility check done first. If something disqualifies you, you want to know before you spend time on essays.&lt;/p&gt;

&lt;p&gt;For the essays, answer honestly. Outreachy mentors read a lot of these. Generic answers about passion for open source are not memorable. Write about what you actually did, what you actually struggled with, and what you actually want to work on.&lt;/p&gt;

&lt;p&gt;Once your initial application is approved, you move into the contribution phase. This is where most of the real work happens.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Pick a Project
&lt;/h2&gt;

&lt;p&gt;You are allowed to contribute to more than one project. Do not try to contribute to five. Pick one or two that are a realistic fit for your current skills and available time.&lt;/p&gt;

&lt;p&gt;When evaluating a project, look at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The contribution requirements.&lt;/strong&gt; Some projects list specific tasks applicants must complete. Others are more open. Know which type you are dealing with before you start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mentor responsiveness.&lt;/strong&gt; Before committing time to a project, send one message to the mentor or drop a line in the community chat. See how long it takes to get a reply. A mentor who is unresponsive during the application phase will likely be unresponsive during the internship.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The codebase or documentation state.&lt;/strong&gt; Clone the repo. Read the README. If you cannot get a development environment running after a reasonable effort and there is no help available, that is a signal worth paying attention to.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Making Your First Contribution
&lt;/h2&gt;

&lt;p&gt;This is where people get stuck. The task feels either too small or too large.&lt;/p&gt;

&lt;p&gt;A few things that help:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read the contribution guide if one exists.&lt;/strong&gt; Many projects have a &lt;code&gt;CONTRIBUTING.md&lt;/code&gt; or equivalent. Read it fully before touching anything. It will tell you how to format commits, whether there is a PR template, and what review turnaround looks like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with what is explicitly listed.&lt;/strong&gt; If the project has a list of applicant tasks or labeled issues like &lt;code&gt;good-first-issue&lt;/code&gt;, start there. This shows the mentor that you can follow instructions, which is a real and valued skill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask before you guess.&lt;/strong&gt; If you are not sure whether an approach is correct, ask in the community channel before you build it. A short question saves you from spending three hours on something the mentor will ask you to redo.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Record your contribution.&lt;/strong&gt; In Outreachy, you must log contributions in the application portal. Do this as you go. Do not wait until the last day of the contribution phase and try to reconstruct everything from memory.&lt;/p&gt;




&lt;h2&gt;
  
  
  Communication Habits That Actually Matter
&lt;/h2&gt;

&lt;p&gt;Open source communities operate mostly in public, async channels. How you communicate there is part of your contribution record, even without a code commit attached to it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ask questions in public channels when possible.&lt;/strong&gt; If you message a mentor privately with a question that other applicants might have, you are creating a private silo of information. Asking in the public channel also shows engagement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be specific when you ask for help.&lt;/strong&gt; "It doesn't work" is not a useful bug report. Say what you tried, what you expected, and what actually happened.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Acknowledge responses.&lt;/strong&gt; If a mentor reviews your work and leaves feedback, respond, even if just to confirm you saw it and are working on the changes.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Final Application
&lt;/h2&gt;

&lt;p&gt;The final application is submitted before the contribution phase ends. Do not leave this until the last few hours.&lt;/p&gt;

&lt;p&gt;The most important section is the timeline. You will be asked to break down the internship project into weekly tasks. This shows that you understand the scope of the work. If the project proposal describes a vague outcome, ask your mentor to help you make it concrete before you write your timeline.&lt;/p&gt;

&lt;p&gt;Write your timeline in enough detail that someone who does not know the project could understand what you plan to do each week. Mentors use this to evaluate whether you have thought seriously about the work.&lt;/p&gt;




&lt;h2&gt;
  
  
  One Thing That Is Easy to Overlook
&lt;/h2&gt;

&lt;p&gt;Keep a short running log of what you do each day during the contribution phase. It does not need to be formal. Even a few bullet points in a text file is fine. When it comes time to fill out the contribution log in the portal, or write about your experience, this log is invaluable. Memory is not reliable after three weeks of parallel tasks.&lt;/p&gt;
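&lt;p&gt;The log can be as low-tech as appending one line per day from the terminal. A purely illustrative sketch (the file name is arbitrary):&lt;/p&gt;

```shell
# Append a dated entry to a plain-text contribution log.
echo "$(date +%F): fixed typo in README, opened PR" >> contribution-log.txt

# Review the most recent entries.
tail -n 3 contribution-log.txt
```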




&lt;p&gt;That is the guide. If you are currently in the contribution phase and something here contradicts what your specific project requires, go with what your project documentation and mentor say. Every community has its own quirks. But the fundamentals above apply broadly.&lt;/p&gt;

&lt;p&gt;Good luck.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>career</category>
      <category>opensource</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Getting Started with RamaLama on Fedora</title>
      <dc:creator>Awa Destiny Aghangu</dc:creator>
      <pubDate>Wed, 01 Apr 2026 10:31:49 +0000</pubDate>
      <link>https://dev.to/awa_destiny/getting-started-with-ramalama-on-fedora-1nn8</link>
      <guid>https://dev.to/awa_destiny/getting-started-with-ramalama-on-fedora-1nn8</guid>
      <description>&lt;p&gt;RamaLama is an open-source tool built under the &lt;a href="https://github.com/containers" rel="noopener noreferrer"&gt;containers&lt;/a&gt; organization that makes running AI models locally as straightforward as working with containers. The goal is to make AI inference boring and predictable. RamaLama handles host configuration by pulling an &lt;a href="https://opencontainers.org/" rel="noopener noreferrer"&gt;OCI (Open Container Initiative)&lt;/a&gt; container image tuned to the hardware it detects on your system, so you skip the manual dependency setup entirely.&lt;/p&gt;

&lt;p&gt;If you already work with Podman or Docker, the mental model is familiar. Models are pulled, listed, and removed much like container images.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before installing RamaLama, make sure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Fedora system (this guide uses Fedora with &lt;code&gt;dnf&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://podman.io/" rel="noopener noreferrer"&gt;Podman&lt;/a&gt; installed, RamaLama uses it as the default container engine&lt;/li&gt;
&lt;li&gt;Sufficient disk space for model storage (models range from ~2GB to 10GB+)&lt;/li&gt;
&lt;li&gt;At least 8GB RAM for smaller models; 16GB+ recommended for 7B+ parameter models&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;On Fedora, RamaLama is available directly from the default repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;dnf &lt;span class="nb"&gt;install &lt;/span&gt;ramalama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, verify the version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ramalama version x.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;On first run, RamaLama inspects your system for GPU support and falls back to CPU if no GPU is found. It then pulls the appropriate OCI container image with all the inference dependencies baked in, including &lt;a href="https://github.com/ggml-org/llama.cpp" rel="noopener noreferrer"&gt;&lt;code&gt;llama.cpp&lt;/code&gt;&lt;/a&gt;, which powers the model execution layer. Models are stored locally and reused across runs, so the pull only happens once per model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Registries
&lt;/h2&gt;

&lt;p&gt;RamaLama supports pulling models from multiple registries. The default registry is &lt;a href="https://ollama.com/library" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;, but you can reference models from any supported source using a transport prefix:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Registry&lt;/th&gt;
&lt;th&gt;Prefix&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://ollama.com/library" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;ollama://&lt;/code&gt; or no prefix&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://huggingface.co/models" rel="noopener noreferrer"&gt;Hugging Face&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;huggingface://&lt;/code&gt; or &lt;code&gt;hf://&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://modelscope.cn/" rel="noopener noreferrer"&gt;ModelScope&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;modelscope://&lt;/code&gt; or &lt;code&gt;ms://&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://opencontainers.org" rel="noopener noreferrer"&gt;OCI Registries&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;oci://&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://registry.ramalama.com" rel="noopener noreferrer"&gt;RamaLama Registry&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;rlcr://&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Direct URL&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;https://&lt;/code&gt;, &lt;code&gt;http://&lt;/code&gt;, &lt;code&gt;file://&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Pulling and Running Models
&lt;/h2&gt;

&lt;h3&gt;
  
  
  From Ollama (default)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama run granite3.1-moe:3b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This pulls the &lt;code&gt;granite3.1-moe:3b&lt;/code&gt; model from the Ollama registry and drops you into an interactive chat session. On first run, the model is downloaded to local storage; subsequent runs reuse it.&lt;/p&gt;

&lt;h3&gt;
  
  
  From Hugging Face
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama run huggingface://MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Some newer Hugging Face models may fail with a &lt;code&gt;gguf_init_from_file_impl: failed to read magic&lt;/code&gt; error due to format incompatibilities with &lt;code&gt;llama.cpp&lt;/code&gt;. When that happens, look for a pre-converted GGUF version of the same model on Hugging Face by searching the model name with "GGUF" appended. In this case, &lt;code&gt;MaziyarPanahi/Mistral-7B-Instruct-v0.3-GGUF&lt;/code&gt; worked as a compatible alternative.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Useful Flags
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Set Context Window Size: &lt;code&gt;--ctx-size&lt;/code&gt; / &lt;code&gt;-c&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;By default, RamaLama does not override the model's native context length. For &lt;code&gt;llama3.1:8b&lt;/code&gt;, that default is &lt;code&gt;131072&lt;/code&gt; tokens, which requires ~16GB of KV cache allocation, well above what most dev machines can handle.&lt;/p&gt;

&lt;p&gt;Use the &lt;code&gt;-c&lt;/code&gt; flag to cap the context size:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama run &lt;span class="nt"&gt;-c&lt;/span&gt; 16384 llama3.1:8b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A context size of &lt;code&gt;16384&lt;/code&gt; tokens requires ~2GB of KV cache for &lt;code&gt;llama3.1:8b&lt;/code&gt;. You can use the &lt;a href="https://lmcache.ai/kv_cache_calculator.html" rel="noopener noreferrer"&gt;KV Cache Size Calculator&lt;/a&gt; to find the right value for your available memory and target model. On memory-constrained machines, this flag can be the difference between a model that runs and one that exhausts system memory.&lt;/p&gt;
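&lt;p&gt;If you prefer to sanity-check the math yourself, the KV cache footprint follows directly from the model's hyperparameters. A minimal sketch; the layer and head values below are the published Llama 3.1 8B ones (32 layers, 8 KV heads, head dimension 128, fp16 cache), so verify them for any other model:&lt;/p&gt;

```python
# Rough KV cache size estimate for a transformer with grouped-query attention.
# Defaults are Llama 3.1 8B values (assumption: 32 layers, 8 KV heads,
# head dim 128, fp16 cache); adjust for other models and quantizations.
def kv_cache_bytes(ctx_tokens, n_layers=32, n_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):
    # The leading 2 accounts for storing both the K and the V tensor per layer.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * ctx_tokens

gib = 1024 ** 3
print(kv_cache_bytes(16384) / gib)   # 2.0  GiB, the capped context
print(kv_cache_bytes(131072) / gib)  # 16.0 GiB, the full native context
```

&lt;p&gt;Both results line up with the ~2GB and ~16GB figures quoted above.&lt;/p&gt;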

&lt;h3&gt;
  
  
  Set Temperature: &lt;code&gt;--temp&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Temperature controls the randomness of the model's output. The default is typically around &lt;code&gt;0.8&lt;/code&gt;. Setting it to &lt;code&gt;0&lt;/code&gt; makes the model more deterministic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama run &lt;span class="nt"&gt;--temp&lt;/span&gt; 0 granite3.1-moe:3b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A temperature of &lt;code&gt;0&lt;/code&gt; is useful for factual Q&amp;amp;A or benchmarking where you want consistent, reproducible outputs. Keep in mind it reduces randomness, not hallucination. If the knowledge is absent from the model's training data, &lt;code&gt;--temp 0&lt;/code&gt; will just make it consistently wrong.&lt;/p&gt;

&lt;h3&gt;
  
  
  Select Inference Backend: &lt;code&gt;--backend&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;RamaLama auto-detects the best backend for your hardware, but you can override it explicitly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama run &lt;span class="nt"&gt;--backend&lt;/span&gt; vulkan granite3.1-moe:3b   &lt;span class="c"&gt;# AMD/Intel or CPU fallback&lt;/span&gt;
ramalama run &lt;span class="nt"&gt;--backend&lt;/span&gt; cuda granite3.1-moe:3b     &lt;span class="c"&gt;# NVIDIA&lt;/span&gt;
ramalama run &lt;span class="nt"&gt;--backend&lt;/span&gt; rocm granite3.1-moe:3b     &lt;span class="c"&gt;# AMD ROCm&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On systems without a GPU, RamaLama falls back to CPU inference automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enable Debug Output: &lt;code&gt;--debug&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;--debug&lt;/code&gt; is a global flag and must be placed before the subcommand:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama &lt;span class="nt"&gt;--debug&lt;/span&gt; run granite3.1-moe:3b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prints the underlying container commands RamaLama executes, hardware detection steps, and registry fetch details. Useful when troubleshooting model compatibility issues, unexpected behavior, or hardware detection problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Models
&lt;/h2&gt;

&lt;p&gt;List locally stored models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pull a model without running it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama pull llama3.1:8b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remove a model from local storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama &lt;span class="nb"&gt;rm &lt;/span&gt;llama3.1:8b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Serving a Model as an API
&lt;/h2&gt;

&lt;p&gt;RamaLama can expose a model as an OpenAI-compatible REST endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama serve granite3.1-moe:3b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts a local server on port &lt;code&gt;8080&lt;/code&gt; by default. You can point any OpenAI-compatible client at it without changing how those clients are written. Useful for integrating a local model into applications, RAG pipelines, or tooling like LangChain and LlamaIndex.&lt;/p&gt;
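&lt;p&gt;For example, you can exercise the endpoint with nothing but &lt;code&gt;curl&lt;/code&gt;. A sketch assuming the server from the command above is running on the default port (for a single-model server, the &lt;code&gt;model&lt;/code&gt; field typically just needs to be present):&lt;/p&gt;

```shell
# Query the local server through the OpenAI chat-completions API.
# Assumes `ramalama serve granite3.1-moe:3b` is running on port 8080.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "granite3.1-moe:3b",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```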

&lt;h3&gt;
  
  
  Web UI
&lt;/h3&gt;

&lt;p&gt;When running &lt;code&gt;ramalama serve&lt;/code&gt;, a browser-based chat interface is available at &lt;code&gt;http://localhost:8080&lt;/code&gt; by default. To disable it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama serve &lt;span class="nt"&gt;--webui&lt;/span&gt; off granite3.1-moe:3b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The web UI is powered by the &lt;code&gt;llama.cpp&lt;/code&gt; HTTP server's built-in interface and gives you a quick way to interact with the model without writing any client code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Things to Watch Out For
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model format compatibility:&lt;/strong&gt; Some Hugging Face models require a pre-converted GGUF version to work with RamaLama. Stick to GGUF-format models when in doubt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory and context size:&lt;/strong&gt; Always check the model's default context length before running on a memory-constrained machine. Use &lt;code&gt;-c&lt;/code&gt; to cap it appropriately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model size vs. accuracy:&lt;/strong&gt; Smaller models (3B) are fast and lightweight but may lack knowledge on niche topics. For factual accuracy, 7B+ models are noticeably more reliable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;--debug&lt;/code&gt; flag placement:&lt;/strong&gt; It must come before the subcommand, i.e. &lt;code&gt;ramalama --debug run&lt;/code&gt; not &lt;code&gt;ramalama run --debug&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RamaLama is still in active development:&lt;/strong&gt; The project moves fast. Flag names, behaviors, and supported features can change between versions. When in doubt, check &lt;code&gt;ramalama --help&lt;/code&gt; or the official docs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ramalama.ai/docs/introduction" rel="noopener noreferrer"&gt;RamaLama Official Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/containers/ramalama" rel="noopener noreferrer"&gt;RamaLama GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.ramalama.com" rel="noopener noreferrer"&gt;RamaLama Blog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>linux</category>
      <category>tooling</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Getting Started with Docling: PDF to Structured Data</title>
      <dc:creator>Awa Destiny Aghangu</dc:creator>
      <pubDate>Thu, 26 Mar 2026 13:04:48 +0000</pubDate>
      <link>https://dev.to/awa_destiny/getting-started-with-docling-pdf-to-structured-data-108o</link>
      <guid>https://dev.to/awa_destiny/getting-started-with-docling-pdf-to-structured-data-108o</guid>
      <description>&lt;p&gt;&lt;a href="https://www.docling.ai/" rel="noopener noreferrer"&gt;Docling&lt;/a&gt; is an open-source document conversion tool from &lt;a href="https://research.ibm.com/blog/docling-generative-AI" rel="noopener noreferrer"&gt;IBM Research&lt;/a&gt;. It takes PDFs and converts them into clean, structured output like Markdown, HTML, JSON, or plain text. It handles layout analysis, table extraction, image embedding, OCR, and even a vision-based pipeline for complex documents.&lt;/p&gt;

&lt;p&gt;This guide walks through installation, the core conversion options, and the advanced flags worth knowing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Use a virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; venv .venv
&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate  &lt;span class="c"&gt;# Windows: .venv\Scripts\activate&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;docling
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="c"&gt;# Should output: Docling version: 2.xx.x&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Basic Conversion
&lt;/h2&gt;

&lt;p&gt;Docling accepts both local file paths and remote URLs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling https://example.com/document.pdf
docling ./my-report.pdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Default output is Markdown, written to your current directory. For a typical document, expect around two minutes and minimal resource usage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Output Formats
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Markdown (default)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling file.pdf
&lt;span class="c"&gt;# or explicitly&lt;/span&gt;
docling file.pdf &lt;span class="nt"&gt;--to&lt;/span&gt; md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Text, headings, tables, and images are all preserved. Images are embedded as base64 data URIs. Markdown's mix of structure and plain text makes it a practical default for downstream data and LLM pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  HTML
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling file.pdf &lt;span class="nt"&gt;--to&lt;/span&gt; html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same extracted content wrapped in HTML with basic browser styling, useful for human-readable web viewing. The underlying extraction is identical to the Markdown output; only the presentation layer changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  JSON
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling file.pdf &lt;span class="nt"&gt;--to&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every element (heading, paragraph, table, image) becomes a structured node with semantic metadata. Use this when you need programmatic access to document structure, not just raw text.&lt;/p&gt;
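&lt;p&gt;Docling also exposes this as a Python API, which is usually more convenient than shelling out when you are consuming the structured output programmatically. A minimal sketch based on the official docs; verify the method names against your installed version:&lt;/p&gt;

```python
# Minimal Docling Python API sketch (assumes `pip install docling`).
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("my-report.pdf")  # accepts local paths and URLs

# The same structured document the CLI writes, as Python objects:
doc_dict = result.document.export_to_dict()      # JSON-style nested structure
markdown = result.document.export_to_markdown()  # Markdown string
```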

&lt;h3&gt;
  
  
  Plain Text
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling file.pdf &lt;span class="nt"&gt;--to&lt;/span&gt; text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All structure stripped. Images become &lt;code&gt;&amp;lt;!-- image --&amp;gt;&lt;/code&gt; placeholders. Useful only when you need raw text and nothing else.&lt;/p&gt;




&lt;h2&gt;
  
  
  Advanced Options
&lt;/h2&gt;

&lt;h3&gt;
  
  
  VLM Pipeline
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling &lt;span class="nt"&gt;--pipeline&lt;/span&gt; vlm &lt;span class="nt"&gt;--vlm-model&lt;/span&gt; granite_docling file.pdf &lt;span class="nt"&gt;--output&lt;/span&gt; vlm/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The standard pipeline reads the text layer of the PDF. The VLM (Vision Language Model) pipeline processes the document visually, the way a human would read it. This matters in a few specific situations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image-based pages&lt;/strong&gt;: Cover pages or sections built entirely from images have no text layer for the standard pipeline to read. The VLM pipeline recovers them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hidden text artifacts&lt;/strong&gt;: Old revisions sometimes leave hidden text beneath visible content. The standard pipeline surfaces both strings. The VLM pipeline reads what's visually rendered, so the artifact doesn't appear.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex layouts&lt;/strong&gt;: Overall structure and layout understanding are noticeably better.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The trade-offs are real, though. The VLM pipeline takes significantly more time and is resource (CPU/GPU/RAM) intensive compared to the standard pipeline. It also has its own failure modes: some Unicode symbols like &lt;code&gt;✔&lt;/code&gt; that the standard pipeline captures correctly may be replaced with approximate text like &lt;code&gt;(in-place)&lt;/code&gt;, and some passages may repeat in the output.&lt;/p&gt;

&lt;p&gt;Use the VLM pipeline when accuracy matters more than speed. For bulk processing, stick with the standard pipeline unless you have the resources to run the VLM pipeline at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disabling OCR
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling file.pdf &lt;span class="nt"&gt;--no-ocr&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For PDFs with a proper text layer (digitally created documents), disabling OCR has no effect on output quality and shaves off a little processing time. For scanned documents, disabling OCR means text in images won't be extracted at all.&lt;/p&gt;

&lt;h3&gt;
  
  
  Referenced Image Export
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling file.pdf &lt;span class="nt"&gt;--image-export-mode&lt;/span&gt; referenced &lt;span class="nt"&gt;--output&lt;/span&gt; out/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, images are embedded as base64 in the output file, which keeps everything self-contained but produces large files. With &lt;code&gt;referenced&lt;/code&gt;, images are written as separate files and the Markdown links to them by path. Use this when images need to be processed independently or when a smaller output file is preferred.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disabling Table Structure Recovery
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docling file.pdf &lt;span class="nt"&gt;--no-tables&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Table content is still extracted, but instead of a proper Markdown table with rows and columns, everything collapses into a single cell. Useful if you are processing in bulk and handling table structure downstream.&lt;/p&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/docling-project/docling" rel="noopener noreferrer"&gt;Docling on GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docling-project.github.io/docling/" rel="noopener noreferrer"&gt;Official Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Outreachy Contribution Portfolio</title>
      <dc:creator>Awa Destiny Aghangu</dc:creator>
      <pubDate>Wed, 25 Mar 2026 10:42:20 +0000</pubDate>
      <link>https://dev.to/awa_destiny/outreachy-contribution-portfolio-1ak2</link>
      <guid>https://dev.to/awa_destiny/outreachy-contribution-portfolio-1ak2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Develop a SLM/LLM using RamaLama RAG based off Fedora RPM Packaging Guidelines&lt;br&gt;
&lt;strong&gt;Phase:&lt;/strong&gt; Application (March to April 2026)&lt;br&gt;
&lt;strong&gt;Candidate:&lt;/strong&gt; Awa Destiny Aghangu&lt;br&gt;
&lt;strong&gt;Email:&lt;/strong&gt; &lt;a href="mailto:awadestinyaghangu@gmail.com"&gt;awadestinyaghangu@gmail.com&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Matrix Chat:&lt;/strong&gt; &lt;a href="https://matrix.to/#/@montana-d:fedora.im?web-instance[element.io]=chat.fedoraproject.org" rel="noopener noreferrer"&gt;@montana-d:fedora.im&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;FAS Profile:&lt;/strong&gt; &lt;a href="https://accounts.fedoraproject.org/user/montana-d/" rel="noopener noreferrer"&gt;@montana-d&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Community: Introduce Yourself
&lt;/h2&gt;

&lt;p&gt;Posted an introduction on the Fedora Project discussion forum as part of project onboarding.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Forum thread: &lt;a href="https://discussion.fedoraproject.org/t/outreachy-2026-develop-a-slm-llm-using-ramalama-rag-introduce-yourself/184062/8" rel="noopener noreferrer"&gt;Outreachy 2026 - Develop a SLM/LLM using RamaLama RAG: Introduce Yourself&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Complete FAS Profile
&lt;/h2&gt;

&lt;p&gt;Set up and completed the Fedora Account System profile. This included signing the FPCA, adding pronouns, timezone, and chat nickname, linking external accounts, and joining the Fedora Matrix mentoring channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/116#issuecomment-591765" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/116&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Mar 24, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 2: Set Up a Personal Blog
&lt;/h2&gt;

&lt;p&gt;Created a public blog on dev.to as a dedicated space for writing during the Outreachy application phase, making contributions visible and accessible without requiring an account to read.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/117#issuecomment-591774" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/117&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Mar 24, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 3: Write an Intro Blog Post About Fedora
&lt;/h2&gt;

&lt;p&gt;Wrote and published an introductory blog post on the Fedora Project in plain language, covering the Four Foundations, the RPM packaging system, governance structure, and advice for future Outreachy applicants.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/118#issuecomment-591814" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/118&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blog post: &lt;a href="https://dev.to/awa_destiny/ive-been-using-fedora-for-years-i-had-no-idea-what-it-actually-was-1nfg"&gt;I've Been Using Fedora for Years. I Had No Idea What It Actually Was&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Mar 24, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 4: Promote Blog Post on Social Media
&lt;/h2&gt;

&lt;p&gt;Promoted the intro blog post on LinkedIn and Mastodon with a personal caption and a clear call to action, demonstrating social media content creation for an open source audience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/119#issuecomment-591824" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/119&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LinkedIn: &lt;a href="https://www.linkedin.com/posts/aghangu-destiny-awa-846960346_ive-been-using-fedora-for-years-i-had-no-activity-7442249135723921408-QKpQ?utm_source=share&amp;amp;utm_medium=member_desktop&amp;amp;rcm=ACoAAFajdrYBrZnX9CoYyhX6serYfbaiQyZHrOg" rel="noopener noreferrer"&gt;Intro post promotion&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Mastodon: &lt;a href="https://mastodon.social/@awa_destiny/116285401997357800" rel="noopener noreferrer"&gt;Intro post promotion&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Mar 24, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 5: Write an Onboarding Guide for Applicants
&lt;/h2&gt;

&lt;p&gt;Wrote and published a practical guide for future Outreachy applicants covering preparation before the application phase, project selection, making a first contribution, communication habits, and the final application timeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/120#issuecomment-615324" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/120&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blog post: &lt;a href="https://dev.to/awa_destiny/how-to-get-your-first-accepted-contribution-in-outreachy-na2"&gt;How to Get Your First Accepted Contribution in Outreachy&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Apr 14, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  Advanced: Prepare a Contributions Portfolio
&lt;/h2&gt;

&lt;p&gt;Created and published a contributions portfolio on dev.to to organize and showcase all Outreachy application phase contributions in a single, publicly accessible page for mentor review.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/121#issuecomment-592160" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/121&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Mar 25, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  Docling: Explore Document Processing Basics
&lt;/h2&gt;

&lt;p&gt;Explored the Docling CLI to understand its role in the RamaLama RAG pipeline. Installed the tool, parsed complex PDF documents into multiple formats, and analyzed how different command-line flags impact processing speed and output quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/122#issuecomment-593564" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/122&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub repo: &lt;a href="https://github.com/Montana-A/docling-basics/tree/main?tab=readme-ov-file#docling-document-processing-exploration" rel="noopener noreferrer"&gt;https://github.com/Montana-A/docling-basics/tree/main?tab=readme-ov-file#docling-document-processing-exploration&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Blog post: &lt;a href="https://dev.to/awa_destiny/getting-started-with-docling-pdf-to-structured-data-108o"&gt;Getting Started with Docling: PDF to Structured Data&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Mar 25 – 26, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  RamaLama: Set Up RamaLama and Explore Basics
&lt;/h2&gt;

&lt;p&gt;Explored RamaLama on Fedora, pulled models across different transports, and ran prompts on each. Documented the commands and output, and compared the models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/124#issuecomment-600559" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/124&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub repo: &lt;a href="https://github.com/Montana-A/ramalama-basics" rel="noopener noreferrer"&gt;https://github.com/Montana-A/ramalama-basics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Blog post: &lt;a href="https://dev.to/awa_destiny/getting-started-with-ramalama-on-fedora-1nn8"&gt;Getting Started with RamaLama on Fedora&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
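
&lt;p&gt;As a flavor of what the repo and post walk through, a minimal session might look like this (the model name is an illustrative example, not necessarily one of the models actually compared):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ramalama pull ollama://tinyllama
ramalama run ollama://tinyllama "Explain RPM packaging in one paragraph."
ramalama list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;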

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Mar 30 – Apr 01, 2026&lt;/p&gt;




&lt;h2&gt;
  
  
  RamaLama Docs: Fix Favicon
&lt;/h2&gt;

&lt;p&gt;Updated the RamaLama docs site favicon to match the website favicon, replacing the full logo with the head-only version for visual consistency across both sites.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://forge.fedoraproject.org/commops/interns/issues/109#issuecomment-606871" rel="noopener noreferrer"&gt;forge.fedoraproject.org/commops/interns/issues/109&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub pull request: &lt;a href="https://github.com/containers/ramalama/pull/2579" rel="noopener noreferrer"&gt;https://github.com/containers/ramalama/pull/2579&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; Complete - Apr 01 – Apr 03, 2026&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>linux</category>
      <category>llm</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I've Been Using Fedora for Years. I Had No Idea What It Actually Was.</title>
      <dc:creator>Awa Destiny Aghangu</dc:creator>
      <pubDate>Tue, 24 Mar 2026 16:17:04 +0000</pubDate>
      <link>https://dev.to/awa_destiny/ive-been-using-fedora-for-years-i-had-no-idea-what-it-actually-was-1nfg</link>
      <guid>https://dev.to/awa_destiny/ive-been-using-fedora-for-years-i-had-no-idea-what-it-actually-was-1nfg</guid>
      <description>&lt;p&gt;There's a particular kind of embarrassment that comes from realizing you've been using something for a long time without truly understanding it. That's where I am with Fedora right now.&lt;/p&gt;

&lt;p&gt;I've had Fedora Linux installed on machines before. I've &lt;code&gt;dnf install&lt;/code&gt;-ed packages, filed it mentally under "solid Linux distro," and moved on. It wasn't until this week, when I started engaging with the Fedora community through Outreachy, that I realized I had been looking at only the very tip of something much larger.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is the Fedora Project?
&lt;/h3&gt;

&lt;p&gt;Fedora is a community. A global one, sponsored by Red Hat, that builds and maintains free, open source software. Fedora Linux is what most people see, but underneath it there are contributors doing wildly different things: maintaining packages, writing documentation, translating content, building tools, running infrastructure, and governing the whole thing through working groups and committees.&lt;/p&gt;

&lt;p&gt;I didn't fully appreciate this until I tried to join. Creating an account sounds simple until you realize there isn't just one account. There's a Fedora Account System (FAS) account, which is supposed to be the central one, but then there's also a separate account for the wiki, and for the forums, and you start wondering if each Fedora service just quietly has its own login sitting somewhere. It genuinely took me a few confused minutes of clicking around before I understood that FAS is the anchor and most things are supposed to connect back to it. Once I found the right documentation page that explained this, it clicked immediately. The community has clearly answered this question many times before and the resources are there, you just have to land on the right one.&lt;/p&gt;

&lt;p&gt;That small experience actually taught me something about Fedora: it's a large, federated project and some of the seams show. But the community infrastructure to help newcomers exists, and it's patient.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Four Foundations
&lt;/h3&gt;

&lt;p&gt;Fedora's values are organized into what they call the Four Foundations: Freedom, Friends, Features, and First. You can read the full thinking behind each one in &lt;a href="https://docs.fedoraproject.org/en-US/project/#_our_foundations" rel="noopener noreferrer"&gt;Fedora's Foundations documentation&lt;/a&gt;, but here is what I took away as a newcomer reading through them for the first time.&lt;/p&gt;

&lt;p&gt;Freedom means Fedora only ships open source software, no exceptions. That's a real commitment, not a tagline.&lt;/p&gt;

&lt;p&gt;Friends means the community is treated as a first-class part of the project, not just a support mechanism for the software. That one surprised me. I expected technical principles. A Foundation dedicated to people felt unusual, and then it felt right.&lt;/p&gt;

&lt;p&gt;Features means Fedora is deliberately cutting edge. Things break sometimes. Updates come fast. That's not an accident, it's the point.&lt;/p&gt;

&lt;p&gt;First means Fedora is a proving ground. New ideas land here before they spread to other projects and distributions. Contributors are working on what Linux will look like, not just what it looks like now.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Find Interesting
&lt;/h3&gt;

&lt;p&gt;The RPM packaging system. Every package that enters Fedora has to meet a detailed set of guidelines covering file organization, licensing, dependencies, and more. This exists to keep quality consistent across thousands of packages maintained by different people across the world.&lt;/p&gt;
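
&lt;p&gt;Part of that checking is already automated. Packagers commonly run &lt;code&gt;rpmlint&lt;/code&gt; over a spec file or built RPM to catch common guideline violations before human review; for example (the file names here are made up for illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rpmlint mypackage.spec mypackage-1.0-1.fc42.noarch.rpm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;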

&lt;p&gt;My Outreachy project sits right here. I'll be working with &lt;em&gt;RamaLama&lt;/em&gt; and its &lt;em&gt;Retrieval Augmented Generation (RAG)&lt;/em&gt; capabilities to build a model that can detect deviations from these packaging guidelines automatically. A reviewer working through a submission shouldn't have to hold the entire guidelines document in their head. The model retrieves the relevant rule and flags the issue directly.&lt;/p&gt;

&lt;p&gt;I like that this project uses AI for something concrete and bounded. It's not AI for spectacle. It's making a specific, well-documented process easier to enforce at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Find Confusing
&lt;/h3&gt;

&lt;p&gt;Governance. There is a Fedora Council. There is FESCo, the Fedora Engineering Steering Committee. There are Special Interest Groups. There are individual maintainers with their own autonomy. I have read the pages that explain all of this and I still can't confidently tell you who you'd go to for a specific kind of decision.&lt;/p&gt;

&lt;p&gt;I think this is one of those things that only makes sense once you've watched a few real decisions happen. The structure exists and it works; Fedora has been running for over twenty years. But reading about it cold as a newcomer, it feels like a lot of overlapping authority with unclear edges. I'll probably understand it better in a month than I do right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advice for Future Outreachy Applicants
&lt;/h3&gt;

&lt;p&gt;The account setup will be slightly confusing at first. FAS is the central account and most things connect to it, but it takes a moment to realize that. Don't let the initial friction discourage you; just look for the official onboarding documentation and it becomes straightforward.&lt;/p&gt;

&lt;p&gt;Don't wait until you feel ready to engage. I almost held back this week because I felt underprepared. That instinct is wrong. The understanding you're waiting to have before you engage is actually built by engaging.&lt;/p&gt;

&lt;p&gt;Use Fedora as an OS if you haven't. There's a real difference between knowing what a project does and having actually used it. It gives you context that no amount of documentation reading will fully replace.&lt;/p&gt;

&lt;p&gt;Ask questions in public. The community is genuinely patient with newcomers, and a question asked in the open helps the next person with the same confusion.&lt;/p&gt;




&lt;p&gt;I've been using Fedora for years. This week I finally started to understand it. That gap between using something and understanding it is, I think, where most genuine learning happens.&lt;/p&gt;

&lt;p&gt;I'm glad Outreachy gave me a reason to close it.&lt;/p&gt;

</description>
      <category>outreachy</category>
      <category>fedora</category>
    </item>
  </channel>
</rss>
