<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ajani luke Kariuki</title>
    <description>The latest articles on DEV Community by Ajani luke Kariuki (@ajani_lukekariuki_79255c).</description>
    <link>https://dev.to/ajani_lukekariuki_79255c</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3708616%2F93c08d17-bd45-47f5-9487-244fef34447d.jpg</url>
      <title>DEV Community: Ajani luke Kariuki</title>
      <link>https://dev.to/ajani_lukekariuki_79255c</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ajani_lukekariuki_79255c"/>
    <language>en</language>
    <item>
      <title>Connecting Power BI to SQL Databases: A Complete Guide</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Wed, 22 Apr 2026 14:00:11 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/-connecting-power-bi-to-sql-databases-a-complete-guide-23i0</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/-connecting-power-bi-to-sql-databases-a-complete-guide-23i0</guid>
      <description>&lt;p&gt;Microsoft Power BI is one of the most widely used business intelligence tools available today. Organizations rely on it to analyze data, monitor performance, and build interactive dashboards that turn raw numbers into actionable insights. Rather than working with static spreadsheets, analysts can connect Power BI to live data sources — ensuring reports stay accurate and up to date in real time.&lt;/p&gt;

&lt;p&gt;SQL databases are a natural partner for Power BI. They store structured data in well-defined tables, support powerful querying operations like filtering, sorting, and aggregation, and underpin most modern data systems. Together, SQL databases handle the data storage and Power BI handles the storytelling.&lt;/p&gt;

&lt;p&gt;This guide walks you through connecting Power BI to both a &lt;strong&gt;local PostgreSQL database&lt;/strong&gt; and a &lt;strong&gt;cloud-hosted database on Aiven&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 1: Connecting Power BI to a Local PostgreSQL Database
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Open Power BI Desktop
&lt;/h3&gt;

&lt;p&gt;Launch Power BI Desktop on your machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Get Data
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the &lt;strong&gt;Home&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Get Data&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;PostgreSQL Database&lt;/strong&gt; from the list&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 3: Enter Connection Details
&lt;/h3&gt;

&lt;p&gt;In the connection dialog, fill in the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Server:&lt;/strong&gt; &lt;code&gt;localhost&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; your database name&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Authenticate
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Database Authentication&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter your &lt;strong&gt;username&lt;/strong&gt; and &lt;strong&gt;password&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Connect&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 5: Load Your Tables
&lt;/h3&gt;

&lt;p&gt;Once connected, you'll see a navigator pane listing your available tables. Select the ones you need — for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;customers&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;products&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sales&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;inventory&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click &lt;strong&gt;Load&lt;/strong&gt; to bring them into Power BI.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 2: Connecting Power BI to a Cloud Database (Aiven PostgreSQL)
&lt;/h2&gt;

&lt;p&gt;Cloud-hosted databases like those on &lt;strong&gt;Aiven&lt;/strong&gt; follow a similar process, but with a few extra steps to handle remote access and secure connections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install the PostgreSQL ODBC Driver
&lt;/h3&gt;

&lt;p&gt;Before anything else, make sure the PostgreSQL ODBC driver is installed on your machine. Power BI needs this driver to communicate with PostgreSQL.&lt;/p&gt;

&lt;p&gt;Download it from the official PostgreSQL FTP server:&lt;br&gt;
&lt;a href="https://www.postgresql.org/ftp/odbc/versions/" rel="noopener noreferrer"&gt;https://www.postgresql.org/ftp/odbc/versions/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Open Power BI and Get Data
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Launch &lt;strong&gt;Power BI Desktop&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Go to the &lt;strong&gt;Home&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Get Data&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 3: Select PostgreSQL Database
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Choose &lt;strong&gt;Database → PostgreSQL Database&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Connect&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Enter Your Connection Details
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Server:&lt;/strong&gt; &lt;code&gt;hostname:port&lt;/code&gt; (from your Aiven console)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database:&lt;/strong&gt; your database name&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 5: Authenticate
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Select &lt;strong&gt;Database Authentication&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter your &lt;strong&gt;username&lt;/strong&gt; and &lt;strong&gt;password&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Connect&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 6: Install the SSL Certificate
&lt;/h3&gt;

&lt;p&gt;For cloud databases, an SSL certificate is required to establish a secure connection. Download the &lt;strong&gt;CA certificate&lt;/strong&gt; from your Aiven project dashboard and configure it in your connection settings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why SSL matters:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encrypts all data in transit&lt;/li&gt;
&lt;li&gt;Protects your login credentials&lt;/li&gt;
&lt;li&gt;Blocks unauthorized access attempts&lt;/li&gt;
&lt;/ul&gt;
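&lt;p&gt;Power BI itself configures the certificate through the connector dialog and the Windows certificate store, so no code is required. If you want to sanity-check the same credentials from a command-line client first, a libpq-style URI is one way to pass the CA file; the host, port, and database below are placeholders, not real Aiven values:&lt;/p&gt;

```
psql "postgres://avnadmin:PASSWORD@pg-example.aivencloud.com:12691/defaultdb?sslmode=verify-ca&sslrootcert=ca.pem"
```

&lt;p&gt;With &lt;code&gt;sslmode=verify-ca&lt;/code&gt;, the client checks the server's certificate against the downloaded CA file before sending anything.&lt;/p&gt;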

&lt;h3&gt;
  
  
  Step 7: Load Your Data
&lt;/h3&gt;

&lt;p&gt;Once connected, select your tables from the navigator pane (e.g., &lt;code&gt;customers&lt;/code&gt;, &lt;code&gt;products&lt;/code&gt;, &lt;code&gt;sales&lt;/code&gt;, &lt;code&gt;inventory&lt;/code&gt;) and click &lt;strong&gt;Load&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 3: Data Modeling — Creating Relationships
&lt;/h2&gt;

&lt;p&gt;With your tables loaded, head to &lt;strong&gt;Model View&lt;/strong&gt; in Power BI to define relationships between them. This is what allows Power BI to filter and aggregate data correctly across tables.&lt;/p&gt;

&lt;p&gt;Set up the following relationships:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;From Table&lt;/th&gt;
&lt;th&gt;To Table&lt;/th&gt;
&lt;th&gt;Join Key&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;customers&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;sales&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CustomerID&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;products&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;sales&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ProductID&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;products&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;inventory&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ProductID&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Well-defined relationships ensure that slicers, filters, and visuals all work in sync — without them, your reports can produce misleading or incomplete results.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part 4: Building Your Dashboard
&lt;/h2&gt;

&lt;p&gt;Once your data model is in place, you can start building visuals. Here are some recommended charts by category:&lt;/p&gt;

&lt;h3&gt;
  
  
  Sales Performance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Line chart&lt;/strong&gt; — Sales trends over time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KPI card&lt;/strong&gt; — Total revenue&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bar chart&lt;/strong&gt; — Sales broken down by region&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Product Performance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bar chart&lt;/strong&gt; — Top-selling products&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pie chart&lt;/strong&gt; — Revenue share by product category&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Customer Insights
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Table&lt;/strong&gt; — Top customers ranked by revenue&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Map visual&lt;/strong&gt; — Geographic distribution of customers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Inventory Insights
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Column chart&lt;/strong&gt; — Current stock levels per product&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;KPI card&lt;/strong&gt; — Low inventory alerts&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why SQL Skills Are Essential for Power BI Analysts
&lt;/h2&gt;

&lt;p&gt;While Power BI has a point-and-click interface for building visuals, a solid understanding of SQL makes you significantly more effective. With SQL, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retrieve data efficiently&lt;/strong&gt; — Pull exactly what you need, nothing more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filter datasets&lt;/strong&gt; — Apply conditions before data even enters Power BI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Aggregate data&lt;/strong&gt; — Use functions like &lt;code&gt;SUM&lt;/code&gt;, &lt;code&gt;COUNT&lt;/code&gt;, and &lt;code&gt;AVG&lt;/code&gt; at the query level&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Join multiple tables&lt;/strong&gt; — Combine data from different sources into a single clean dataset&lt;/li&gt;
&lt;/ul&gt;
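&lt;p&gt;As a quick illustration, here is a minimal sketch of those four ideas in a single query. It uses Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; as a stand-in database, and the table and column names are invented for the example:&lt;/p&gt;

```python
import sqlite3

# In-memory stand-in for a SQL database; tables and columns are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE sales (sale_id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Asha', 'East'), (2, 'Ben', 'West');
INSERT INTO sales VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# Filter, join, and aggregate at the query level, so only the small
# summarized result would ever be handed to a BI tool.
rows = conn.execute("""
    SELECT c.region, SUM(s.amount) AS revenue
    FROM sales s
    JOIN customers c ON c.customer_id = s.customer_id
    WHERE s.amount > 40
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()

print(rows)  # [('East', 200.0), ('West', 50.0)]
```

&lt;p&gt;Against PostgreSQL the same query shape applies unchanged; only the connection layer differs.&lt;/p&gt;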

&lt;p&gt;Writing optimized SQL queries upstream means cleaner data models, faster dashboards, and more reliable analysis. SQL and Power BI aren't competing skills — they're complementary ones.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>database</category>
      <category>sql</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>ETL and ELT</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Wed, 22 Apr 2026 13:56:10 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/-eti-and-elt-15h9</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/-eti-and-elt-15h9</guid>
      <description>&lt;h1&gt;
  
  
  ETL vs ELT: Which One Should You Use and Why?
&lt;/h1&gt;

&lt;p&gt;If you're exploring a career in data engineering or analytics, you've almost certainly stumbled across these two acronyms: &lt;strong&gt;ETL&lt;/strong&gt; and &lt;strong&gt;ELT&lt;/strong&gt;. At first glance they look interchangeable, but they represent fundamentally different philosophies for moving and processing data. Let's break both down clearly.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is ETL?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ETL&lt;/strong&gt; stands for &lt;strong&gt;Extract, Transform, Load&lt;/strong&gt; — and it does exactly what it says on the tin, in that exact order. It's the traditional backbone of data integration: pull raw data from various sources, clean and reshape it, then load the polished result into a destination system.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Extract
&lt;/h3&gt;

&lt;p&gt;This is the data collection phase. Engineers pull raw, often messy data from a wide range of sources, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Relational databases&lt;/li&gt;
&lt;li&gt;APIs (Application Programming Interfaces)&lt;/li&gt;
&lt;li&gt;Web content&lt;/li&gt;
&lt;li&gt;Economic and financial feeds&lt;/li&gt;
&lt;li&gt;Real estate and weather data&lt;/li&gt;
&lt;li&gt;Surveys and interviews&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The list isn't exhaustive — any system that produces data can be a source.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Transform
&lt;/h3&gt;

&lt;p&gt;This is where the real work happens. Raw data is rarely ready for analysis straight out of the source, so transformation cleans, restructures, and enriches it. Common transformation tasks include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Cleaning&lt;/strong&gt; — Removing errors, duplicates, and null values&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structural Conversion&lt;/strong&gt; — Adding or removing columns, standardizing formats, and imposing structure on unstructured data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Destructive Transformation&lt;/strong&gt; — Dropping irrelevant or outdated fields that would otherwise clutter the dataset&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature Engineering (Attribution)&lt;/strong&gt; — Deriving new fields from existing ones, such as calculating &lt;em&gt;age&lt;/em&gt; from a &lt;em&gt;date of birth&lt;/em&gt;, or combining &lt;em&gt;revenue&lt;/em&gt; and &lt;em&gt;cost&lt;/em&gt; into a single &lt;em&gt;profit/loss&lt;/em&gt; column&lt;/li&gt;
&lt;/ul&gt;
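&lt;p&gt;The cleaning, destructive, and feature-engineering steps above can be sketched in a few lines of plain Python; the records and field names are made up for illustration:&lt;/p&gt;

```python
# Hypothetical raw extract: a duplicate, a null, and a legacy field.
raw = [
    {"id": 1, "revenue": 500.0, "cost": 320.0, "legacy_code": "X1"},
    {"id": 1, "revenue": 500.0, "cost": 320.0, "legacy_code": "X1"},  # duplicate
    {"id": 2, "revenue": None,  "cost": 100.0, "legacy_code": "X2"},  # null value
    {"id": 3, "revenue": 250.0, "cost": 300.0, "legacy_code": "X3"},
]

def transform(records):
    seen, out = set(), []
    for r in records:
        # Data cleaning: skip duplicates and rows with a null revenue.
        if r["id"] in seen or r["revenue"] is None:
            continue
        seen.add(r["id"])
        # Feature engineering: derive a profit/loss column from revenue and cost.
        # Destructive transformation: legacy_code is deliberately not carried over.
        out.append({"id": r["id"], "profit": r["revenue"] - r["cost"]})
    return out

print(transform(raw))  # [{'id': 1, 'profit': 180.0}, {'id': 3, 'profit': -50.0}]
```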

&lt;p&gt;Popular tools for this phase include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microsoft Excel &amp;amp; Power BI&lt;/li&gt;
&lt;li&gt;dbt (Data Build Tool) for SQL-based transformations&lt;/li&gt;
&lt;li&gt;Pandas (Python)&lt;/li&gt;
&lt;li&gt;Informatica&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data cleaning is arguably the most critical step in any data pipeline — garbage in, garbage out.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Load
&lt;/h3&gt;

&lt;p&gt;Once transformed, the data is loaded into a target system for storage, analysis, or reporting. Common destinations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data warehouses&lt;/li&gt;
&lt;li&gt;Data lakes&lt;/li&gt;
&lt;li&gt;Staging areas&lt;/li&gt;
&lt;li&gt;Analytics and reporting repositories&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What Is ELT?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ELT&lt;/strong&gt; flips the middle two steps: &lt;strong&gt;Extract, Load, Transform&lt;/strong&gt;. Instead of transforming data before it reaches the destination, you load it in its raw form first and transform it &lt;em&gt;inside&lt;/em&gt; the target system — typically a modern cloud data warehouse.&lt;/p&gt;

&lt;p&gt;This approach has grown rapidly in popularity for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt; — Cloud platforms like BigQuery, Snowflake, and Redshift are purpose-built to handle massive datasets and can run transformations at high speed and relatively low cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agility&lt;/strong&gt; — Since raw data is already available in the warehouse, analysts and data scientists can build and iterate on transformation models on the fly as business needs evolve, without touching the ingestion pipeline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Pipelines&lt;/strong&gt; — Fewer intermediate steps means fewer tools, less infrastructure to maintain, and a leaner overall workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster Ingestion&lt;/strong&gt; — Data reaches the target system almost immediately after extraction, which is essential for real-time or near-real-time reporting use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unstructured Data Support&lt;/strong&gt; — ELT is particularly well-suited for unstructured formats like JSON, images, and video files, which can be stored raw and only processed when a specific analysis demands it.&lt;/li&gt;
&lt;/ul&gt;
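&lt;p&gt;To make the order of operations concrete, here is a toy ELT pipeline using Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; as a stand-in "warehouse": raw rows are loaded untouched, and the transformation runs afterwards inside the engine. All names and values are illustrative:&lt;/p&gt;

```python
import sqlite3

wh = sqlite3.connect(":memory:")  # stand-in for a cloud warehouse

# Extract + Load: raw data lands in the warehouse as-is, bad values included.
wh.execute("CREATE TABLE raw_events (user_id INTEGER, amount TEXT)")
wh.executemany("INSERT INTO raw_events VALUES (?, ?)",
               [(1, "10.5"), (1, "4.5"), (2, "not_a_number"), (2, "7.0")])

# Transform: runs later, inside the warehouse, using its own SQL engine.
wh.execute("""
    CREATE TABLE user_spend AS
    SELECT user_id, SUM(CAST(amount AS REAL)) AS total
    FROM raw_events
    WHERE amount GLOB '*[0-9]*'   -- keep only values containing a digit
    GROUP BY user_id
""")
spend = wh.execute("SELECT * FROM user_spend ORDER BY user_id").fetchall()
print(spend)  # [(1, 15.0), (2, 7.0)]
```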




&lt;h2&gt;
  
  
  ETL vs ELT: When to Use Which?
&lt;/h2&gt;

&lt;p&gt;There's no universal answer, but here are some common scenarios that favour one approach over the other:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Recommended Approach&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Strict compliance or data governance requirements&lt;/td&gt;
&lt;td&gt;ETL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Real-time or near-real-time analytics&lt;/td&gt;
&lt;td&gt;ELT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Working with IoT sensor streams&lt;/td&gt;
&lt;td&gt;ETL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unstructured or semi-structured data&lt;/td&gt;
&lt;td&gt;ELT&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Experimenting with new data sources&lt;/td&gt;
&lt;td&gt;ETL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Large-scale cloud analytics&lt;/td&gt;
&lt;td&gt;ELT&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  IoT Applications
&lt;/h3&gt;

&lt;p&gt;IoT use cases — think sensor networks or connected devices — tend to lean toward ETL because data often arrives in proprietary protocols that need converting to standard formats before they're useful. Deduplication, filling missing values, and filtering high-frequency noise are all easier to handle &lt;em&gt;before&lt;/em&gt; the data hits the cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Experimentation &amp;amp; Discovery
&lt;/h3&gt;

&lt;p&gt;When data engineers are exploring new sources or testing hypotheses, ETL's multi-tool pipeline offers a granular view of data at each stage, making it easier to debug and validate assumptions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complex, Multi-Source Analytics
&lt;/h3&gt;

&lt;p&gt;In large organisations, it's common to run &lt;em&gt;both&lt;/em&gt; — ETL for certain source systems or legacy databases, and ELT for cloud-native workloads. The two aren't mutually exclusive.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Choosing between ETL and ELT ultimately comes down to your specific context. Key factors to weigh include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt; — Cloud ELT can be more economical at scale, but compute costs for in-warehouse transformations add up&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt; — ELT wins on ingestion speed; ETL can be more efficient for targeted, pre-defined transformations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt; — ELT gives analysts more freedom to iterate; ETL offers tighter control&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt; — ETL keeps sensitive data out of the warehouse until it's been cleaned; ELT stores raw data that may include sensitive fields&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team skills&lt;/strong&gt; — The right tool is also the one your team knows how to use well&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Neither ETL nor ELT is inherently superior. Used correctly, both are powerful — and in practice, many modern data stacks use a combination of the two.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>beginners</category>
      <category>data</category>
      <category>dataengineering</category>
    </item>
    <item>
      <title>Connecting Power BI to PostgreSQL</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Wed, 18 Mar 2026 06:21:32 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/connecting-powerbi-to-postgresql-3j6f</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/connecting-powerbi-to-postgresql-3j6f</guid>
      <description>&lt;h1&gt;
  
  
  Connecting Power BI to SQL Databases: A Summary
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Connecting Power BI directly to a database creates a reliable data pipeline. Instead of manually importing files, data can be refreshed automatically, allowing analysts to focus on insights rather than data handling.&lt;/p&gt;




&lt;h2&gt;
  
  
  Connecting Power BI to a Local PostgreSQL Database
&lt;/h2&gt;

&lt;p&gt;The process of connecting to a local database follows a few simple steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Power BI and click "Get Data"&lt;/li&gt;
&lt;li&gt;Select PostgreSQL Database from the database options&lt;/li&gt;
&lt;li&gt;Enter the server details (e.g. localhost or localhost:port)&lt;/li&gt;
&lt;li&gt;Input the database name&lt;/li&gt;
&lt;li&gt;Authenticate using your username and password&lt;/li&gt;
&lt;li&gt;Select tables to load, or transform the data using Power Query&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows you to import and work with your database tables directly in Power BI.&lt;/p&gt;




&lt;h2&gt;
  
  
  Connecting to a Cloud Database (Aiven PostgreSQL)
&lt;/h2&gt;

&lt;p&gt;Cloud platforms like Aiven allow databases to be hosted online instead of locally.&lt;/p&gt;

&lt;p&gt;To connect:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in and obtain your connection details&lt;/li&gt;
&lt;li&gt;Download the SSL certificate for secure communication&lt;/li&gt;
&lt;li&gt;Use Power BI's "Get Data" → PostgreSQL Database&lt;/li&gt;
&lt;li&gt;Enter the host, port, and database name&lt;/li&gt;
&lt;li&gt;Authenticate and load your tables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;SSL certificates ensure secure data transfer over the internet when connecting to cloud databases.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building the Data Model
&lt;/h2&gt;

&lt;p&gt;Once data is loaded:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Power BI detects relationships using primary and foreign keys&lt;/li&gt;
&lt;li&gt;Relationships can also be created manually&lt;/li&gt;
&lt;li&gt;Proper modeling connects fact tables (e.g. sales) with dimension tables (e.g. customers, products)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure allows Power BI to generate meaningful insights.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why SQL Skills Matter for Power BI Analysts
&lt;/h2&gt;

&lt;p&gt;SQL is essential for effective data analysis in Power BI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data retrieval: fetch only relevant data using queries&lt;/li&gt;
&lt;li&gt;Full control: access exactly the data needed without relying on exports&lt;/li&gt;
&lt;li&gt;Handling complexity: perform advanced logic and cleaning more efficiently&lt;/li&gt;
&lt;li&gt;Industry standard: widely used across all data platforms&lt;/li&gt;
&lt;li&gt;Data preparation: simplify joins, transformations, and calculations before loading into Power BI&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Strong data modeling and SQL knowledge enhance performance, improve data handling, and enable deeper insights. SQL acts as the foundation that allows analysts to interact efficiently with databases before visualization in Power BI.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>database</category>
      <category>postgres</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>PostgreSQL Joins and Window Functions</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Mon, 02 Mar 2026 12:12:26 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/postgresql-joins-and-window-function-2mnj</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/postgresql-joins-and-window-function-2mnj</guid>
      <description>&lt;h2&gt;
  
  
  Understanding JOINS in PostgreSQL
&lt;/h2&gt;

&lt;p&gt;Joins let you merge data from multiple tables (or views) by linking them through related columns. The choice of join type depends mainly on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which rows you want to keep (including unmatched ones)&lt;/li&gt;
&lt;li&gt;How the tables relate to each other&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Main Join Types
&lt;/h2&gt;

&lt;h2&gt;
  
  
  CROSS JOIN
&lt;/h2&gt;

&lt;p&gt;Creates the Cartesian product: every row from the first table pairs with every row from the second. No &lt;code&gt;ON&lt;/code&gt; clause is needed.&lt;/p&gt;

&lt;p&gt;Example result: 5 rows × 5 rows = 25 rows.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT p.project_name, e.name, e.salary
FROM sales_data.projects p
CROSS JOIN sales_data.employees e;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  INNER JOIN
&lt;/h2&gt;

&lt;p&gt;Returns only matching rows from both tables; non-matching rows are excluded.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT emp.name, dep.department_name
FROM sales_data.employees emp
INNER JOIN sales_data.departments dep
ON emp.department_id = dep.department_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  LEFT JOIN (LEFT OUTER JOIN)
&lt;/h2&gt;

&lt;p&gt;Keeps all rows from the left table, plus matching rows from the right.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT *
FROM sales_data.projects p
LEFT JOIN sales_data.employees e
ON p.employee_id = e.employee_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  RIGHT JOIN (RIGHT OUTER JOIN)
&lt;/h2&gt;

&lt;p&gt;Keeps all rows from the right table, plus matching rows from the left.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT *
FROM sales_data.projects p
RIGHT JOIN sales_data.employees e
ON p.employee_id = e.employee_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  FULL JOIN (FULL OUTER JOIN)
&lt;/h2&gt;

&lt;p&gt;Returns all rows from both tables, placing NULL where no match exists.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT *
FROM sales_data.projects p
FULL JOIN sales_data.employees e
ON p.employee_id = e.employee_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  Enhancing Joins
&lt;/h2&gt;

&lt;p&gt;Joins can be combined with filtering, sorting, and other clauses:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT *
FROM sales_data.projects p
FULL JOIN sales_data.employees e ON p.employee_id = e.employee_id
WHERE e.employee_id &amp;lt; 4
ORDER BY e.employee_id ASC NULLS LAST,
         p.project_name ASC;
&lt;/code&gt;&lt;/pre&gt;
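&lt;p&gt;The difference between the join types is easiest to see with a tiny dataset. This sketch mimics the &lt;code&gt;projects&lt;/code&gt;/&lt;code&gt;employees&lt;/code&gt; example using Python's built-in &lt;code&gt;sqlite3&lt;/code&gt;; the rows are invented:&lt;/p&gt;

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE employees (employee_id INTEGER, name TEXT);
CREATE TABLE projects  (project_name TEXT, employee_id INTEGER);
INSERT INTO employees VALUES (1, 'Amina'), (2, 'Brian'), (3, 'Chen');
INSERT INTO projects  VALUES ('Alpha', 1), ('Beta', 1), ('Gamma', NULL);
""")

inner = db.execute("""
    SELECT p.project_name, e.name
    FROM projects p
    INNER JOIN employees e ON p.employee_id = e.employee_id
""").fetchall()

left = db.execute("""
    SELECT p.project_name, e.name
    FROM projects p
    LEFT JOIN employees e ON p.employee_id = e.employee_id
""").fetchall()

print(len(inner))  # 2 -- 'Gamma' has no matching employee, so it drops out
print(len(left))   # 3 -- LEFT JOIN keeps 'Gamma', with NULL for the employee
```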

&lt;h2&gt;
  
  
  Window Functions in PostgreSQL
&lt;/h2&gt;

&lt;p&gt;Window functions compute values over a "window" of rows related to the current row — without collapsing rows like GROUP BY does.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Ranking Functions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ROW_NUMBER()&lt;/code&gt; — assigns unique, consecutive numbers (1, 2, 3, …; no ties)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;RANK()&lt;/code&gt; — same values get the same rank, but numbers are skipped after ties (1, 1, 3, …)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DENSE_RANK()&lt;/code&gt; — same values get the same rank, with no skips (1, 1, 2, …)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NTILE(n)&lt;/code&gt; — divides rows into n roughly equal buckets (useful for quartiles and percentiles)&lt;/li&gt;
&lt;/ul&gt;

&lt;pre&gt;&lt;code&gt;SELECT *,
       RANK()       OVER (ORDER BY salary DESC NULLS FIRST) AS rank_col,
       DENSE_RANK() OVER (ORDER BY salary DESC NULLS FIRST) AS dense_rank_col,
       ROW_NUMBER() OVER (ORDER BY salary DESC NULLS FIRST) AS row_num
FROM sales_data.working_hub;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
  
  
  2. Aggregate Functions in Windows
&lt;/h3&gt;

&lt;p&gt;Run aggregates (&lt;code&gt;SUM&lt;/code&gt;, &lt;code&gt;AVG&lt;/code&gt;, &lt;code&gt;COUNT&lt;/code&gt;, &lt;code&gt;MIN&lt;/code&gt;, &lt;code&gt;MAX&lt;/code&gt;, …) across the window without grouping.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT *,
       SUM(e.salary) OVER () AS total_salary_company_wide
FROM sales_data.projects p
FULL JOIN sales_data.employees e ON p.employee_id = e.employee_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
  
  
  3. Navigation / Value Functions
&lt;/h3&gt;

&lt;p&gt;Compare rows within the ordered window:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;LAG(col)&lt;/code&gt; — value from the previous row&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LEAD(col)&lt;/code&gt; — value from the next row&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;FIRST_VALUE(col)&lt;/code&gt; — first value in the window&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;LAST_VALUE(col)&lt;/code&gt; — last value in the window&lt;/li&gt;
&lt;/ul&gt;

&lt;pre&gt;&lt;code&gt;SELECT *,
       LAG(salary)  OVER (ORDER BY salary DESC) AS prev_salary,
       LEAD(salary) OVER (ORDER BY salary DESC) AS next_salary
FROM sales_data.working_hub;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;
  
  
  4. PARTITION BY
&lt;/h3&gt;

&lt;p&gt;Splits data into groups (like &lt;code&gt;GROUP BY&lt;/code&gt;), but keeps all rows intact.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;SELECT *,
       SUM(e.salary) OVER (PARTITION BY p.project_name) AS salary_per_project
FROM sales_data.projects p
FULL JOIN sales_data.employees e ON p.employee_id = e.employee_id;
&lt;/code&gt;&lt;/pre&gt;
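&lt;p&gt;The ranking and partitioning behaviour can be verified with Python's built-in &lt;code&gt;sqlite3&lt;/code&gt; (window functions require SQLite 3.25+, which ships with modern Python builds); the staff table is invented for the demo:&lt;/p&gt;

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE staff (name TEXT, dept TEXT, salary INTEGER);
INSERT INTO staff VALUES
  ('Amina', 'Sales', 900), ('Brian', 'Sales', 700),
  ('Chen',  'IT',    900), ('Dina',  'IT',    900);
""")

rows = db.execute("""
    SELECT name,
           RANK()       OVER (ORDER BY salary DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY salary DESC) AS dense_rnk,
           SUM(salary)  OVER (PARTITION BY dept)    AS dept_total
    FROM staff
    ORDER BY salary DESC, name
""").fetchall()

for row in rows:
    print(row)
# The three 900-salary rows tie at RANK 1; Brian then gets RANK 4
# but DENSE_RANK 2, and dept_total sums salaries per department.
```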

</description>
      <category>database</category>
      <category>postgres</category>
      <category>sql</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How Analysts Turn Messy Data into Action with Power BI</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Mon, 09 Feb 2026 05:53:16 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/how-analysts-turn-messy-data-into-action-with-power-bi-37eb</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/how-analysts-turn-messy-data-into-action-with-power-bi-37eb</guid>
      <description>&lt;h2&gt;
  
  
  The Analyst’s Everyday Challenge
&lt;/h2&gt;

&lt;p&gt;Data analysts are constantly tasked with converting chaotic, unstructured data into insights that decision-makers can actually use. The real value of business intelligence lies in this translation—moving from raw data, to clear visuals, and finally to actions that improve business outcomes. Power BI plays a key role in enabling this end-to-end journey.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Reality of Real-World Data
&lt;/h2&gt;

&lt;p&gt;In most organizations, data is far from perfect. Analysts often deal with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data coming from multiple, unconnected systems (CRMs, ERPs, Excel files)&lt;/li&gt;
&lt;li&gt;Inconsistent naming conventions and formats&lt;/li&gt;
&lt;li&gt;Missing values and duplicate entries&lt;/li&gt;
&lt;li&gt;Uneven or irregular time-based data&lt;/li&gt;
&lt;li&gt;Semi-structured or unstructured sources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before analysis can begin, this data must be cleaned and aligned.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 1: Data Preparation with Power Query
&lt;/h2&gt;

&lt;p&gt;Power Query acts as Power BI’s ETL engine, allowing analysts to connect to external data sources and reshape them into analysis-ready datasets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Typical workflow:&lt;/strong&gt;&lt;br&gt;
&lt;code&gt;Get Data → Choose source → Connect&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;At this stage, analysts profile the data to understand its quality and structure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Transformation Techniques
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standardization:&lt;/strong&gt; Making dates, currencies, and categories consistent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pivoting / Unpivoting:&lt;/strong&gt; Reshaping data for better analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fuzzy Matching:&lt;/strong&gt; Merging datasets with imperfect keys&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Columns:&lt;/strong&gt; Creating calculated fields during import&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameters:&lt;/strong&gt; Building flexible, refreshable data connections&lt;/li&gt;
&lt;/ul&gt;
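&lt;p&gt;Power Query's fuzzy merge is a built-in GUI feature, but the underlying idea of matching imperfect keys can be sketched with Python's standard &lt;code&gt;difflib&lt;/code&gt;; the company names below are fabricated:&lt;/p&gt;

```python
from difflib import get_close_matches

crm_names   = ["Acme Corp", "Globex Ltd", "Initech"]
sales_names = ["ACME Corporation", "Globex Limited", "Initech Inc."]

def fuzzy_key(name, candidates, cutoff=0.5):
    """Return the closest candidate key, or None if nothing is similar enough."""
    lowered = {c.lower(): c for c in candidates}          # case-insensitive match
    hits = get_close_matches(name.lower(), list(lowered), n=1, cutoff=cutoff)
    return lowered[hits[0]] if hits else None

mapping = {n: fuzzy_key(n, crm_names) for n in sales_names}
print(mapping)
# {'ACME Corporation': 'Acme Corp', 'Globex Limited': 'Globex Ltd',
#  'Initech Inc.': 'Initech'}
```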




&lt;h2&gt;
  
  
  Phase 2: Answering Business Questions with DAX
&lt;/h2&gt;

&lt;p&gt;DAX (Data Analysis Expressions) is what turns prepared data into meaningful insights. Without DAX, reports remain descriptive; with it, they become analytical and actionable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key DAX Concepts
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Measures:&lt;/strong&gt; Calculations evaluated dynamically (e.g., Total Profit)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Calculated Columns:&lt;/strong&gt; Values computed and stored at the row level&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;DAX shifts analysis from &lt;em&gt;what happened&lt;/em&gt; to &lt;em&gt;why it happened&lt;/em&gt; and &lt;em&gt;what should happen next&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Context Matters
&lt;/h3&gt;

&lt;p&gt;DAX automatically adapts calculations based on filters and selections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Viewing a specific category recalculates metrics for that category only&lt;/li&gt;
&lt;li&gt;Drilling into a specific month updates results accordingly&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Time Intelligence
&lt;/h3&gt;

&lt;p&gt;One of DAX’s strongest capabilities is time intelligence, which supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Year-over-year comparisons&lt;/li&gt;
&lt;li&gt;Cumulative totals&lt;/li&gt;
&lt;li&gt;Moving averages (e.g., 30-day sales trends)&lt;/li&gt;
&lt;/ul&gt;
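&lt;p&gt;These patterns can be sketched with DAX time-intelligence functions, assuming a measure named &lt;code&gt;[Total Sales]&lt;/code&gt; and a marked date table &lt;code&gt;'Date'&lt;/code&gt; (both names are illustrative):&lt;/p&gt;

```dax
-- Cumulative total from the start of the year to the current date context
Sales YTD = TOTALYTD([Total Sales], 'Date'[Date])

-- Same period last year, for year-over-year comparison
Sales PY = CALCULATE([Total Sales], SAMEPERIODLASTYEAR('Date'[Date]))
```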




&lt;h2&gt;
  
  
  Phase 3: Designing Dashboards People Actually Understand
&lt;/h2&gt;

&lt;p&gt;This phase focuses on clarity. A good dashboard should communicate insights in under half a minute.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Pyramid Layout
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Top:&lt;/strong&gt; Core KPIs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Middle:&lt;/strong&gt; Trends and comparisons&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bottom:&lt;/strong&gt; Detailed data for deeper analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Slicers are added to allow users to filter data easily, and every visual should answer a specific business question.&lt;/p&gt;




&lt;h2&gt;
  
  
  Phase 4: Turning Insights into Action
&lt;/h2&gt;

&lt;p&gt;The final step is where dashboards create real impact. Power BI can trigger actions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sending alerts when KPIs cross thresholds&lt;/li&gt;
&lt;li&gt;Creating tasks in Teams or Outlook&lt;/li&gt;
&lt;li&gt;Updating CRM systems&lt;/li&gt;
&lt;li&gt;Exporting data to downstream tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Success is no longer measured by report views, but by decisions made and actions taken.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion: Analysts as Decision Enablers
&lt;/h2&gt;

&lt;p&gt;The true purpose of analytics is not advanced formulas or polished visuals. Decision-makers care about outcomes, not the technical steps behind them. An analyst’s job is complete only when a stakeholder can confidently say:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“I know what action to take next.”&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>data</category>
      <category>dataengineering</category>
      <category>productivity</category>
    </item>
    <item>
<title>Schemas and Data Modeling in Power BI</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Mon, 02 Feb 2026 07:32:29 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/schemas-and-data-modeling-in-power-bi-4p1b</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/schemas-and-data-modeling-in-power-bi-4p1b</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Star schema&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Snowflake schema&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Relationships&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Facts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dimension Tables&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Star Schema vs Snowflake Schema
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;A star schema&lt;/em&gt;&lt;/strong&gt; is defined as the simplest data warehouse schema where one or more fact tables reference any number of dimension tables in a star-like structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;A snowflake schema&lt;/em&gt;&lt;/strong&gt; is a more normalized version of the star schema, where dimension tables are broken down into further tables.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Fact Tables:&lt;/strong&gt;&lt;/em&gt; Tall and narrow, containing measurable, quantitative data (e.g., unit price, quantity).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Dimension Tables:&lt;/strong&gt;&lt;/em&gt; Short and wide, containing descriptive attributes used for filtering and grouping (e.g., product name, location).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Relationships&lt;/strong&gt;&lt;/em&gt;: Power BI typically uses 1-to-many relationships, where dimension tables connect to the fact table, ensuring data integrity. &lt;/p&gt;
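&lt;p&gt;Once a 1-to-many relationship exists, DAX can traverse it. As a sketch (the &lt;code&gt;Products&lt;/code&gt; table and &lt;code&gt;Category&lt;/code&gt; column are illustrative names, not from the article):&lt;/p&gt;

```dax
-- Calculated column on the fact table: pull a dimension attribute
-- across the many-to-one relationship to the Products table
Product Category = RELATED(Products[Category])
```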

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Schemas&lt;/strong&gt;&lt;/em&gt; organize data into a central, narrow fact table (containing measures/keys) connected to surrounding dimension tables (describing entities) using one-to-many relationships. &lt;/p&gt;

&lt;h1&gt;
  
  
  Power BI: Beginner’s Guide to Data Modeling and Schemas — Summary
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Modern organisations rely heavily on insights derived from raw data to guide decision-making and remain competitive. Business Intelligence (BI) tools play a crucial role in transforming this data into meaningful insights, and &lt;strong&gt;Power BI&lt;/strong&gt; stands out as a powerful, visualisation-friendly solution.&lt;/p&gt;

&lt;p&gt;Power BI enables both technical and non-technical users to understand data through intuitive visuals such as charts, graphs, and dashboards. Central to its effectiveness are &lt;strong&gt;data models&lt;/strong&gt; and &lt;strong&gt;schemas&lt;/strong&gt;, which structure data for efficient analysis.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is Business Intelligence (BI)?
&lt;/h2&gt;

&lt;p&gt;Business Intelligence refers to tools, techniques, and processes used to analyse organisational data and support strategic and operational decisions.&lt;/p&gt;

&lt;p&gt;Power BI supports BI by providing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Descriptive analytics&lt;/li&gt;
&lt;li&gt;Interactive reports&lt;/li&gt;
&lt;li&gt;Dashboards for decision-makers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Well-designed data models ensure that insights are accurate, timely, and cost-effective.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is a Schema in Power BI?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;schema&lt;/strong&gt; defines how data is structured and how tables relate to each other within a data model.&lt;/p&gt;

&lt;p&gt;Schemas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improve query performance&lt;/li&gt;
&lt;li&gt;Enhance reporting efficiency&lt;/li&gt;
&lt;li&gt;Enable better data analysis&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The two main schemas used in Power BI are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Star Schema&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Snowflake Schema&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Star Schema
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Star Schema&lt;/strong&gt; is the most common and beginner-friendly schema in Power BI. It consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One central &lt;strong&gt;fact table&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Multiple surrounding &lt;strong&gt;dimension tables&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The structure resembles a star.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt;: Easy to understand and use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: New dimensions or facts can be added easily&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Fewer table joins result in faster queries&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Dimension Tables vs Fact Tables
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Dimension Tables
&lt;/h3&gt;

&lt;p&gt;Dimension tables store &lt;strong&gt;descriptive attributes&lt;/strong&gt; that provide context to data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Characteristics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contain primary keys&lt;/li&gt;
&lt;li&gt;Include descriptive fields (e.g., product name, category)&lt;/li&gt;
&lt;li&gt;Used for filtering and grouping data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example attributes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product Name&lt;/li&gt;
&lt;li&gt;Category&lt;/li&gt;
&lt;li&gt;Price&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Fact Tables
&lt;/h3&gt;

&lt;p&gt;Fact tables store &lt;strong&gt;quantitative, measurable data&lt;/strong&gt; related to business activities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Characteristics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contain foreign keys linking to dimension tables&lt;/li&gt;
&lt;li&gt;Store numerical measures (e.g., sales amount, quantity sold)&lt;/li&gt;
&lt;li&gt;Each row represents a transaction or event&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Key Differences
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Dimension Table&lt;/th&gt;
&lt;th&gt;Fact Table&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Purpose&lt;/td&gt;
&lt;td&gt;Provides context&lt;/td&gt;
&lt;td&gt;Records transactions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structure&lt;/td&gt;
&lt;td&gt;Fewer rows, more attributes&lt;/td&gt;
&lt;td&gt;Many rows, fewer attributes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Type&lt;/td&gt;
&lt;td&gt;Descriptive&lt;/td&gt;
&lt;td&gt;Numerical&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Snowflake Schema
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;Snowflake Schema&lt;/strong&gt; is an extension of the star schema where dimension tables are further divided into sub-dimensions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Strengths
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Faster data retrieval in some cases&lt;/li&gt;
&lt;li&gt;Improved data integrity&lt;/li&gt;
&lt;li&gt;Better data normalization&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Weaknesses
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Higher initial setup cost&lt;/li&gt;
&lt;li&gt;More complex structure&lt;/li&gt;
&lt;li&gt;Less flexible for future changes&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Importance of Good Data Models
&lt;/h2&gt;

&lt;p&gt;Well-designed data models are critical for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accurate reporting&lt;/li&gt;
&lt;li&gt;High-performance dashboards&lt;/li&gt;
&lt;li&gt;Reliable KPI tracking&lt;/li&gt;
&lt;li&gt;Effective decision-making&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without proper data modeling, organisations risk inaccurate insights and poor strategic execution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Power BI leverages strong data modeling principles—particularly star and snowflake schemas—to make data analysis accessible, efficient, and impactful. Creating good data models is essential for building reliable dashboards and enabling organisations to make informed, data-driven decisions.&lt;/p&gt;

</description>
    </item>
    <item>
<title>MS Excel for Beginners</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Sun, 25 Jan 2026 16:51:41 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/msexcel-for-beginners-4d4m</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/msexcel-for-beginners-4d4m</guid>
      <description>&lt;p&gt;Microsoft Excel is a spreadsheet program made by Microsoft that helps you store, organize, calculate, and analyze data in a clear, structured way.&lt;/p&gt;

&lt;h1&gt;
  
  
  Uses of Excel
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Data entry &amp;amp; organization
&lt;/h2&gt;

&lt;p&gt;You enter information in rows and columns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Calculations
&lt;/h2&gt;

&lt;p&gt;Excel can automatically do math using formulas and functions (e.g. add totals, calculate averages, percentages).&lt;/p&gt;

&lt;h2&gt;
  
  
  Data analysis
&lt;/h2&gt;

&lt;p&gt;You can sort, filter, and summarize large amounts of data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Charts &amp;amp; graphs
&lt;/h2&gt;

&lt;p&gt;Excel turns numbers into visual charts like bar charts, pie charts, and line graphs.&lt;/p&gt;

&lt;h1&gt;
  
  
  COMMON FEATURES OF EXCEL
&lt;/h1&gt;

&lt;p&gt;Excel has many features that make it easier to get output from spreadsheets.&lt;br&gt;
&lt;strong&gt;Common features include:&lt;/strong&gt;&lt;br&gt;
 Formulas (e.g. =A1+A2)&lt;/p&gt;

&lt;p&gt;Functions (e.g. SUM, AVERAGE, IF)&lt;/p&gt;

&lt;p&gt;Formatting (colors, borders, fonts)&lt;/p&gt;

&lt;p&gt;Sorting &amp;amp; filtering data&lt;/p&gt;

&lt;p&gt;PivotTables for advanced analysis&lt;/p&gt;

&lt;h2&gt;
  
  
  Decisions with Logical Functions
&lt;/h2&gt;

&lt;p&gt;Logical functions help you categorise data automatically based on rules.&lt;/p&gt;

&lt;p&gt;The IF Function The IF function performs a test: it returns one value if the test is true, and a different value if it is false.&lt;/p&gt;

&lt;p&gt;Example: Imagine you want to categorise salaries. If the salary in cell E2 exceeds 12040, it is "High"; otherwise, it is "Low".&lt;/p&gt;

&lt;p&gt;Formula: =IF(E2 &amp;gt; 12040, "High", "Low").&lt;/p&gt;

&lt;p&gt;Nested IFs. If you have more than two categories (e.g., Old, Middle-aged, Young), you can use a Nested IF, which places a second IF function inside the first one.&lt;/p&gt;
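&lt;p&gt;For example, a nested IF covering those three categories might look like this (the cell reference and age thresholds are illustrative):&lt;/p&gt;

```excel
=IF(F2 > 60, "Old", IF(F2 > 35, "Middle-aged", "Young"))
```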

&lt;p&gt;AND / OR Logic You can combine IF with AND (where both conditions must be met) or OR (where at least one condition must be met).&lt;/p&gt;

&lt;p&gt;AND Example: Assign a bonus only if experience &amp;gt; 30 years AND projects &amp;gt; 10.&lt;/p&gt;

&lt;p&gt;Formula: =IF(AND(P2 &amp;gt; 30, D2 &amp;gt; 10), "Assign Bonus", "Do not Assign Bonus").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pivot Tables&lt;/strong&gt;&lt;br&gt;
Pivot tables are the ultimate tool for summarising data. They allow you to aggregate thousands of rows into a clear summary table without writing complex formulas.&lt;/p&gt;

&lt;p&gt;How to create one:&lt;/p&gt;

&lt;p&gt;Click a single cell inside your data range (avoid selecting the whole sheet).&lt;/p&gt;

&lt;p&gt;Go to Insert &amp;gt; Pivot Table.&lt;/p&gt;

&lt;p&gt;Drag and Drop fields:&lt;/p&gt;

&lt;p&gt;Rows: For categories (e.g., Department).&lt;/p&gt;

&lt;p&gt;Values: For numbers to calculate (e.g., Sum of Salary, Count of Employees).&lt;/p&gt;

&lt;p&gt;Interactive Slicers: To make your report interactive, insert a Slicer. This is a visual button menu that filters your Pivot Table instantly when clicked.&lt;/p&gt;

&lt;p&gt;A Pivot Table with a Slicer for 'Department' next to it.&lt;/p&gt;

</description>
      <category>database</category>
    </item>
    <item>
      <title>Use of Git to push/pull code, track changes and version control</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Sat, 17 Jan 2026 06:58:01 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/use-of-git-to-pushpull-code-track-changes-and-version-control-1e1b</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/use-of-git-to-pushpull-code-track-changes-and-version-control-1e1b</guid>
      <description>&lt;p&gt;Setting up Git and using it for version control involves a few key steps, including installing Git, creating a repository, making commits, and using push and pull commands to sync your work with a remote repository. Here's a detailed guide to help you get started:&lt;br&gt;
&lt;strong&gt;Firstly&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Push and pull
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Push&lt;/strong&gt; refers to sending your local changes (&lt;strong&gt;code or data&lt;/strong&gt;) to a remote system or repository (&lt;strong&gt;GitHub&lt;/strong&gt;).&lt;br&gt;
&lt;strong&gt;Pull&lt;/strong&gt; refers to fetching and integrating changes from the remote system back to your local environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set Up Git
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;After installing Git, you need to configure it with your user information,
using commands like: &lt;code&gt;git config --global user.name "your name"&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnniw1sp305xpxkpplcqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnniw1sp305xpxkpplcqs.png" alt=" " width="433" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git config --global user.email "your email"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1khqa7iydxub7o3f51zp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1khqa7iydxub7o3f51zp.png" alt=" " width="631" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next are the Linux commands.&lt;br&gt;
&lt;strong&gt;First, we make a directory&lt;/strong&gt;&lt;br&gt;
with this command: &lt;code&gt;mkdir project1&lt;/code&gt;&lt;br&gt;
(project1 being the name of the directory)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95afv20euedp5nz379dh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95afv20euedp5nz379dh.png" alt=" " width="393" height="48"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, change into the directory&lt;br&gt;
with: &lt;code&gt;cd project1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleynzbsitkdo8hurowz7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleynzbsitkdo8hurowz7.png" alt=" " width="457" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a repository
&lt;/h2&gt;

&lt;p&gt;After moving into project1, initialise a repository&lt;br&gt;
with: &lt;code&gt;git init&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6frr3mcko07dd24po628.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6frr3mcko07dd24po628.png" alt=" " width="736" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeo8hfweou9isbdzghya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeo8hfweou9isbdzghya.png" alt=" " width="800" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Making commits
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Add files to the repository
&lt;/h2&gt;

&lt;p&gt;Use the &lt;code&gt;git add .&lt;/code&gt; command to stage the files for commit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh7zhu5bjcjzijf2pa0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnh7zhu5bjcjzijf2pa0e.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmtz1i7nqorlg7ygspa9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffmtz1i7nqorlg7ygspa9.png" alt=" " width="535" height="73"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next, commit&lt;/em&gt;&lt;br&gt;
using: &lt;code&gt;git commit -m "Add readme.md"&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pushing changes
&lt;/h2&gt;

&lt;p&gt;To push commits to a remote repository, you must first create a remote repository (e.g. on GitHub) and link it to your local repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18evrmp0kac7e7wyzn2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18evrmp0kac7e7wyzn2y.png" alt=" " width="364" height="103"&gt;&lt;/a&gt;&lt;br&gt;
 ##pulling  changes##&lt;br&gt;
To update your local repository with new chaneges from the reote repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ixm6pdzv6ly7suvdvvm.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ixm6pdzv6ly7suvdvvm.webp" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x3ffk12ix7cv0sp489t.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5x3ffk12ix7cv0sp489t.webp" alt=" " width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvur0y48fbuv4yxeoy6kp.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvur0y48fbuv4yxeoy6kp.webp" alt=" " width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>git</category>
      <category>github</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Write a clear, beginner-friendly article on Dev.to explaining how to push and pull code, track changes, and understand version control using Git.</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Tue, 13 Jan 2026 16:47:36 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/write-a-clear-beginner-friendly-article-on-devto-explaining-how-to-push-and-pull-code-track-13o6</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/write-a-clear-beginner-friendly-article-on-devto-explaining-how-to-push-and-pull-code-track-13o6</guid>
      <description></description>
    </item>
    <item>
      <title>luxdev Markdown Language Class</title>
      <dc:creator>Ajani luke Kariuki</dc:creator>
      <pubDate>Tue, 13 Jan 2026 11:01:00 +0000</pubDate>
      <link>https://dev.to/ajani_lukekariuki_79255c/luxdev-markdown-language-class-3304</link>
      <guid>https://dev.to/ajani_lukekariuki_79255c/luxdev-markdown-language-class-3304</guid>
      <description>&lt;h1&gt;
  
  
  How to write a markdown language
&lt;/h1&gt;

&lt;p&gt;This is the first markdown language the students have learned, and they can now write an article on dev.to.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first thing that students learned
&lt;/h2&gt;

&lt;p&gt;The students learned about the heading&lt;/p&gt;

&lt;h3&gt;
  
  
  Last but not least
&lt;/h3&gt;

&lt;p&gt;She told me &lt;strong&gt;she loves tech&lt;/strong&gt; and I told her &lt;em&gt;I enjoy it&lt;/em&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Morning class&lt;/li&gt;
&lt;li&gt;Evening class&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name= "Aj"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is where you will find me: &lt;a href="https://github.com/Aj-4201" rel="noopener noreferrer"&gt;visit my GitHub account&lt;/a&gt;&lt;/p&gt;

</description>
      <category>beginners</category>
    </item>
  </channel>
</rss>
