<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hassan Ahmed</title>
    <description>The latest articles on DEV Community by Hassan Ahmed (@hassanahmedai).</description>
    <link>https://dev.to/hassanahmedai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3273667%2Fa50d7b84-b619-46d3-9741-02a3795de525.jpg</url>
      <title>DEV Community: Hassan Ahmed</title>
      <link>https://dev.to/hassanahmedai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hassanahmedai"/>
    <language>en</language>
    <item>
      <title>Fine-Tuning DeepSeek-R1 for Medical QA Using LoRA + Chain-of-Thought</title>
      <dc:creator>Hassan Ahmed</dc:creator>
      <pubDate>Thu, 17 Jul 2025 11:15:38 +0000</pubDate>
      <link>https://dev.to/hassanahmedai/fine-tuning-deepseek-r1-for-medical-qa-using-lora-chain-of-thought-37c5</link>
      <guid>https://dev.to/hassanahmedai/fine-tuning-deepseek-r1-for-medical-qa-using-lora-chain-of-thought-37c5</guid>
      <description>&lt;p&gt;Just published a step-by-step guide on how I fine-tuned DeepSeek-R1 to handle medical question answering using LoRA and chain-of-thought prompting.&lt;/p&gt;

&lt;p&gt;✅ Lightweight fine-tuning with Unsloth&lt;br&gt;
✅ Clinical reasoning dataset&lt;br&gt;
✅ Before vs. after results that actually show improvement&lt;/p&gt;
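
&lt;p&gt;For a flavor of the setup, here is a minimal sketch of what a chain-of-thought training prompt can look like. The section labels, wording, and example are illustrative only, not the exact template from the guide:&lt;/p&gt;

```python
# Hypothetical chain-of-thought prompt template for medical QA fine-tuning.
# The section labels and the example below are illustrative, not the exact
# template used in the post.
COT_TEMPLATE = """Below is a medical question. Work through the clinical
reasoning step by step, then give a final answer.

### Question:
{question}

### Reasoning:
{reasoning}

### Answer:
{answer}"""

def format_example(question, reasoning, answer):
    """Fill the template to produce one supervised training example."""
    return COT_TEMPLATE.format(question=question, reasoning=reasoning, answer=answer)

example = format_example(
    "A patient reports polyuria and polydipsia. What initial test is appropriate?",
    "These symptoms suggest diabetes mellitus, so a fasting glucose or HbA1c is a standard first step.",
    "Fasting plasma glucose (or HbA1c).",
)
print(example)
```

&lt;p&gt;Each training example pairs the question with an expected reasoning trace, which is what teaches the model to show its work before answering.&lt;/p&gt;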

&lt;p&gt;Read the full post here:&lt;br&gt;
👉 &lt;a href="https://medium.com/@hassanahmed.dev1/fine-tuning-deepseek-r1-for-medical-question-answering-using-lora-d5f74e747e98" rel="noopener noreferrer"&gt;https://medium.com/@hassanahmed.dev1/fine-tuning-deepseek-r1-for-medical-question-answering-using-lora-d5f74e747e98&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to check out the GitHub repo too!&lt;/p&gt;

&lt;p&gt;#machinelearning #llm #genai #ai #deeplearning #medtech #huggingface #transformers&lt;/p&gt;

</description>
      <category>deepseek</category>
      <category>llm</category>
      <category>deeplearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Natural Language to SQL: University Student Data Query App with LLMs</title>
      <dc:creator>Hassan Ahmed</dc:creator>
      <pubDate>Sat, 12 Jul 2025 06:52:48 +0000</pubDate>
      <link>https://dev.to/hassanahmedai/natural-language-to-sql-university-student-data-query-app-with-llms-5c0h</link>
      <guid>https://dev.to/hassanahmedai/natural-language-to-sql-university-student-data-query-app-with-llms-5c0h</guid>
      <description>&lt;p&gt;I recently built a small project that lets you query a university student database using natural language instead of SQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;p&gt;You can type queries like:&lt;br&gt;
&lt;code&gt;Show all students in Data Science&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The app uses a large language model (LLaMA 3 via Groq) to convert your plain-English query into SQL, runs it on a local SQLite database, and shows the results in a simple web UI built with Streamlit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Build This?
&lt;/h2&gt;

&lt;p&gt;I wanted to explore how language models can make databases more accessible for people without SQL knowledge. This also gave me hands-on experience integrating LLMs with backend databases and building a full-stack app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;Streamlit for the UI&lt;/li&gt;
&lt;li&gt;LangChain for prompt management&lt;/li&gt;
&lt;li&gt;Groq API for fast LLaMA 3 inference&lt;/li&gt;
&lt;li&gt;SQLite for the database&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;User enters a natural language query in the UI&lt;/li&gt;
&lt;li&gt;LangChain structures a prompt for the LLM&lt;/li&gt;
&lt;li&gt;Groq’s LLaMA 3 model generates SQL from the query&lt;/li&gt;
&lt;li&gt;SQLite executes the SQL query&lt;/li&gt;
&lt;li&gt;Results display in the web interface&lt;/li&gt;
&lt;/ol&gt;
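
&lt;p&gt;The flow above can be sketched in a few lines. The prompt wording and schema here are illustrative, and the model call is stubbed out with a hard-coded query; in the real app, LangChain and Groq’s LLaMA 3 produce the SQL:&lt;/p&gt;

```python
# Minimal, illustrative sketch of the text-to-SQL flow; the LLM call is
# replaced by a hard-coded query below.
import sqlite3

SCHEMA = "students(id, name, department, gpa)"  # hypothetical schema

def build_prompt(question):
    """Structure the prompt the LLM would receive (illustrative wording)."""
    return (
        "You translate questions into SQLite SQL.\n"
        "Schema: " + SCHEMA + "\n"
        "Question: " + question + "\n"
        "Return only the SQL query."
    )

def run_query(sql):
    """Execute the generated SQL against a toy in-memory database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (id INTEGER, name TEXT, department TEXT, gpa REAL)")
    conn.execute("INSERT INTO students VALUES (1, 'Ada', 'Data Science', 3.9)")
    rows = conn.execute(sql).fetchall()
    conn.close()
    return rows

# Pretend the LLM returned this for "Show all students in Data Science":
generated_sql = "SELECT name FROM students WHERE department = 'Data Science'"
print(run_query(generated_sql))  # [('Ada',)]
```

&lt;p&gt;In a real deployment you would also want to validate the generated SQL (for example, allow only SELECT statements) before executing it.&lt;/p&gt;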

&lt;h2&gt;
  
  
  Check It Out
&lt;/h2&gt;

&lt;p&gt;You can find the full source code here:&lt;br&gt;
&lt;a href="https://github.com/Hassan123j/Natural-Language-to-SQL-University-Student-Data-Query-App" rel="noopener noreferrer"&gt;https://github.com/Hassan123j/Natural-Language-to-SQL-University-Student-Data-Query-App&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feedback and contributions are welcome!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I Built a CNN to Detect Skin Cancer from Images (Beginner ML Project)</title>
      <dc:creator>Hassan Ahmed</dc:creator>
      <pubDate>Tue, 01 Jul 2025 12:33:00 +0000</pubDate>
      <link>https://dev.to/hassanahmedai/i-built-a-cnn-to-detect-skin-cancer-from-images-beginner-ml-project-2pak</link>
      <guid>https://dev.to/hassanahmedai/i-built-a-cnn-to-detect-skin-cancer-from-images-beginner-ml-project-2pak</guid>
      <description>&lt;h1&gt;
  
  
  I Built a CNN to Detect Skin Cancer from Images (Beginner ML Project)
&lt;/h1&gt;

&lt;p&gt;Hey everyone 👋&lt;/p&gt;

&lt;p&gt;Just wanted to share a machine learning project I recently built as part of my learning journey. It's a basic skin cancer detection model using a &lt;strong&gt;Convolutional Neural Network (CNN)&lt;/strong&gt;. The model classifies skin lesion images as &lt;strong&gt;benign&lt;/strong&gt; or &lt;strong&gt;malignant&lt;/strong&gt;, and I tested it locally with a &lt;strong&gt;Streamlit&lt;/strong&gt; app.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Picked This Project
&lt;/h2&gt;

&lt;p&gt;I’m still learning machine learning, and I wanted to try something practical: a project where I could take an idea, build a model, and test it with real images. Skin cancer is a serious health issue, and early detection helps a lot, so I thought this would be a good starting point for a classification task.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Disclaimer:&lt;/strong&gt; This is an educational project only; it is not intended for real medical use.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Tools I Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;TensorFlow + Keras&lt;/li&gt;
&lt;li&gt;Streamlit&lt;/li&gt;
&lt;li&gt;Pillow / NumPy&lt;/li&gt;
&lt;li&gt;Jupyter Notebook&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What the Model Does
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Loads a skin lesion image&lt;/li&gt;
&lt;li&gt;Preprocesses it (resize, normalize)&lt;/li&gt;
&lt;li&gt;Predicts if the image is &lt;strong&gt;benign&lt;/strong&gt; or &lt;strong&gt;malignant&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Shows the result in a local Streamlit interface&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  CNN Architecture (Simplified)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Sequential&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;input_shape&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="nc"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="nc"&gt;Conv2D&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;MaxPooling2D&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="nc"&gt;Flatten&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;relu&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="nc"&gt;Dense&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;activation&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sigmoid&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Trained with binary cross-entropy since it's a binary classification task.&lt;/p&gt;
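
&lt;p&gt;To make that concrete, here is the binary cross-entropy loss written out in plain Python, roughly what Keras computes under the hood for each batch:&lt;/p&gt;

```python
# Binary cross-entropy, the loss used for this two-class problem.
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean BCE over a batch; eps keeps log() away from zero."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip predictions into (0, 1)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(y_true)

# Confident, mostly-correct predictions give a low loss:
print(round(binary_cross_entropy([1.0, 0.0, 1.0], [0.9, 0.1, 0.8]), 4))  # 0.1446
```

&lt;p&gt;Each term penalizes the model in proportion to how much probability it put on the wrong class, which is why overconfident mistakes are punished hardest.&lt;/p&gt;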




&lt;h3&gt;
  
  
  Prediction Function (Streamlit)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;predict_skin_cancer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load_img&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;target_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;224&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="n"&gt;img_array&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;img_to_array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mf"&gt;255.0&lt;/span&gt;
    &lt;span class="n"&gt;img_array&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;expand_dims&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_array&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;predict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_array&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Malignant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;prediction&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mf"&gt;0.5&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Benign&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  How It Looks
&lt;/h2&gt;

&lt;p&gt;The Streamlit interface is basic — upload an image, and it shows the prediction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/img.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/img.png" alt="App Screenshot" width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/img_1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/img_1.png" alt="Result Screenshot" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How CNNs work for image classification&lt;/li&gt;
&lt;li&gt;Preprocessing is super important&lt;/li&gt;
&lt;li&gt;Saving and loading trained models&lt;/li&gt;
&lt;li&gt;How to create quick tools with Streamlit for testing models&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Future Improvements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Better dataset (mine was small)&lt;/li&gt;
&lt;li&gt;Try transfer learning (e.g. MobileNet or EfficientNet)&lt;/li&gt;
&lt;li&gt;Add Grad-CAM for model explainability&lt;/li&gt;
&lt;li&gt;Deploy the app online (Streamlit Cloud or Hugging Face Spaces)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔗 GitHub Repo
&lt;/h2&gt;

&lt;p&gt;Full code is here if you want to check it out or test it locally:&lt;br&gt;
👉 &lt;a href="https://github.com/Hassan123j/Skin-cancer-detection-using-CNN" rel="noopener noreferrer"&gt;https://github.com/Hassan123j/Skin-cancer-detection-using-CNN&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;If you're learning ML like me, feel free to reach out, ask questions, or give feedback. I'd love to hear from others doing similar stuff!&lt;/p&gt;




</description>
    </item>
    <item>
      <title>Building a CKD Risk Predictor with Streamlit and Gradient Boosting</title>
      <dc:creator>Hassan Ahmed</dc:creator>
      <pubDate>Tue, 24 Jun 2025 14:32:00 +0000</pubDate>
      <link>https://dev.to/hassanahmedai/building-a-ckd-risk-predictor-with-streamlit-and-gradient-boosting-3n8f</link>
      <guid>https://dev.to/hassanahmedai/building-a-ckd-risk-predictor-with-streamlit-and-gradient-boosting-3n8f</guid>
      <description>&lt;p&gt;As a student exploring machine learning, I decided to build a project that predicts the risk of Chronic Kidney Disease (CKD) based on patient data. I wanted to create something simple but practical, so I combined a Gradient Boosting model with a Streamlit app to make it interactive and easy to test.&lt;/p&gt;

&lt;p&gt;This post is a walkthrough of how I built the project, the tools I used, and how you can run it on your machine.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the App Does
&lt;/h2&gt;

&lt;p&gt;The app takes some basic clinical inputs, like blood pressure, serum creatinine, and hemoglobin, and predicts whether the patient might be at risk for CKD. It's built using a Gradient Boosting Classifier model and uses Streamlit to create a simple web interface.&lt;/p&gt;

&lt;p&gt;I’m still learning, so it’s not a production-level tool, but it works for the purpose of learning and experimenting with how ML models can be used in real-world applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;p&gt;I kept things simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;scikit-learn for machine learning&lt;/li&gt;
&lt;li&gt;pandas and NumPy for data handling&lt;/li&gt;
&lt;li&gt;Streamlit for the web app&lt;/li&gt;
&lt;li&gt;joblib for saving and loading models&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model&lt;/strong&gt;: I trained a Gradient Boosting Classifier on a dataset of clinical records&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data&lt;/strong&gt;: The dataset (&lt;code&gt;kidney_disease.csv&lt;/code&gt;) includes patient info like blood pressure, creatinine levels, and other factors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preprocessing&lt;/strong&gt;: I handled missing data, scaled the features, and encoded categorical variables&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;App&lt;/strong&gt;: The Streamlit app takes input from the user, runs it through the model, and shows a prediction&lt;/li&gt;
&lt;/ul&gt;
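
&lt;p&gt;As a rough illustration of the preprocessing step (the real project uses scikit-learn; the plain-Python version below just shows the idea behind median imputation and standard scaling):&lt;/p&gt;

```python
# Illustrative preprocessing: fill missing values with the median,
# then scale a feature to zero mean and unit variance.
import statistics

def impute_median(values):
    """Replace None entries with the median of the observed values."""
    present = [v for v in values if v is not None]
    med = statistics.median(present)
    return [med if v is None else v for v in values]

def standardize(values):
    """Scale to zero mean and unit variance, like StandardScaler."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]

creatinine = [1.2, None, 3.4, 0.9]  # mg/dL, with one missing reading
filled = impute_median(creatinine)   # median of the observed values is 1.2
print(filled)  # [1.2, 1.2, 3.4, 0.9]
```

&lt;p&gt;The important detail is fitting the imputer and scaler on the training data only, then reusing the same statistics on user input at prediction time (which is why the app ships a saved &lt;code&gt;scaler.pkl&lt;/code&gt;).&lt;/p&gt;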




&lt;h2&gt;
  
  
  Running It Locally
&lt;/h2&gt;

&lt;p&gt;If you want to try it yourself, here’s how you can run it locally:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clone the repository&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git clone https://github.com/Hassan123j/Chronic-Kidney-Disease-CKD-Predictor-App.git
   &lt;span class="nb"&gt;cd &lt;/span&gt;Chronic-Kidney-Disease-CKD-Predictor-App
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set up a virtual environment&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   python &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
   &lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate      &lt;span class="c"&gt;# On Windows: venv\Scripts\activate&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install the dependencies&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Run the app&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   streamlit run app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the model files (&lt;code&gt;model_gbc.pkl&lt;/code&gt; and &lt;code&gt;scaler.pkl&lt;/code&gt;) are missing, you can regenerate them by running the Jupyter notebook &lt;code&gt;CKD Model.ipynb&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;Here’s how the project is organized:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── app.py               # Streamlit app
├── CKD Model.ipynb      # Model training notebook
├── kidney_disease.csv   # Dataset
├── requirements.txt     # Python dependencies
└── models/
    ├── model_gbc.pkl    # Trained model
    └── scaler.pkl       # Feature scaler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;As a learner, I’m planning to improve this project in a few ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add explainability to the model with SHAP or LIME&lt;/li&gt;
&lt;li&gt;Deploy it online, maybe using Streamlit Cloud&lt;/li&gt;
&lt;li&gt;Improve the UI to make it more user-friendly&lt;/li&gt;
&lt;li&gt;Explore other ways to handle missing data and scale the features to improve accuracy&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  GitHub Repo
&lt;/h2&gt;

&lt;p&gt;You can find the full code and instructions here:&lt;br&gt;
&lt;a href="https://github.com/Hassan123j/Chronic-Kidney-Disease-CKD-Predictor-App" rel="noopener noreferrer"&gt;GitHub - Chronic Kidney Disease Predictor&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This was a fun project that helped me practice both machine learning and web development. I learned a lot about how to handle datasets, train models, and build a simple interactive app. It’s definitely not perfect, but it was a good starting point, and I plan to continue improving it as I learn more.&lt;/p&gt;

&lt;p&gt;If you have any feedback or suggestions, feel free to reach out. Also, if you’re working on similar projects, I’d love to hear about them!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Built a Legal PDF Question-Answering Tool with AI (RAG + Streamlit)</title>
      <dc:creator>Hassan Ahmed</dc:creator>
      <pubDate>Mon, 23 Jun 2025 04:14:48 +0000</pubDate>
      <link>https://dev.to/hassanahmedai/built-a-legal-pdf-question-answering-tool-with-ai-rag-streamlit-2of1</link>
      <guid>https://dev.to/hassanahmedai/built-a-legal-pdf-question-answering-tool-with-ai-rag-streamlit-2of1</guid>
      <description>&lt;p&gt;&lt;strong&gt;Hey everyone&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;I put together a small project that lets you upload legal PDFs and ask questions about them. The AI gives you answers &lt;em&gt;and&lt;/em&gt; explains how it got there, so it's not just spitting out random stuff.&lt;/p&gt;

&lt;p&gt;If you've ever had to read through a long legal document and thought, “I wish someone could just explain this part,” this might help.&lt;/p&gt;

&lt;p&gt;You can check out the demo here:&lt;br&gt;
&lt;strong&gt;&lt;a href="https://shorturl.at/tZoEu" rel="noopener noreferrer"&gt;https://shorturl.at/tZoEu&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  What It Does
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Upload a legal document (PDF)&lt;/li&gt;
&lt;li&gt;Ask natural language questions&lt;/li&gt;
&lt;li&gt;Get an AI-generated answer with reasoning&lt;/li&gt;
&lt;li&gt;All responses are grounded in the actual content of the document&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s using &lt;strong&gt;Retrieval-Augmented Generation (RAG)&lt;/strong&gt; under the hood, which means the app first retrieves the most relevant passages from your document and only then asks the model to answer, so responses stay tied to what the file actually says.&lt;/p&gt;
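
&lt;p&gt;Here is a toy sketch of that retrieval step. The real app builds embeddings with Ollama and searches them with FAISS; the word-overlap scoring below is only a stand-in to show the shape of the pipeline:&lt;/p&gt;

```python
# Toy retrieval: split the document into chunks, score each chunk against
# the question, and hand the best chunk to the LLM as context.
# Word overlap stands in for the embedding similarity used in the real app.
def chunk_text(text, size=40):
    """Split text into fixed-size character chunks (simplified splitter)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def best_chunk(question, chunks):
    """Pick the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words.intersection(c.lower().split())))

doc = ("The tenant shall pay rent on the first day of each month. "
       "The landlord must give thirty days notice before entry.")
print(best_chunk("When is rent due?", chunk_text(doc)))
```

&lt;p&gt;Swapping word overlap for embedding similarity, and a list for a vector index, gets you to the actual architecture.&lt;/p&gt;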


&lt;h2&gt;
  
  
  Run It Locally
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.8 or higher&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; (for generating embeddings)&lt;/li&gt;
&lt;li&gt;A Groq API key (for the language model)&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;Clone the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/Hassan123j/AI-Reasoning-Chatbot.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ai-legal-assistant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a virtual environment and install the dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate  &lt;span class="c"&gt;# On Windows: venv\Scripts\activate&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don’t have the &lt;code&gt;requirements.txt&lt;/code&gt; file, install manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;streamlit langchain-groq langchain-community &lt;span class="se"&gt;\&lt;/span&gt;
langchain-text-splitters langchain-ollama langchain-core &lt;span class="se"&gt;\&lt;/span&gt;
faiss-cpu python-dotenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Ollama Setup
&lt;/h3&gt;

&lt;p&gt;Install Ollama and pull the embedding model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull all-minilm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Groq API Key
&lt;/h3&gt;

&lt;p&gt;Sign up at &lt;a href="https://groq.com" rel="noopener noreferrer"&gt;groq.com&lt;/a&gt; and get your API key. You can either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Put it in a &lt;code&gt;.env&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  GROQ_API_KEY="your_api_key_here"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Or export it in your terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GROQ_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your_api_key_here"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Add Your PDFs
&lt;/h2&gt;

&lt;p&gt;Make a folder for your documents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;pdfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drop any legal PDFs you want to query into that folder.&lt;/p&gt;




&lt;h2&gt;
  
  
  Build the Vector Database
&lt;/h2&gt;

&lt;p&gt;This step processes your PDFs into searchable chunks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python vector_database.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Launch the App
&lt;/h2&gt;

&lt;p&gt;Start the Streamlit app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;streamlit run frontend.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will open up a local web interface. From there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload your PDF&lt;/li&gt;
&lt;li&gt;Type a question&lt;/li&gt;
&lt;li&gt;Hit Ask&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’ll get an answer and a quick breakdown of how the AI found it.&lt;/p&gt;




&lt;h2&gt;
  
  
  File Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;frontend.py&lt;/code&gt;: the Streamlit UI&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;rag_pipeline.py&lt;/code&gt;: where the AI logic happens&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;vector_database.py&lt;/code&gt;: breaks down PDFs and builds embeddings&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pdfs/&lt;/code&gt;: your uploaded documents&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;vectorstore/&lt;/code&gt;: saved vector data for retrieval&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Want to Customize It?
&lt;/h2&gt;

&lt;p&gt;Everything is modular. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swap out the AI model&lt;/li&gt;
&lt;li&gt;Change how PDFs are chunked&lt;/li&gt;
&lt;li&gt;Tweak the prompt/response format&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just dive into the code and experiment.&lt;/p&gt;




&lt;h2&gt;
  
  
  Contributions Welcome
&lt;/h2&gt;

&lt;p&gt;If you’ve got ideas, feedback, bug reports, or want to help improve the project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fork it&lt;/li&gt;
&lt;li&gt;Open an issue&lt;/li&gt;
&lt;li&gt;Submit a PR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the repo:&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/Hassan123j/AI-Reasoning-Chatbot" rel="noopener noreferrer"&gt;https://github.com/Hassan123j/AI-Reasoning-Chatbot&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




</description>
    </item>
  </channel>
</rss>
