<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manognya Lokesh Reddy</title>
    <description>The latest articles on DEV Community by Manognya Lokesh Reddy (@manognya_l_4c9cee046e761).</description>
    <link>https://dev.to/manognya_l_4c9cee046e761</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3244873%2Ff9f36ff4-4187-4e88-81b7-be81ac91294e.jpg</url>
      <title>DEV Community: Manognya Lokesh Reddy</title>
      <link>https://dev.to/manognya_l_4c9cee046e761</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/manognya_l_4c9cee046e761"/>
    <language>en</language>
    <item>
      <title>Bias and Fairness in AI Models: Why Responsible AI Matters</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Tue, 28 Oct 2025 01:55:34 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/bias-and-fairness-in-ai-models-why-responsible-ai-matters-416o</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/bias-and-fairness-in-ai-models-why-responsible-ai-matters-416o</guid>
      <description>&lt;p&gt;Hi Dev Community! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, an AI researcher and engineer currently pursuing my Master’s in Artificial Intelligence at the University of Michigan-Dearborn.&lt;/p&gt;

&lt;p&gt;Through my academic and professional journey, I’ve realized that AI isn’t just about accuracy — it’s about fairness, accountability, and trust. In this blog, I’ll summarize the key insights from my research paper:&lt;br&gt;
“Bias and Fairness in AI Models”, published in the International Journal of Innovative Science and Research Technology (IJISRT, 2024).&lt;/p&gt;

&lt;p&gt;🧠 The Core Idea&lt;/p&gt;

&lt;p&gt;AI systems are being used in everything — from job recruitment and banking to healthcare and criminal justice. But what happens when the data these models learn from is biased?&lt;/p&gt;

&lt;p&gt;An algorithm trained on biased data will almost always produce biased predictions, even if it was never “taught” to discriminate.&lt;/p&gt;

&lt;p&gt;The goal of my research was to:&lt;/p&gt;

&lt;p&gt;Identify sources of bias in AI models&lt;/p&gt;

&lt;p&gt;Explore methods to measure and mitigate bias&lt;/p&gt;

&lt;p&gt;Propose strategies for building fairer, transparent models&lt;/p&gt;

&lt;p&gt;⚙️ Understanding AI Bias&lt;/p&gt;

&lt;p&gt;Bias can creep into AI systems in many ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data Bias&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the training data doesn’t represent all groups equally.&lt;br&gt;
Example: A facial recognition model trained mostly on lighter-skinned faces will perform poorly on darker-skinned individuals.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Algorithmic Bias&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the model amplifies existing inequalities.&lt;br&gt;
Example: A hiring model that favors certain colleges because historical data shows past hires came from there.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Evaluation Bias&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the test data or performance metrics themselves are skewed.&lt;br&gt;
Example: Evaluating model accuracy without considering subgroup accuracy (e.g., gender, ethnicity).&lt;/p&gt;

&lt;p&gt;📏 Measuring Fairness&lt;/p&gt;

&lt;p&gt;There are several metrics researchers use to quantify fairness in AI systems:&lt;/p&gt;

&lt;p&gt;Demographic Parity: Equal outcomes for all groups&lt;/p&gt;

&lt;p&gt;Equalized Odds: Equal error rates across groups&lt;/p&gt;

&lt;p&gt;Disparate Impact Ratio: Ratio of favorable outcomes between protected vs. unprotected groups&lt;/p&gt;

&lt;p&gt;By combining these, developers can identify where bias exists and how severe it is.&lt;/p&gt;
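&lt;p&gt;As a quick sketch, these metrics can be computed directly from binary predictions and group labels. This is a NumPy-based illustration of the definitions above, not code from the paper; the variable names are illustrative:&lt;/p&gt;

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def disparate_impact_ratio(y_pred, group):
    """Favorable-outcome rate of the protected group (group == 1)
    divided by that of the unprotected group (group == 0)."""
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    gaps = []
    for positive in (1, 0):  # TPR when 1, FPR when 0
        mask = y_true == positive
        rate_0 = y_pred[np.logical_and(mask, group == 0)].mean()
        rate_1 = y_pred[np.logical_and(mask, group == 1)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)
```

&lt;p&gt;Gaps near 0 and a ratio near 1 suggest more equal treatment across groups.&lt;/p&gt;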

&lt;p&gt;🧩 Techniques to Reduce Bias&lt;/p&gt;

&lt;p&gt;My research explored multiple strategies for bias mitigation:&lt;/p&gt;

&lt;p&gt;Preprocessing Techniques – Balancing datasets using resampling, reweighting, or synthetic data generation.&lt;/p&gt;

&lt;p&gt;In-Processing Methods – Adding fairness constraints directly in model training (e.g., adversarial debiasing).&lt;/p&gt;

&lt;p&gt;Post-Processing Methods – Adjusting model predictions after training to achieve fairness.&lt;/p&gt;

&lt;p&gt;Each method has trade-offs: fairness constraints can slightly reduce overall accuracy, for instance. In high-stakes domains, though, that small cost is usually worth the gain in equitable outcomes.&lt;/p&gt;
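&lt;p&gt;To make the preprocessing idea concrete, here is a minimal sketch of reweighting in the style of Kamiran and Calders’ reweighing method. This is my own illustrative implementation, not code from the paper:&lt;/p&gt;

```python
import numpy as np

def reweighing_weights(y, group):
    """Per-sample weights that make the label and group membership
    statistically independent: for each (group, label) cell, the
    expected count under independence divided by the observed count."""
    n = len(y)
    w = np.ones(n, dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = np.logical_and(group == g, y == label)
            observed = cell.sum()
            if observed:
                expected = (group == g).sum() * (y == label).sum() / n
                w[cell] = expected / observed
    return w
```

&lt;p&gt;The resulting weights can be passed as `sample_weight` to most scikit-learn estimators during training.&lt;/p&gt;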

&lt;p&gt;🌍 Why It Matters&lt;/p&gt;

&lt;p&gt;AI isn’t just a technical tool — it influences people’s lives.&lt;br&gt;
From credit approvals to medical diagnoses, an unfair algorithm can have real-world consequences.&lt;/p&gt;

&lt;p&gt;By incorporating fairness and transparency into our pipelines, we can build AI systems that are:&lt;br&gt;
✅ More inclusive&lt;br&gt;
✅ More accountable&lt;br&gt;
✅ More trustworthy&lt;/p&gt;

&lt;p&gt;🧠 Key Takeaways&lt;/p&gt;

&lt;p&gt;AI bias originates mostly from human bias in data.&lt;/p&gt;

&lt;p&gt;Fairness must be treated as a core performance metric, not an afterthought.&lt;/p&gt;

&lt;p&gt;Building responsible AI requires collaboration between engineers, ethicists, and domain experts.&lt;/p&gt;

&lt;p&gt;Transparency builds trust — explainability tools like LIME, SHAP, and model cards can help communicate model behavior.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>discuss</category>
    </item>
    <item>
      <title>🧬 Predicting Cancer Cells with Machine Learning: My Internship Project</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Mon, 11 Aug 2025 16:11:27 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/predicting-cancer-cells-with-machine-learning-my-internship-project-1ll8</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/predicting-cancer-cells-with-machine-learning-my-internship-project-1ll8</guid>
      <description>&lt;p&gt;Hey Devs! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, currently pursuing my Master’s in Artificial Intelligence. During my internship at Prinston Smart Engineer, I built a Cancer Cell Prediction model that achieved 92–95% accuracy and helped reduce false positives by 15%.&lt;/p&gt;

&lt;p&gt;In this blog, I’ll walk through the problem, approach, and lessons I learned from applying ML to medical diagnosis.&lt;/p&gt;

&lt;p&gt;⚕️ The Problem&lt;br&gt;
Cancer diagnosis often involves analyzing biopsy and cell sample data to identify malignant (cancerous) vs. benign cells.&lt;br&gt;
Manual analysis is time-consuming and prone to errors, especially when the cell features are subtle.&lt;/p&gt;

&lt;p&gt;Our goal was to:&lt;/p&gt;

&lt;p&gt;Use machine learning to classify cells with high accuracy&lt;/p&gt;

&lt;p&gt;Reduce false positives to avoid unnecessary stress and medical procedures&lt;/p&gt;

&lt;p&gt;Make the model interpretable for doctors&lt;/p&gt;

&lt;p&gt;🛠️ Tech Stack&lt;br&gt;
Python&lt;/p&gt;

&lt;p&gt;Pandas + NumPy – for data handling&lt;/p&gt;

&lt;p&gt;Scikit-learn – for ML modeling&lt;/p&gt;

&lt;p&gt;Matplotlib + Seaborn – for visualizations&lt;/p&gt;

&lt;p&gt;Jupyter Notebook – for experimentation&lt;/p&gt;

&lt;p&gt;🧪 Workflow Breakdown&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;📊 Data Preparation
Loaded a publicly available Breast Cancer Wisconsin dataset&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Checked for missing values and handled them appropriately&lt;/p&gt;

&lt;p&gt;Standardized features for better model performance&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;🔍 Exploratory Data Analysis
Visualized distributions of features like cell size, texture, and clump thickness&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Used correlation heatmaps to identify important predictors&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;🧠 Model Selection
Tested Logistic Regression, Random Forest, and SVM&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Tuned hyperparameters using GridSearchCV&lt;/p&gt;

&lt;p&gt;Chose the best model based on accuracy, recall, and F1-score&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;📈 Evaluation
Achieved 92–95% accuracy on the test set&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Reduced false positives by 15%&lt;/p&gt;

&lt;p&gt;Presented feature importance graphs to make results explainable for medical teams&lt;/p&gt;
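&lt;p&gt;The internship code is proprietary, but the same workflow can be sketched on the public scikit-learn copy of the Breast Cancer Wisconsin dataset. The model choice and parameter grid below are illustrative, not the exact project configuration:&lt;/p&gt;

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
y = 1 - y  # flip labels so 1 = malignant, the class we must not miss
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Scale features, then classify; tune C by cross-validation, scoring
# on recall because a missed malignancy is the costliest error.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=5000))])
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]},
                    scoring="recall", cv=5)
grid.fit(X_train, y_train)

y_pred = grid.predict(X_test)
acc = accuracy_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)  # recall on the malignant class
```

&lt;p&gt;Scoring the grid search on recall rather than accuracy is one way to encode the asymmetric cost of errors directly into model selection.&lt;/p&gt;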

&lt;p&gt;📊 Results&lt;br&gt;
✅ High accuracy &amp;amp; balanced recall/precision&lt;/p&gt;

&lt;p&gt;⚡ Reduced misdiagnosis risk&lt;/p&gt;

&lt;p&gt;🩺 Model outputs interpretable for non-technical users&lt;/p&gt;

&lt;p&gt;💡 What I Learned&lt;br&gt;
In healthcare, false positives and false negatives carry very different risks—you must optimize carefully&lt;/p&gt;

&lt;p&gt;Model explainability matters just as much as accuracy&lt;/p&gt;

&lt;p&gt;Collaborating with medical professionals gives context that purely technical work lacks&lt;/p&gt;

&lt;p&gt;🌍 Real-World Potential&lt;br&gt;
Could be deployed in diagnostic labs to support decision-making&lt;/p&gt;

&lt;p&gt;Useful in rural healthcare centers where expert pathologists aren’t available&lt;/p&gt;

&lt;p&gt;Could be extended to detect other diseases with similar datasets&lt;/p&gt;

</description>
    </item>
    <item>
      <title>📃 Automating Insurance Document Processing with OCR and NLP</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Tue, 05 Aug 2025 19:55:57 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/automating-insurance-document-processing-with-ocr-and-nlp-74a</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/automating-insurance-document-processing-with-ocr-and-nlp-74a</guid>
      <description>&lt;p&gt;Hey Dev Community! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, an AI grad student and ML engineer. In this post, I’ll share a project I worked on during my Co-op at Donyati India Pvt Ltd, where we built an intelligent system to digitize and extract information from handwritten insurance documents using OCR, NLP, and Computer Vision.&lt;/p&gt;

&lt;p&gt;🧩 The Problem&lt;br&gt;
In the insurance domain, processing scanned or handwritten documents is a manual, time-consuming task. It often involves:&lt;/p&gt;

&lt;p&gt;Human data entry (error-prone and expensive)&lt;/p&gt;

&lt;p&gt;Difficulty in reading handwritten or low-quality text&lt;/p&gt;

&lt;p&gt;No structured digital storage for later retrieval&lt;/p&gt;

&lt;p&gt;So we built a document processing system that:&lt;/p&gt;

&lt;p&gt;Uses OCR to read handwritten text&lt;/p&gt;

&lt;p&gt;Applies NLP to structure the extracted data&lt;/p&gt;

&lt;p&gt;Delivers clean, searchable digital records&lt;/p&gt;

&lt;p&gt;⚙️ Tech Stack&lt;br&gt;
Python&lt;/p&gt;

&lt;p&gt;Tesseract OCR + OpenCV – for text extraction&lt;/p&gt;

&lt;p&gt;NLTK / spaCy – for text preprocessing and structuring&lt;/p&gt;

&lt;p&gt;Custom pipelines – for automation and error handling&lt;/p&gt;

&lt;p&gt;Flask – to prototype internal APIs&lt;/p&gt;

&lt;p&gt;🏗️ What We Built&lt;br&gt;
🔍 Step 1: Image Preprocessing&lt;br&gt;
Used OpenCV for:&lt;/p&gt;

&lt;p&gt;Denoising&lt;/p&gt;

&lt;p&gt;Thresholding&lt;/p&gt;

&lt;p&gt;Skew correction&lt;/p&gt;

&lt;p&gt;Enhanced image clarity for better OCR results&lt;/p&gt;

&lt;p&gt;🔡 Step 2: OCR Extraction&lt;br&gt;
Used Tesseract to extract both printed and handwritten content&lt;/p&gt;

&lt;p&gt;Achieved up to 97% accuracy with tuned configurations and training on custom handwriting samples&lt;/p&gt;

&lt;p&gt;🧠 Step 3: NLP Pipeline&lt;br&gt;
Cleaned text with:&lt;/p&gt;

&lt;p&gt;Lowercasing&lt;/p&gt;

&lt;p&gt;Removing stopwords&lt;/p&gt;

&lt;p&gt;Entity recognition&lt;/p&gt;

&lt;p&gt;Parsed key fields like:&lt;/p&gt;

&lt;p&gt;Policyholder name&lt;/p&gt;

&lt;p&gt;Claim amount&lt;/p&gt;

&lt;p&gt;Date of accident&lt;/p&gt;

&lt;p&gt;Medical report status&lt;/p&gt;
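&lt;p&gt;Field parsing of this kind can be sketched with regular expressions over the OCR output. The patterns below are hypothetical examples for illustration; real insurance forms would need patterns tuned to the actual layouts:&lt;/p&gt;

```python
import re

# Hypothetical field patterns; tune to the real document layout.
FIELD_PATTERNS = {
    "policyholder_name": r"(?:Policyholder|Name)[:\s]+([A-Za-z .'-]+)",
    "claim_amount": r"Claim\s+Amount[:\s]+\$?([\d,]+(?:\.\d{2})?)",
    "accident_date": r"Date\s+of\s+Accident[:\s]+(\d{2}[/-]\d{2}[/-]\d{4})",
}

def parse_fields(ocr_text):
    """Pull structured fields out of raw OCR text; None when missing."""
    record = {}
    for field, pattern in FIELD_PATTERNS.items():
        m = re.search(pattern, ocr_text, flags=re.IGNORECASE)
        record[field] = m.group(1).strip() if m else None
    return record
```

&lt;p&gt;In the real pipeline, regex extraction was combined with NER from spaCy, since handwritten fields rarely follow one fixed layout.&lt;/p&gt;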

&lt;p&gt;🔄 Step 4: Structuring &amp;amp; Output&lt;br&gt;
Exported extracted fields into:&lt;/p&gt;

&lt;p&gt;JSON for APIs&lt;/p&gt;

&lt;p&gt;CSV for dashboard ingestion&lt;/p&gt;

&lt;p&gt;Built a lightweight dashboard for visualization (internal use)&lt;/p&gt;

&lt;p&gt;📊 Results&lt;br&gt;
📌 97% OCR accuracy for handwritten documents&lt;/p&gt;

&lt;p&gt;⏱️ Reduced document processing time by 30%&lt;/p&gt;

&lt;p&gt;📁 Enabled structured, searchable storage for thousands of forms&lt;/p&gt;

&lt;p&gt;📉 Minimized manual errors and improved regulatory compliance&lt;/p&gt;

&lt;p&gt;💡 What I Learned&lt;br&gt;
OCR needs custom preprocessing to be truly accurate—raw images won’t cut it&lt;/p&gt;

&lt;p&gt;Handwriting OCR is still challenging, but small tweaks in image prep + model configs go a long way&lt;/p&gt;

&lt;p&gt;NLP helps bring structure to chaotic data—without it, you're just digitizing noise&lt;/p&gt;

&lt;p&gt;This kind of automation can save millions in ops costs in large enterprises&lt;/p&gt;

&lt;p&gt;🌍 Applications Beyond Insurance&lt;br&gt;
Legal document scanning&lt;/p&gt;

&lt;p&gt;Bank KYC automation&lt;/p&gt;

&lt;p&gt;Hospital record digitization&lt;/p&gt;

&lt;p&gt;HR form processing in enterprises&lt;/p&gt;

&lt;p&gt;🚀 What’s Next?&lt;br&gt;
Adding language translation to support multilingual document digitization&lt;/p&gt;

&lt;p&gt;Integrating signature verification for fraud prevention&lt;/p&gt;

&lt;p&gt;Exploring LLM-powered extraction with LangChain + GPT for deeper semantic parsing&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🧬 Skin Disease Detection with CNNs: Building an AI Tool for Early Diagnosis</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Thu, 17 Jul 2025 23:06:27 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/skin-disease-detection-with-cnns-building-an-ai-tool-for-early-diagnosis-502a</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/skin-disease-detection-with-cnns-building-an-ai-tool-for-early-diagnosis-502a</guid>
      <description>&lt;p&gt;👨‍⚕️ The Problem&lt;br&gt;
Skin diseases are among the most common medical issues—but diagnosing them accurately requires expert knowledge and can be error-prone in early stages. In under-resourced settings, delays in treatment can lead to severe complications.&lt;/p&gt;

&lt;p&gt;I aimed to build a Convolutional Neural Network (CNN) that could analyze images of skin lesions and classify diseases with high accuracy.&lt;/p&gt;

&lt;p&gt;🛠️ Tools &amp;amp; Tech Stack&lt;br&gt;
Python&lt;/p&gt;

&lt;p&gt;TensorFlow + Keras – for building and training the CNN&lt;/p&gt;

&lt;p&gt;OpenCV – for image preprocessing&lt;/p&gt;

&lt;p&gt;Matplotlib / Seaborn – for visualizations&lt;/p&gt;

&lt;p&gt;Jupyter Notebook – for experimentation&lt;/p&gt;

&lt;p&gt;🧪 Workflow Breakdown&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;🗃️ Dataset Collection
Combined data from multiple open-source datasets of skin diseases.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Performed label mapping to classify disease categories (e.g., eczema, psoriasis, vitiligo).&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;🖼️ Data Preprocessing
Resized and normalized all images to a consistent size (e.g., 224x224).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Applied image augmentation (rotation, zoom, flip) to improve generalization.&lt;/p&gt;

&lt;p&gt;Used OpenCV to enhance image clarity and contrast.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;🧠 Model Architecture
Built a custom CNN with multiple convolutional + max-pooling layers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Used ReLU activation and Dropout to prevent overfitting.&lt;/p&gt;

&lt;p&gt;Final layers used Softmax for multi-class classification.&lt;/p&gt;
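&lt;p&gt;Here is a minimal Keras sketch of this kind of architecture. The layer sizes and the five-class output are illustrative, not the exact model from the project:&lt;/p&gt;

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # e.g. eczema, psoriasis, vitiligo, ... (illustrative)

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # combats overfitting on a small dataset
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

&lt;p&gt;Training would then use the augmented image generators described above.&lt;/p&gt;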

&lt;ol start="4"&gt;
&lt;li&gt;📈 Training &amp;amp; Evaluation
Achieved over 90% accuracy on test data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Evaluated with confusion matrix and F1-score to check precision/recall balance.&lt;/p&gt;

&lt;p&gt;Compared results with pretrained models (MobileNet, VGG16) for benchmarking.&lt;/p&gt;

&lt;p&gt;📊 Results&lt;br&gt;
📌 &amp;gt;90% classification accuracy on real-world images&lt;/p&gt;

&lt;p&gt;🚑 Helped detect critical diseases earlier than baseline models&lt;/p&gt;

&lt;p&gt;💡 Reduced false diagnoses by 15–20% with preprocessing and augmentation&lt;/p&gt;

&lt;p&gt;💡 What I Learned&lt;br&gt;
Clean, augmented data matters more than just a complex model.&lt;/p&gt;

&lt;p&gt;Custom CNNs can compete well when fine-tuned for domain-specific data.&lt;/p&gt;

&lt;p&gt;Healthcare AI must prioritize precision and recall, not just accuracy.&lt;/p&gt;

&lt;p&gt;Visualization of errors (misclassified images) gave major insight into failure cases.&lt;/p&gt;

&lt;p&gt;🌍 Real-World Potential&lt;br&gt;
Telemedicine apps for early detection and triage&lt;/p&gt;

&lt;p&gt;Rural healthcare where dermatologists are scarce&lt;/p&gt;

&lt;p&gt;Integration into smartphones or kiosks for patient self-screening&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🧬 Enhancing Historical Images with GANs: My AEGAN + SRGAN Project</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Tue, 01 Jul 2025 16:28:25 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/enhancing-historical-images-with-gans-my-aegan-srgan-project-528b</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/enhancing-historical-images-with-gans-my-aegan-srgan-project-528b</guid>
      <description>&lt;p&gt;Hey everyone! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, currently pursuing my Master’s in Artificial Intelligence at the University of Michigan-Dearborn. In this post, I’ll take you behind the scenes of one of my most technically demanding and visually exciting projects: high-resolution image synthesis using GANs.&lt;/p&gt;

&lt;p&gt;I built a system that takes low-resolution or black-and-white historical images and transforms them into sharp, colored, high-quality outputs using a combination of AEGAN (Auto-Embedding GAN) and SRGAN (Super-Resolution GAN).&lt;/p&gt;

&lt;p&gt;🖼️ The Problem&lt;br&gt;
Low-resolution or faded historical images limit both aesthetic appeal and analytical value. Traditional upscaling doesn’t preserve fine detail, and manual enhancement is time-consuming.&lt;/p&gt;

&lt;p&gt;The goal? Use Generative Adversarial Networks (GANs) to:&lt;/p&gt;

&lt;p&gt;Upscale image resolution&lt;/p&gt;

&lt;p&gt;Colorize black-and-white photos&lt;/p&gt;

&lt;p&gt;Retain edge quality and texture&lt;/p&gt;

&lt;p&gt;Do it fast and accurately&lt;/p&gt;

&lt;p&gt;⚙️ Tech Stack&lt;br&gt;
Python&lt;/p&gt;

&lt;p&gt;PyTorch – for building AEGAN and SRGAN models&lt;/p&gt;

&lt;p&gt;Docker – for reproducible deployment&lt;/p&gt;

&lt;p&gt;JWT (JSON Web Tokens) – for secure REST API endpoints&lt;/p&gt;

&lt;p&gt;Mixed Precision Training – to optimize GPU usage&lt;/p&gt;

&lt;p&gt;OpenCV – for image pre/post-processing&lt;/p&gt;

&lt;p&gt;🧠 How It Works&lt;br&gt;
🔗 AEGAN Architecture&lt;br&gt;
Combines auto-encoders and GANs to embed image features in a compressed space&lt;/p&gt;

&lt;p&gt;Stabilized training with gradient penalty and Wasserstein loss&lt;/p&gt;

&lt;p&gt;Mixed precision training helped accelerate training by ~60% on GPU&lt;/p&gt;

&lt;p&gt;🔎 SRGAN Module&lt;br&gt;
Generates 4x super-resolved outputs&lt;/p&gt;

&lt;p&gt;Preserves perceptual detail using VGG-based content loss&lt;/p&gt;

&lt;p&gt;Fine-tuned using perceptual + adversarial losses for photorealism&lt;/p&gt;
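&lt;p&gt;To show the shape of that objective, here is a PyTorch sketch of an SRGAN-style generator loss. The feature extractor is passed in as a stand-in argument: the real setup uses a frozen, pretrained VGG network (e.g. torchvision’s VGG19 features) for the content loss, which I omit here to keep the sketch self-contained:&lt;/p&gt;

```python
import torch
import torch.nn as nn

class PerceptualLoss(nn.Module):
    """Content loss in feature space, as SRGAN does with VGG features.
    `feature_extractor` is a stand-in; SRGAN uses a frozen pretrained
    VGG network."""
    def __init__(self, feature_extractor):
        super().__init__()
        self.features = feature_extractor.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        # Compare super-resolved and high-res images in feature space
        return self.mse(self.features(sr), self.features(hr))

def generator_loss(content_loss, disc_fake_logits, adv_weight=1e-3):
    """Total generator objective: content term plus a small
    adversarial term pushing the discriminator toward 'real'."""
    adversarial = nn.functional.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return content_loss + adv_weight * adversarial
```

&lt;p&gt;The small adversarial weight is what keeps outputs photorealistic without letting GAN artifacts dominate the pixel-faithful content term.&lt;/p&gt;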

&lt;p&gt;🔐 Scalable API Access&lt;br&gt;
Deployed model using a Docker container&lt;/p&gt;

&lt;p&gt;Secured API with JWT authentication to restrict access and support production scalability&lt;/p&gt;

&lt;p&gt;📊 Results&lt;br&gt;
✅ 37% improvement in FID score (Frechet Inception Distance) over baseline GANs&lt;/p&gt;

&lt;p&gt;⚡ 60% faster inference time with GPU optimization&lt;/p&gt;

&lt;p&gt;🖼️ Output quality good enough for archival, artistic, and medical use cases&lt;/p&gt;

&lt;p&gt;🧪 Deployment time reduced by 50% using Dockerized workflows&lt;/p&gt;

&lt;p&gt;💡 What I Learned&lt;br&gt;
GAN training is unstable—techniques like gradient penalty and mixed precision training are critical&lt;/p&gt;

&lt;p&gt;Embedding-based architectures like AEGAN offer superior feature reconstruction&lt;/p&gt;

&lt;p&gt;Docker and JWT are must-have tools for secure, scalable ML deployment&lt;/p&gt;

&lt;p&gt;Working with image data brings unique challenges (e.g., lighting variance, compression noise)&lt;/p&gt;

&lt;p&gt;📸 Use Cases&lt;br&gt;
Historical image restoration&lt;/p&gt;

&lt;p&gt;Medical image enhancement&lt;/p&gt;

&lt;p&gt;Satellite image upscaling&lt;/p&gt;

&lt;p&gt;Artistic rendering for museums or documentaries&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🦴 Detecting Bone Fractures with Deep Learning: A Healthcare AI Project</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Mon, 23 Jun 2025 17:47:25 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/detecting-bone-fractures-with-deep-learning-a-healthcare-ai-project-5c3h</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/detecting-bone-fractures-with-deep-learning-a-healthcare-ai-project-5c3h</guid>
      <description>&lt;p&gt;Hey Devs! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, currently pursuing my Master’s in AI. In this post, I’ll walk you through one of my favorite projects—using deep learning to detect bone fractures in X-ray and MRI images. If you're curious about how AI is transforming healthcare, this one’s for you!&lt;/p&gt;

&lt;p&gt;⚕️ The Problem&lt;br&gt;
Doctors often rely on X-rays and MRIs to detect fractures, but:&lt;/p&gt;

&lt;p&gt;Manual interpretation is time-consuming&lt;/p&gt;

&lt;p&gt;Subtle fractures can be missed&lt;/p&gt;

&lt;p&gt;Resource limitations exist in rural and under-equipped clinics&lt;/p&gt;

&lt;p&gt;So, I set out to build an AI-based system that could automatically detect bone fractures and support radiologists in diagnosis.&lt;/p&gt;

&lt;p&gt;🛠️ Tech Stack&lt;br&gt;
Python&lt;/p&gt;

&lt;p&gt;TensorFlow, Keras – for model building&lt;/p&gt;

&lt;p&gt;OpenCV – for image processing&lt;/p&gt;

&lt;p&gt;CNN (Convolutional Neural Networks) – for classification&lt;/p&gt;

&lt;p&gt;SVM (Support Vector Machines) – as a traditional baseline comparison&lt;/p&gt;

&lt;p&gt;🧪 How the Model Works&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Data Collection &amp;amp; Preprocessing
Collected and cleaned X-ray/MRI image datasets&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Applied preprocessing:&lt;/p&gt;

&lt;p&gt;Noise reduction&lt;/p&gt;

&lt;p&gt;Contrast enhancement&lt;/p&gt;

&lt;p&gt;Feature extraction&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Model Building
Built a CNN to classify images as fractured or not fractured&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Tuned layers, filters, activation functions for optimal performance&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Training &amp;amp; Validation
Used stratified train-test splits&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Compared results with SVM to validate deep learning benefits&lt;/p&gt;

&lt;p&gt;📊 Results&lt;br&gt;
Achieved 92–95% accuracy on unseen test data&lt;/p&gt;

&lt;p&gt;Reduced false negatives significantly (critical in medical diagnosis)&lt;/p&gt;

&lt;p&gt;Boosted early detection, aiding in personalized treatment planning&lt;/p&gt;

&lt;p&gt;🧠 What I Learned&lt;br&gt;
Medical AI requires precision—you can't afford high error rates&lt;/p&gt;

&lt;p&gt;Collaborating with healthcare professionals helped me align the model with clinical needs&lt;/p&gt;

&lt;p&gt;Even basic preprocessing can drastically improve model performance in medical imaging&lt;/p&gt;

&lt;p&gt;🩺 Why It Matters&lt;br&gt;
AI isn’t just about tech—it’s about impact. With models like these:&lt;/p&gt;

&lt;p&gt;We can support doctors in early diagnosis&lt;/p&gt;

&lt;p&gt;Reduce diagnostic delays in underserved areas&lt;/p&gt;

&lt;p&gt;Help build scalable, accessible health solutions&lt;/p&gt;

&lt;p&gt;🚀 Next Steps&lt;br&gt;
Planning to deploy the model via a web-based interface&lt;/p&gt;

&lt;p&gt;Exploring integration with hospital workflows&lt;/p&gt;

&lt;p&gt;Want to expand to multi-disease detection using the same framework&lt;/p&gt;

</description>
    </item>
    <item>
      <title>📄 How I Built DocuDetective.AI: A Chatbot for Interactive PDF Analysis</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Thu, 12 Jun 2025 16:26:55 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/how-i-built-docudetectiveai-a-chatbot-for-interactive-pdf-analysis-2de9</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/how-i-built-docudetectiveai-a-chatbot-for-interactive-pdf-analysis-2de9</guid>
      <description>&lt;p&gt;Hi Dev Community! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, and I love building practical AI tools that solve real-world problems. Today, I want to share a project that’s especially close to me—DocuDetective.AI, an AI-powered chatbot I built that lets users interact with PDF documents using natural language.&lt;/p&gt;

&lt;p&gt;If you’ve ever struggled to find specific info in a 100+ page document, you’ll love this.&lt;/p&gt;

&lt;p&gt;🤔 The Problem&lt;br&gt;
PDF documents are everywhere—research papers, business contracts, legal reports—but they’re hard to search through, especially when:&lt;/p&gt;

&lt;p&gt;You’re looking for specific information fast.&lt;/p&gt;

&lt;p&gt;The document is in a language you don’t understand.&lt;/p&gt;

&lt;p&gt;You want a summary instead of reading the whole thing.&lt;/p&gt;

&lt;p&gt;DocuDetective.AI was built to solve all of this.&lt;/p&gt;

&lt;p&gt;🧠 Project Goals&lt;br&gt;
Allow users to upload PDF files.&lt;/p&gt;

&lt;p&gt;Ask questions like “What is the main conclusion?” or “Translate this section.”&lt;/p&gt;

&lt;p&gt;Support vernacular language translation.&lt;/p&gt;

&lt;p&gt;Use AI to chat with the document in real-time.&lt;/p&gt;

&lt;p&gt;🛠️ Tech Stack&lt;br&gt;
Python&lt;/p&gt;

&lt;p&gt;LangChain – for chaining LLMs with document loaders and memory&lt;/p&gt;

&lt;p&gt;Chroma DB – for vector embeddings and document retrieval&lt;/p&gt;

&lt;p&gt;OpenAI GPT models – for natural language understanding&lt;/p&gt;

&lt;p&gt;Streamlit (optional) – for a user-friendly interface&lt;/p&gt;

&lt;p&gt;⚙️ How It Works&lt;br&gt;
Document Ingestion&lt;br&gt;
→ User uploads a PDF.&lt;br&gt;
→ The content is split into chunks and embedded using OpenAI’s embeddings.&lt;/p&gt;

&lt;p&gt;Vector Storage&lt;br&gt;
→ Chroma DB stores the document embeddings for fast retrieval.&lt;/p&gt;

&lt;p&gt;Query Handling&lt;br&gt;
→ User asks a question in any language.&lt;br&gt;
→ The system retrieves the most relevant sections and passes them to the LLM.&lt;/p&gt;

&lt;p&gt;Response Generation&lt;br&gt;
→ OpenAI model responds with a natural, human-like answer.&lt;br&gt;
→ Optional: Translates to the desired language if needed.&lt;/p&gt;
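&lt;p&gt;The actual system uses LangChain, Chroma, and OpenAI embeddings. To illustrate just the retrieval step, here is a toy, dependency-light stand-in with hashed bag-of-words “embeddings” and cosine similarity:&lt;/p&gt;

```python
import zlib
import numpy as np

def embed(text, dim=64):
    """Toy deterministic 'embedding': hashed bag-of-words, normalized.
    Stands in for OpenAI embeddings in this sketch."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[zlib.crc32(word.encode()) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

class ToyVectorStore:
    """Minimal stand-in for Chroma: store chunk vectors and return
    the k most similar chunks for a query (cosine similarity)."""
    def __init__(self, chunks):
        self.chunks = chunks
        self.matrix = np.stack([embed(c) for c in chunks])

    def retrieve(self, query, k=2):
        scores = self.matrix @ embed(query)
        top = np.argsort(scores)[::-1][:k]
        return [self.chunks[i] for i in top]
```

&lt;p&gt;The real pipeline swaps `embed` for OpenAI embeddings and `ToyVectorStore` for Chroma, then passes the retrieved chunks to the LLM as context for answer generation.&lt;/p&gt;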

&lt;p&gt;📊 Results &amp;amp; Impact&lt;br&gt;
🧠 Improved accuracy by 40% in retrieving relevant answers.&lt;/p&gt;

&lt;p&gt;💬 Increased user engagement by 35% through interactive Q&amp;amp;A instead of static search.&lt;/p&gt;

&lt;p&gt;🌍 Helped bridge language gaps for users working with multilingual documents.&lt;/p&gt;

&lt;p&gt;💡 What I Learned&lt;br&gt;
Working with LLMs and vector databases makes AI-powered search feel magical—but it requires careful tuning.&lt;/p&gt;

&lt;p&gt;Preprocessing and chunking documents is a balancing act: chunks that are too large dilute retrieval relevance, while chunks that are too small lose the context the LLM needs.&lt;/p&gt;

&lt;p&gt;Users prefer conversation over command lines—good UX really matters.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>📈 Building a Time Series Forecasting ML App During My Internship: A Real-World AI Experience</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Sun, 08 Jun 2025 18:33:56 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/building-a-time-series-forecasting-ml-app-during-my-internship-a-real-world-ai-experience-im3</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/building-a-time-series-forecasting-ml-app-during-my-internship-a-real-world-ai-experience-im3</guid>
      <description>&lt;p&gt;Hi everyone! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, a Master’s student in Artificial Intelligence at the University of Michigan-Dearborn. In this post, I’ll be sharing one of the most rewarding experiences of my career so far—building a Time Series Forecasting application during my internship at Donyati India Pvt Ltd. This project helped me understand how AI works in real business settings, and I hope it gives you a glimpse into what it’s like applying ML beyond the classroom.&lt;/p&gt;

&lt;p&gt;🧩 The Business Problem&lt;br&gt;
Forecasting plays a critical role in business decisions—whether it's predicting sales, managing inventory, or optimizing resources. The goal of our project was to create an end-to-end Time Series Forecasting application that could accurately predict future trends for multinational clients, replacing outdated manual methods.&lt;/p&gt;

&lt;p&gt;⚙️ Tech Stack &amp;amp; Tools&lt;br&gt;
Here’s what I used to build the application:&lt;/p&gt;

&lt;p&gt;Programming: Python&lt;/p&gt;

&lt;p&gt;Libraries: Pandas, NumPy, Matplotlib, Keras (LSTM)&lt;/p&gt;

&lt;p&gt;Modeling: Long Short-Term Memory (LSTM) Neural Networks&lt;/p&gt;

&lt;p&gt;DevOps: Docker (for containerization), Git &amp;amp; GitHub&lt;/p&gt;

&lt;p&gt;Deployment: AWS (for hosting the application)&lt;/p&gt;

&lt;p&gt;🏗️ My Contributions&lt;br&gt;
During my 1-year internship, I worked on:&lt;/p&gt;

&lt;p&gt;🔄 End-to-End ML Pipeline&lt;br&gt;
Created a complete ML pipeline—from data ingestion and preprocessing to model training and deployment.&lt;/p&gt;

&lt;p&gt;Implemented automated pipelines that reduced manual data processing by 40%.&lt;/p&gt;

&lt;p&gt;🧠 Model Optimization&lt;br&gt;
Used LSTM networks to capture temporal dependencies in the dataset.&lt;/p&gt;

&lt;p&gt;Tuned hyperparameters and performed feature engineering to improve accuracy.&lt;/p&gt;

&lt;p&gt;Achieved a 15% improvement in prediction accuracy over previous models.&lt;/p&gt;
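&lt;p&gt;One concrete piece of that feature engineering is framing the raw series as supervised windows before it ever reaches the LSTM. A minimal sketch (the window sizes are illustrative):&lt;/p&gt;

```python
import numpy as np

def make_windows(series, lookback=12, horizon=1):
    """Frame a 1-D series as supervised (X, y) pairs: each sample is
    `lookback` past steps, the target is the value `horizon` steps
    ahead."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    # Shape (samples, timesteps, features), as Keras LSTM layers expect
    return np.array(X)[..., np.newaxis], np.array(y)
```

&lt;p&gt;Choosing `lookback` is itself a tuning decision: too short and the LSTM can’t see seasonality, too long and training slows with little accuracy gain.&lt;/p&gt;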

&lt;p&gt;🐳 Containerization with Docker&lt;br&gt;
Packaged the entire application into Docker containers to ensure reproducibility and portability.&lt;/p&gt;

&lt;p&gt;Result: Deployment setup time was cut down by 50%.&lt;/p&gt;

&lt;p&gt;📊 Impact Across Teams&lt;br&gt;
The forecasting tool was deployed across 20+ teams, improving their operational efficiency by 25%.&lt;/p&gt;

&lt;p&gt;📚 Key Learnings&lt;br&gt;
Real-world data is messy and inconsistent. Cleaning and preprocessing are just as important as model building.&lt;/p&gt;

&lt;p&gt;LSTM models are excellent for capturing time-based trends—but require careful tuning.&lt;/p&gt;

&lt;p&gt;DevOps skills like Docker and cloud deployment are essential for taking ML models into production.&lt;/p&gt;

&lt;p&gt;It’s not just about building a model—it’s about solving a real problem that saves time and adds value.&lt;/p&gt;

&lt;p&gt;🚀 Final Thoughts&lt;br&gt;
This project gave me hands-on experience with both the technical and practical aspects of machine learning in business. It taught me that good ML is not just about the algorithm—it’s about creating systems that are scalable, efficient, and useful.&lt;/p&gt;

&lt;p&gt;If you’re a student or early-career professional, I encourage you to:&lt;/p&gt;

&lt;p&gt;Work on real datasets&lt;/p&gt;

&lt;p&gt;Think about deployment, not just accuracy&lt;/p&gt;

&lt;p&gt;Build projects that solve actual business problems&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🚀 Kicking Off My Dev Journey: Why I’m Starting This Blog</title>
      <dc:creator>Manognya Lokesh Reddy</dc:creator>
      <pubDate>Thu, 05 Jun 2025 18:13:26 +0000</pubDate>
      <link>https://dev.to/manognya_l_4c9cee046e761/kicking-off-my-dev-journey-why-im-starting-this-blog-593j</link>
      <guid>https://dev.to/manognya_l_4c9cee046e761/kicking-off-my-dev-journey-why-im-starting-this-blog-593j</guid>
      <description>&lt;p&gt;Hey Devs! 👋&lt;/p&gt;

&lt;p&gt;I’m Manognya Lokesh Reddy, currently pursuing my Master’s in Artificial Intelligence at the University of Michigan-Dearborn. Over the past few years, I’ve explored the world of software engineering, machine learning, and research-driven development—and now, I’m excited to share what I’ve learned (and what I’m still learning!) through this blog.&lt;/p&gt;

&lt;p&gt;🌱 Why I’m Starting This Blog&lt;br&gt;
Like many developers, I often hit roadblocks, debug for hours, or stumble across hidden gems that make life easier. I’ve learned that writing things down not only helps me retain concepts better but also helps others who might be facing the same challenges.&lt;/p&gt;

&lt;p&gt;This blog is my space to:&lt;/p&gt;

&lt;p&gt;Share tutorials, project walkthroughs, and code snippets&lt;/p&gt;

&lt;p&gt;Reflect on my AI/ML journey—the wins and the struggles&lt;/p&gt;

&lt;p&gt;Write about tools and tech I’m excited about&lt;/p&gt;

&lt;p&gt;Talk about student life, productivity, and career growth&lt;/p&gt;

&lt;p&gt;🧠 What You Can Expect&lt;br&gt;
Beginner-friendly AI/ML posts with real-world examples&lt;/p&gt;

&lt;p&gt;Insights into side projects I’ve worked on—how I built them, what I learned&lt;/p&gt;

&lt;p&gt;Honest thoughts on job hunting, tech interviews, and building a career as an international student&lt;/p&gt;

&lt;p&gt;Occasional deep dives into papers, tech stacks, and experimental ideas I’m testing out&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
