<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Asad Ali</title>
    <description>The latest articles on DEV Community by Asad Ali (@asadali00).</description>
    <link>https://dev.to/asadali00</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F487389%2F08389ddb-96e6-4209-a006-3b7ffa3e59cd.jpg</url>
      <title>DEV Community: Asad Ali</title>
      <link>https://dev.to/asadali00</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/asadali00"/>
    <language>en</language>
    <item>
      <title>Master’s Thesis | Intelligent Shot Detection &amp; Cricket Highlights using AI 🏏🤖</title>
      <dc:creator>Asad Ali</dc:creator>
      <pubDate>Thu, 05 Feb 2026 11:27:52 +0000</pubDate>
      <link>https://dev.to/asadali00/masters-thesis-intelligent-shot-detection-cricket-highlights-using-ai-3gc5</link>
      <guid>https://dev.to/asadali00/masters-thesis-intelligent-shot-detection-cricket-highlights-using-ai-3gc5</guid>
      <description>&lt;p&gt;I’m excited to share the successful completion of my MS (Artificial Intelligence) thesis titled:&lt;br&gt;
🎓 “Intelligent Shot Detection &amp;amp; Cricket Highlights”&lt;br&gt;
 📍NED University of Engineering and Technology&lt;br&gt;
 👨‍🏫 Supervised by: Uzair Abid&lt;/p&gt;

&lt;p&gt;🔍 Problem Statement:&lt;br&gt;
Cricket highlight generation is traditionally manual, time-consuming, and subjective. At the same time, deeper shot-level analytics for batsmen are rarely automated or visualized effectively.&lt;br&gt;
This research focuses on automating cricket highlight generation while also providing AI-driven batsman shot analysis, using modern Computer Vision and Transformer-based models.&lt;/p&gt;

&lt;p&gt;🧠 What I Built (End-to-End AI System)&lt;/p&gt;

&lt;p&gt;✅ Event-Driven Highlight Generation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-tuned YOLOv8 models to detect the cricket pitch, score bar, and ball movement&lt;/li&gt;
&lt;li&gt;Ball-wise segmentation using pitch &amp;amp; score-bar co-detection&lt;/li&gt;
&lt;li&gt;Ball bounce detection via trajectory (Y-axis) analysis&lt;/li&gt;
&lt;li&gt;Automatic extraction of highlight-worthy deliveries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Shot Classification using Transformers&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fine-tuned a ViViT (Video Vision Transformer) model&lt;/li&gt;
&lt;li&gt;Classifies 10 cricket shots, including Cover Drive, Pull, Hook, Sweep, Lofted Shot, and Straight Drive&lt;/li&gt;
&lt;li&gt;Achieved ~74% test accuracy on a custom-labeled dataset&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Interactive React-Based Dashboard&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload full cricket match videos&lt;/li&gt;
&lt;li&gt;Real-time processing status&lt;/li&gt;
&lt;li&gt;Visual analytics using Bar &amp;amp; Doughnut charts&lt;/li&gt;
&lt;li&gt;Ball-wise video previews&lt;/li&gt;
&lt;li&gt;Annotated frames + bounce zone visualization&lt;/li&gt;
&lt;li&gt;Auto-generated final highlights video&lt;/li&gt;
&lt;/ul&gt;
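&lt;p&gt;The bounce-detection idea above can be sketched as a local-maximum search over the ball's per-frame y-coordinates (an illustrative sketch only; the function name and threshold are hypothetical, not the thesis code):&lt;/p&gt;

```python
def detect_bounces(y_coords, min_prominence=5):
    """Return frame indices where the ball likely bounced.

    In image coordinates y grows downward, so a bounce shows up as a
    local maximum of y: the lowest on-screen point of the trajectory.
    A small prominence threshold filters out detection jitter.
    """
    bounces = []
    for i in range(1, len(y_coords) - 1):
        prev_y, y, next_y = y_coords[i - 1], y_coords[i], y_coords[i + 1]
        is_peak = y > prev_y and y > next_y
        if is_peak and y - min(prev_y, next_y) >= min_prominence:
            bounces.append(i)
    return bounces
```

&lt;p&gt;In the real pipeline the y-coordinates would come from the YOLOv8 ball detections, one per frame.&lt;/p&gt;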

&lt;p&gt;🖥️ System Architecture (How It’s Used)&lt;br&gt;
I also designed a clear SOP (Standard Operating Procedure) so the system can be easily run and tested:&lt;/p&gt;

&lt;p&gt;🔹 Backend: Python (Flask) + YOLOv8 + Transformer models&lt;br&gt;
🔹 Frontend: React.js dashboard&lt;/p&gt;

&lt;p&gt;🔹 Workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload a match video&lt;/li&gt;
&lt;li&gt;AI processes frames → detects events&lt;/li&gt;
&lt;li&gt;Shot classification &amp;amp; bounce analysis&lt;/li&gt;
&lt;li&gt;Dashboard displays analytics + highlights&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This makes the solution usable not just for research, but also for analysts, coaches, and future real-time extensions.&lt;/p&gt;
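&lt;p&gt;The upload step of that workflow could look roughly like this as a Flask endpoint (a minimal stub for illustration; the route name and response fields are hypothetical, not taken from the project):&lt;/p&gt;

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    # Hypothetical entry point: accept a full match video and hand it
    # off for processing.
    video = request.files.get("video")
    if video is None:
        return jsonify(error="no video uploaded"), 400
    # In the real system, YOLOv8 event detection and ViViT shot
    # classification would be queued here.
    return jsonify(status="processing", filename=video.filename), 202
```

&lt;p&gt;Returning 202 (Accepted) fits the dashboard's "real-time processing status" pattern: the client polls for results rather than blocking on the full analysis.&lt;/p&gt;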


&lt;p&gt;GitHub link: &lt;a href="https://lnkd.in/dtNPAQf2" rel="noopener noreferrer"&gt;https://lnkd.in/dtNPAQf2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m proud of how this project bridges AI research with real-world sports analytics.&lt;br&gt;
 Always open to feedback, collaboration, and discussions around AI + Computer Vision + Sports Tech 🚀&lt;/p&gt;

</description>
      <category>thesis</category>
      <category>ai</category>
      <category>computervision</category>
      <category>cricket</category>
    </item>
    <item>
      <title>Understanding Transformer Neural Networks: A Game-Changer in AI</title>
      <dc:creator>Asad Ali</dc:creator>
      <pubDate>Sun, 11 Aug 2024 14:09:07 +0000</pubDate>
      <link>https://dev.to/asadali00/understanding-transformer-neural-networks-a-game-changer-in-ai-2m3p</link>
      <guid>https://dev.to/asadali00/understanding-transformer-neural-networks-a-game-changer-in-ai-2m3p</guid>
      <description>&lt;p&gt;In recent years, Transformer Neural Networks have emerged as one of the most powerful tools in the field of artificial intelligence. Initially designed for natural language processing (NLP) tasks, transformers have proven to be versatile, revolutionizing various domains like computer vision, time series prediction, and beyond. In this post, we'll break down the key concepts of transformer networks and explore why they are so impactful.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Transformer Neural Network?
&lt;/h2&gt;

&lt;p&gt;A Transformer is a deep learning model that relies entirely on self-attention mechanisms, eschewing the traditional recurrent neural networks (RNNs) and convolutional neural networks (CNNs). Transformers were introduced in the paper "Attention Is All You Need" by Vaswani et al. in 2017 and have since become the foundation for many state-of-the-art models, including BERT, GPT, and T5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy158gogy1m8lz31e84nm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy158gogy1m8lz31e84nm.png" alt="Transformer Architecture" width="320" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Key Components of Transformers
&lt;/h2&gt;

&lt;p&gt;Transformers are composed of two main components: the encoder and the decoder. The encoder processes the input data, while the decoder generates the output. For simplicity, let's focus on the encoder, as it is most relevant for understanding the core mechanics of transformers.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Self-Attention Mechanism:
&lt;/h3&gt;

&lt;p&gt;The heart of a transformer is its self-attention mechanism. This allows the model to weigh the importance of different words in a sentence relative to each other. For example, in the sentence "The cat sat on the mat," the self-attention mechanism helps the model understand that "cat" and "sat" are closely related, while "the" might be less important in some contexts.&lt;/p&gt;
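&lt;p&gt;The mechanism described above can be sketched in a few lines of NumPy (an illustrative single-head version, not the implementation of any particular library):&lt;/p&gt;

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) input embeddings.
    Wq, Wk, Wv: learned projection matrices for queries, keys, values.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # How strongly each token attends to every other token.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key axis turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of all value vectors.
    return weights @ V
```

&lt;p&gt;This is why "cat" can draw information from "sat" regardless of how far apart they are in the sentence: every token's output mixes in every other token's value, weighted by relevance.&lt;/p&gt;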

&lt;h3&gt;
  
  
  2. Positional Encoding:
&lt;/h3&gt;

&lt;p&gt;Unlike RNNs, transformers do not process data sequentially. To give the model a sense of the order of words, positional encodings are added to the input embeddings. These encodings help the model maintain the sequence information.&lt;/p&gt;
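&lt;p&gt;The sinusoidal scheme from "Attention Is All You Need" can be written compactly (a minimal sketch of that formula):&lt;/p&gt;

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings.

    Even dimensions use sine and odd dimensions use cosine, at
    geometrically spaced frequencies, so every position gets a
    unique, smoothly varying pattern.
    """
    positions = np.arange(seq_len)[:, None]   # (seq_len, 1)
    dims = np.arange(d_model)[None, :]        # (1, d_model)
    angles = positions / np.power(10000.0, (2 * (dims // 2)) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe
```

&lt;p&gt;The encoding matrix is simply added to the token embeddings before the first attention layer.&lt;/p&gt;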

&lt;h3&gt;
  
  
  3. Feed-Forward Networks:
&lt;/h3&gt;

&lt;p&gt;After applying self-attention, the output is passed through a feed-forward neural network. This step helps in processing the information learned by the self-attention layer and making complex transformations to the data.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Layer Normalization and Residual Connections:
&lt;/h3&gt;

&lt;p&gt;Transformers use layer normalization to stabilize and speed up training. Residual connections help in preventing the vanishing gradient problem, making it easier to train very deep networks.&lt;/p&gt;
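&lt;p&gt;Both ideas fit in a few lines (a simplified sketch of the original post-norm arrangement, without the learned scale and bias that real implementations include):&lt;/p&gt;

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each token's features to zero mean and unit variance."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_block(x, sublayer):
    """Post-norm residual connection: add the sublayer's output to its
    input, then normalize. The skip path gives gradients a direct route
    through deep stacks, easing the vanishing-gradient problem."""
    return layer_norm(x + sublayer(x))
```

&lt;p&gt;Each encoder layer applies this wrapper twice: once around self-attention and once around the feed-forward network.&lt;/p&gt;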

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dfg35qjw63ue55mobbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dfg35qjw63ue55mobbp.png" alt="Encoder-Decoder Architecture" width="800" height="723"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Applications:
&lt;/h2&gt;

&lt;p&gt;The power of transformers is evident in several cutting-edge applications:&lt;/p&gt;

&lt;h3&gt;
  
  
  - Language Models:
&lt;/h3&gt;

&lt;p&gt;Models like BERT and GPT have set new benchmarks in tasks like text classification, translation, and summarization.&lt;/p&gt;

&lt;h3&gt;
  
  
  - Computer Vision:
&lt;/h3&gt;

&lt;p&gt;Vision Transformers (ViTs) are now challenging traditional CNNs in image classification and object detection tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  - Generative Models:
&lt;/h3&gt;

&lt;p&gt;Transformers are the backbone of powerful generative models like GPT-3, which can generate human-like text and code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Transformers
&lt;/h2&gt;

&lt;p&gt;For developers eager to dive into transformers, several frameworks and libraries are available:&lt;/p&gt;

&lt;h3&gt;
  
  
  - Hugging Face's Transformers:
&lt;/h3&gt;

&lt;p&gt;A popular library offering pre-trained transformer models and tools for fine-tuning them on specific tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  - PyTorch and TensorFlow:
&lt;/h3&gt;

&lt;p&gt;Both frameworks have extensive support for building and training transformer models from scratch.&lt;/p&gt;
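&lt;p&gt;On the PyTorch side, for instance, a complete encoder layer (self-attention, feed-forward, layer norm, and residual connections) comes built in via &lt;code&gt;torch.nn.TransformerEncoderLayer&lt;/code&gt;; the dimensions below are arbitrary example values:&lt;/p&gt;

```python
import torch

# One Transformer encoder layer: self-attention + feed-forward,
# with layer normalization and residual connections included.
layer = torch.nn.TransformerEncoderLayer(
    d_model=16, nhead=4, batch_first=True
)

x = torch.randn(2, 5, 16)  # (batch, sequence length, embedding dim)
out = layer(x)             # output keeps the input shape
```

&lt;p&gt;Stacking such layers with &lt;code&gt;torch.nn.TransformerEncoder&lt;/code&gt; gives the full encoder from the original paper.&lt;/p&gt;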

&lt;h3&gt;
  
  
  - OpenAI's GPT Models:
&lt;/h3&gt;

&lt;p&gt;Explore language models built on the transformer architecture, which can be fine-tuned for specific applications.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Transformers have unleashed the true potential of deep learning by teaching models to focus on the right parts of data, transforming how we understand language, vision, and beyond.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>transformer</category>
      <category>ai</category>
      <category>neuralnetwork</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
