<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vivek Vohra</title>
    <description>The latest articles on DEV Community by Vivek Vohra (@vivekvohra).</description>
    <link>https://dev.to/vivekvohra</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1422753%2F000149b3-ce47-4349-8a4f-1537fe8d5ada.jpg</url>
      <title>DEV Community: Vivek Vohra</title>
      <link>https://dev.to/vivekvohra</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vivekvohra"/>
    <language>en</language>
    <item>
      <title>How I Hosted My First Production Site for ₹0: AWS + Cloudflare Setup</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Fri, 03 Apr 2026 14:09:40 +0000</pubDate>
      <link>https://dev.to/vivekvohra/how-i-hosted-my-first-production-site-for-0-aws-cloudflare-setup-ane</link>
      <guid>https://dev.to/vivekvohra/how-i-hosted-my-first-production-site-for-0-aws-cloudflare-setup-ane</guid>
      <description>&lt;p&gt;Running a project on localhost is straightforward. But the moment you try to deploy something real : with a custom domain, HTTPS, and a scalable architecture ,you realize how many moving parts are actually involved.&lt;/p&gt;

&lt;p&gt;For my project &lt;strong&gt;IPlusFlow&lt;/strong&gt; (an EEG-based Alzheimer’s detection platform), I needed a setup that was secure, scalable, and mirrored real industry architecture. Most importantly, I wanted to run it entirely for free.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌍 &lt;strong&gt;Live Site:&lt;/strong&gt; &lt;a href="https://eeg.iplusflow.com/" rel="noopener noreferrer"&gt;eeg.iplusflow.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;💻 &lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/vivekvohra/EEG-CNN-BiLSTM" rel="noopener noreferrer"&gt;EEG-CNN-BiLSTM&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a breakdown of how I wired AWS and Cloudflare together, the architectural decisions I made, and the debugging lessons I learned along the way.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Architecture Strategy
&lt;/h2&gt;

&lt;p&gt;The goal was to build a production-style design that strictly separated storage, compute, and traffic routing. Putting everything in one place is easier, but separating concerns scales better.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; FastAPI running on AWS Lambda via API Gateway&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Static files hosted on Amazon S3&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDN:&lt;/strong&gt; AWS CloudFront&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNS &amp;amp; Edge:&lt;/strong&gt; Cloudflare&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Bypassing Fixed Costs
&lt;/h2&gt;

&lt;p&gt;The standard AWS approach for DNS management is Route 53. It works seamlessly, but it introduces a fixed monthly cost. To keep my infrastructure at exactly ₹0, I bypassed Route 53 entirely and handled DNS through Cloudflare.&lt;/p&gt;

&lt;p&gt;I purchased the domain, pointed the nameservers to Cloudflare, and managed all routing for subdomains like &lt;code&gt;eeg.iplusflow.com&lt;/code&gt; and &lt;code&gt;api.iplusflow.com&lt;/code&gt; from there. It was a simple architectural decision, but it was the most impactful step in eliminating recurring overhead.&lt;/p&gt;




&lt;h2&gt;
  
  
  Challenges and Learnings
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. The AWS SSL Region Trap
&lt;/h3&gt;

&lt;p&gt;Because my backend was deployed in the Mumbai region (&lt;code&gt;ap-south-1&lt;/code&gt;), I generated a wildcard certificate (&lt;code&gt;*.iplusflow.com&lt;/code&gt;) there. It attached to the API Gateway perfectly.&lt;/p&gt;

&lt;p&gt;However, when I tried to attach the same certificate to CloudFront for the frontend, it simply didn't appear in the dropdown. After digging through the documentation, I learned that &lt;strong&gt;CloudFront strictly requires certificates to be generated in &lt;code&gt;us-east-1&lt;/code&gt; (N. Virginia)&lt;/strong&gt;, regardless of where your actual infrastructure lives. I had to provision a duplicate certificate in &lt;code&gt;us-east-1&lt;/code&gt; specifically for the CDN. It is a strict AWS requirement that isn't immediately obvious when you start.&lt;/p&gt;
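&lt;p&gt;For illustration, the duplicate certificate request can also be scripted with boto3. This is a hedged sketch (I used the console, and the domain comes from this post); the key detail is pinning the client to &lt;code&gt;us-east-1&lt;/code&gt;:&lt;/p&gt;

```python
# CloudFront only accepts ACM certificates issued in us-east-1, so the
# ACM client must be pinned there, regardless of where the rest of the
# stack lives (ap-south-1 in this post).
CLOUDFRONT_CERT_REGION = "us-east-1"

def request_cdn_certificate(domain: str = "*.iplusflow.com") -> str:
    """Request a DNS-validated wildcard certificate; returns its ARN."""
    import boto3  # local import so the sketch loads without boto3 installed

    acm = boto3.client("acm", region_name=CLOUDFRONT_CERT_REGION)
    resp = acm.request_certificate(DomainName=domain, ValidationMethod="DNS")
    return resp["CertificateArn"]
```

&lt;p&gt;The certificate for API Gateway in &lt;code&gt;ap-south-1&lt;/code&gt; is requested the same way, just with the regional client instead.&lt;/p&gt;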

&lt;h3&gt;
  
  
  2. The Cloudflare Proxy Conflict
&lt;/h3&gt;

&lt;p&gt;When adding CNAME records in Cloudflare, I initially left the proxy setting enabled (the "orange cloud"). This immediately resulted in SSL handshake failures between Cloudflare and AWS.&lt;/p&gt;

&lt;p&gt;In hindsight, the reason was clear: API Gateway and CloudFront were already terminating HTTPS using their own AWS ACM certificates. Cloudflare was attempting to sit in the middle and re-handle the SSL, creating a conflict. The fix was to switch those DNS records to &lt;strong&gt;"DNS Only" (the "grey cloud")&lt;/strong&gt;. Once the proxy was disabled, the handshake issues were resolved.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Cross-Origin (CORS) Blocks on Subdomains
&lt;/h3&gt;

&lt;p&gt;My frontend (&lt;code&gt;eeg.iplusflow.com&lt;/code&gt;) was designed to fetch presigned URLs from the backend (&lt;code&gt;api.iplusflow.com&lt;/code&gt;) to upload EEG files directly to S3. The API responded perfectly, but the browser blocked the actual file uploads.&lt;/p&gt;

&lt;p&gt;The issue was CORS. Browsers treat different subdomains as entirely different origins. To fix this, I had to explicitly update the S3 bucket's CORS configuration to allow &lt;code&gt;Origin: https://eeg.iplusflow.com&lt;/code&gt; and restrict the method to &lt;code&gt;PUT&lt;/code&gt;. After applying that specific policy, the client-side uploads worked.&lt;/p&gt;
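&lt;p&gt;As a rough sketch (not the exact console steps I used), the same CORS rule can be applied with boto3. The origin matches this post; the bucket name is a placeholder:&lt;/p&gt;

```python
# Sketch of the S3 CORS rule described above: allow PUT uploads only
# from the frontend origin.
CORS_CONFIG = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://eeg.iplusflow.com"],
            "AllowedMethods": ["PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

def apply_cors(bucket: str = "your-upload-bucket") -> None:
    """Apply the CORS configuration (needs AWS credentials at call time)."""
    import boto3  # local import keeps the sketch importable without boto3

    boto3.client("s3").put_bucket_cors(
        Bucket=bucket, CORSConfiguration=CORS_CONFIG
    )
```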




&lt;h2&gt;
  
  
  Why Use Both Cloudflare and CloudFront?
&lt;/h2&gt;

&lt;p&gt;It might seem redundant to use CloudFront when Cloudflare already provides CDN capabilities. I considered dropping CloudFront, but keeping it solved two critical architectural problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;S3 Security:&lt;/strong&gt; Without CloudFront, the S3 bucket would have to be public to serve the frontend files. By using CloudFront, I could keep the bucket entirely private, restricting access exclusively through the CDN via Origin Access Control (OAC).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSL Compatibility:&lt;/strong&gt; CloudFront integrates flawlessly with AWS certificates. Trying to route Cloudflare directly to a private S3 bucket often introduces complex SSL and routing headaches.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The final flow cleanly separated responsibilities: Cloudflare handles DNS, CloudFront securely fetches from S3, and the S3 bucket remains locked down.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This setup achieved exactly what I needed: a zero-cost infrastructure, HTTPS across both frontend and backend, and a clean separation of layers.&lt;/p&gt;

&lt;p&gt;More than just getting a project live, this process forced me to understand the lifecycle of a web request: how DNS resolves, where SSL terminates, and how CDNs interact with private storage. If you are deploying your first serious project, I highly recommend skipping the one-click hosting platforms and trying a manual setup like this. It takes more effort, but the technical understanding you walk away with is invaluable.&lt;/p&gt;

&lt;p&gt;If you are building something similar and hit a wall, feel free to reach out. I am still learning this myself, and documenting the process is part of that journey.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>iPlusCode - a small Chrome extension to make Codeforces a bit nicer</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Sat, 01 Nov 2025 20:57:48 +0000</pubDate>
      <link>https://dev.to/vivekvohra/ipluscode-a-small-chrome-extension-to-make-codeforces-a-bit-nicer-4cme</link>
      <guid>https://dev.to/vivekvohra/ipluscode-a-small-chrome-extension-to-make-codeforces-a-bit-nicer-4cme</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsue90oltfk17a4itys4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsue90oltfk17a4itys4o.png" alt=" " width="128" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  iPlusCode - Bringing Peer Learning to Codeforces Practice
&lt;/h2&gt;

&lt;p&gt;I spend a lot of time on &lt;strong&gt;Codeforces&lt;/strong&gt;, and one thing I always wanted was:&lt;br&gt;
“show me how my friends solved this exact problem, right here, without leaving the page.”&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;iPlusCode&lt;/strong&gt; — a Chrome extension that sits on Codeforces pages and adds some extra tools for practice.&lt;/p&gt;

&lt;p&gt;In this post, I’ll share what the extension does, how it works (especially the “Friends Accepted Codes” feature), and some lessons I learned building it.&lt;/p&gt;


&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt; &lt;strong&gt;Bookmark problems&lt;/strong&gt; from Codeforces&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Add notes&lt;/strong&gt; to a problem (saved in Chrome sync)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Filter/sort&lt;/strong&gt; problems by rating or tags&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Sync solved problems&lt;/strong&gt; using the Codeforces API&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Hide tags&lt;/strong&gt; if you don’t want spoilers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Main thing&lt;/strong&gt;: &lt;strong&gt;view friends’ accepted codes in a modal&lt;/strong&gt; on the same page.&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  👥 Friends Solutions – Learning from Peers
&lt;/h2&gt;

&lt;p&gt;One unique aspect of competitive programming is learning from others.&lt;br&gt;
I often found myself checking how a friend solved a problem after a contest.&lt;br&gt;
&lt;strong&gt;iPlusCode&lt;/strong&gt; makes this easier by integrating friends’ solutions right into the problem page.&lt;/p&gt;

&lt;p&gt;When you click &lt;strong&gt;“Show Codes”&lt;/strong&gt; under &lt;strong&gt;Friends Accepted Codes&lt;/strong&gt;, the extension fetches the latest accepted submission for that problem from each of your Codeforces friends (up to a limit).&lt;br&gt;
It then opens a neat modal dialog on the page, showing each friend’s username and their code solution, with syntax highlighting and line numbers.&lt;br&gt;
You can scroll through and see how different people approached the same task - without leaving the page or manually searching on Codeforces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygshktm8ube92k1tz717.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygshktm8ube92k1tz717.png" alt="Modal showing friends’ code solutions for a problem" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h3&gt;
  
  
  🔎 How does it know who my friends are?
&lt;/h3&gt;

&lt;p&gt;If you’re logged into Codeforces, the extension can retrieve your friend list.&lt;br&gt;
On setup, iPlusCode scrapes your &lt;code&gt;/friends&lt;/code&gt; page (using your session cookies) to get all the handles in your “My Friends” list.&lt;br&gt;
It stores this list (up to 20 friends) in Chrome Sync storage as &lt;code&gt;cf_friends&lt;/code&gt;.&lt;/p&gt;


&lt;h3&gt;
  
  
  ⚙️ How does it fetch the code?
&lt;/h3&gt;

&lt;p&gt;iPlusCode uses a mix of the official API and careful HTML parsing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;For each friend, it first calls:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   https://codeforces.com/api/contest.status?contestId=...&amp;amp;handle=...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This checks if the friend has an accepted submission (“OK”) for the current problem.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;If nothing is found, it falls back to:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   https://codeforces.com/api/user.status?handle=...&amp;amp;from=1&amp;amp;count=1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Once it finds the submission ID, the extension fetches the &lt;strong&gt;submission page&lt;/strong&gt; directly - the same one you’d see by clicking &lt;em&gt;“View Submission”&lt;/em&gt; on Codeforces.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It parses out the code text from the page’s DOM and displays it in a modal using &lt;strong&gt;Google Prettify&lt;/strong&gt; for syntax highlighting.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Behind the scenes, iPlusCode fetches &lt;strong&gt;only when you click&lt;/strong&gt;, not on every page load - and for good reason.&lt;/p&gt;
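&lt;p&gt;The selection step above can be sketched like this (in Python for illustration; the extension itself is JavaScript, and the field names follow the public Codeforces API):&lt;/p&gt;

```python
# Given submissions from contest.status or user.status, find the newest
# accepted one for the current problem. Field names (id, verdict,
# problem.contestId, problem.index) follow the Codeforces API.
def latest_accepted(submissions, contest_id, index):
    """Return the accepted submission with the highest id, or None."""
    accepted = [
        s for s in submissions
        if s.get("verdict") == "OK"
        and s.get("problem", {}).get("contestId") == contest_id
        and s.get("problem", {}).get("index") == index
    ]
    return max(accepted, key=lambda s: s["id"], default=None)

# Hypothetical response data, trimmed to the fields used above:
demo = [
    {"id": 5, "verdict": "OK", "problem": {"contestId": 1, "index": "A"}},
    {"id": 4, "verdict": "WRONG_ANSWER", "problem": {"contestId": 1, "index": "A"}},
    {"id": 3, "verdict": "OK", "problem": {"contestId": 1, "index": "A"}},
]
print(latest_accepted(demo, 1, "A")["id"])  # prints 5
```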


&lt;h2&gt;
  
  
  🧠 Lessons Learned
&lt;/h2&gt;

&lt;p&gt;When I first implemented this feature, I thought:&lt;br&gt;
&lt;em&gt;“What if the extension automatically pre-fetches my friends’ solutions in the background every time I open a problem page?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Sounds nice, right? No waiting when you click “Show Codes.”&lt;br&gt;
Well… it didn’t go as planned.&lt;/p&gt;

&lt;p&gt;During early testing, Codeforces started showing &lt;strong&gt;unusual activity warnings&lt;/strong&gt;.&lt;br&gt;
Even with a &lt;strong&gt;10-second retry delay&lt;/strong&gt;, the auto-fetch system looked like a crawler.&lt;br&gt;
Eventually, it triggered anti-bot protection, and I got a &lt;strong&gt;temporary suspension&lt;/strong&gt; 😅.&lt;/p&gt;

&lt;p&gt;That was the point I realized: &lt;strong&gt;don’t crawl Codeforces automatically&lt;/strong&gt;.&lt;br&gt;
Here’s what I changed and learned:&lt;/p&gt;


&lt;h3&gt;
  
  
  ✅ Takeaway 1: Use on-demand fetching
&lt;/h3&gt;

&lt;p&gt;Now, iPlusCode fetches data &lt;strong&gt;only when you click “Show Codes.”&lt;/strong&gt;&lt;br&gt;
No background scraping.&lt;br&gt;
It respects Codeforces’ rate limits and makes fewer, intentional requests.&lt;/p&gt;


&lt;h3&gt;
  
  
  ✅ Takeaway 2: Throttle and limit requests
&lt;/h3&gt;

&lt;p&gt;Even on-demand, I limited it to about 20 friends and added a short delay (&lt;code&gt;sleep(700)&lt;/code&gt;) between each fetch.&lt;br&gt;
This keeps it human-like, avoids spamming, and still feels smooth for the user.&lt;/p&gt;


&lt;h3&gt;
  
  
  ✅ Takeaway 3: Cache results
&lt;/h3&gt;

&lt;p&gt;After fetching, iPlusCode caches the results for about &lt;strong&gt;10 minutes&lt;/strong&gt; in Chrome Sync:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;friendCache&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;problemKey&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you click again within that time, it just shows cached data instead of hitting the API again.&lt;/p&gt;
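&lt;p&gt;The cache check itself is just a timestamp comparison. A minimal Python sketch of the same pattern (names are illustrative; the real cache is JavaScript and lives in Chrome Sync):&lt;/p&gt;

```python
import time

# Same shape as the extension's cache:
# friendCache[problemKey] = { timestamp, results }
CACHE_TTL = 10 * 60  # roughly 10 minutes, as described above

friend_cache = {}

def put_cache(problem_key, results, now=None):
    """Store results for a problem along with the current timestamp."""
    ts = time.time() if now is None else now
    friend_cache[problem_key] = {"timestamp": ts, "results": results}

def get_cached(problem_key, now=None):
    """Return cached results if still fresh, otherwise None."""
    ts = time.time() if now is None else now
    entry = friend_cache.get(problem_key)
    if entry and ts - entry["timestamp"] < CACHE_TTL:
        return entry["results"]
    return None  # stale or absent: caller should re-fetch
```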




&lt;h3&gt;
  
  
  ✅ Takeaway 4: Be careful with HTML parsing
&lt;/h3&gt;

&lt;p&gt;Codeforces doesn’t have APIs for everything (like friends list or raw code),&lt;br&gt;
so I had to rely on parsing the page.&lt;br&gt;
I made sure to only read specific elements (like the “My Friends” table) and handle missing cases gracefully.&lt;br&gt;
That way, even if Codeforces changes its layout slightly, the extension won’t break completely.&lt;/p&gt;




&lt;h3&gt;
  
  
  ✅ Takeaway 5: Chrome Extension quirks
&lt;/h3&gt;

&lt;p&gt;This was built using &lt;strong&gt;Manifest V3&lt;/strong&gt;, so I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;chrome.storage.sync&lt;/code&gt; for saving data&lt;/li&gt;
&lt;li&gt;secure DOM creation (no direct HTML injection)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fetch(..., { credentials: 'include' })&lt;/code&gt; to use the logged-in Codeforces session&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last trick let the extension access private data (like submissions) &lt;strong&gt;without needing API keys&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Building &lt;strong&gt;iPlusCode&lt;/strong&gt; was a rewarding project that enhanced my own practice on Codeforces.&lt;br&gt;
Now I can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bookmark problems I want to revisit&lt;/li&gt;
&lt;li&gt;Keep track of what I’ve solved&lt;/li&gt;
&lt;li&gt;Take notes for future reference&lt;/li&gt;
&lt;li&gt;Peek at how my friends solved the same problem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I can see different coding styles and ideas instantly; it’s like learning from friends without leaving the page.&lt;/p&gt;




&lt;p&gt;If you’d like to try it:&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://chromewebstore.google.com/detail/dldgiedjpmpfakogeeipicafjngnefej?utm_source=item-share-cb" rel="noopener noreferrer"&gt;Chrome Web Store – iPlusCode&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
💻 &lt;strong&gt;&lt;a href="https://github.com/vivekvohra/iPlusCode" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’m open to feedback and ideas for new features.&lt;br&gt;
Hope this helps make your Codeforces journey more organized and fun.&lt;br&gt;
Happy coding!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>learning</category>
      <category>programming</category>
    </item>
    <item>
      <title>Deploying a CNN-BiLSTM Model on AWS Lambda</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Thu, 18 Sep 2025 13:23:19 +0000</pubDate>
      <link>https://dev.to/vivekvohra/deploying-a-cnn-bilstm-model-on-aws-lambda-4kcj</link>
      <guid>https://dev.to/vivekvohra/deploying-a-cnn-bilstm-model-on-aws-lambda-4kcj</guid>
      <description>&lt;h1&gt;
  
  
  Deploying a CNN-BiLSTM Model on AWS Lambda
&lt;/h1&gt;

&lt;p&gt;Deploying my deep learning model to production sounded straightforward at first. I had a Convolutional Neural Network + Bidirectional LSTM (CNN-BiLSTM) model for EEG-based Alzheimer’s detection, and I wanted to expose it via a serverless API on AWS. &lt;br&gt;
But doing so led to a series of mistakes. In this post, I document them for my future self, and for anyone trying this for the first time who gets stuck.&lt;/p&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Before going into the mistakes, let’s briefly discuss the deep learning model. It combines CNN and BiLSTM for Alzheimer’s detection based on EEG data.&lt;/p&gt;

&lt;p&gt;The app lets users upload EEG &lt;code&gt;.set&lt;/code&gt; files and get a prediction (Alzheimer’s, frontotemporal dementia, or healthy) with a confidence score. Uploaded files go directly to S3, and then a serverless Lambda (containerized with TensorFlow + MNE) pulls the file, preprocesses it, runs inference, and returns JSON to the browser.&lt;/p&gt;
&lt;h2&gt;
  
  
  High-Level Architecture
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The user uploads the EEG file to the browser.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A presigned URL&lt;/strong&gt; is issued so the browser can upload directly to S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The browser uploads the file to S3 (no server is in the middle).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Browser calls &lt;code&gt;/predict&lt;/code&gt;, passing the S3 object key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda&lt;/strong&gt; (Dockerized TF + MNE) downloads, preprocesses, and infers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;JSON&lt;/strong&gt; response returns: &lt;code&gt;{ predicted_class, confidence }&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0pngzg80ci5cqrnz60x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0pngzg80ci5cqrnz60x.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  AWS setup (essentials)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;S3 bucket:&lt;/strong&gt; Private; CORS enabled for the frontend domain; object PUT via presigned URLs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IAM:&lt;/strong&gt; Execution role for Lambda with S3 read (and PutObject for generating presigned URLs).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ECR:&lt;/strong&gt; Push the container image (TensorFlow + MNE + model).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda (container):&lt;/strong&gt; Adequate memory (e.g., 2–4 GB+), timeout (e.g., 120s+), env var for BUCKET name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway or Lambda Function URL:&lt;/strong&gt; Public HTTPS endpoint with &lt;code&gt;CORS&lt;/code&gt; enabled.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  What is a Presigned URL?
&lt;/h2&gt;

&lt;p&gt;Now, before moving on, what exactly is a presigned URL?&lt;/p&gt;

&lt;p&gt;So, an S3 presigned URL is a temporary, signed link that lets someone upload or download a file directly to/from S3 without needing AWS credentials.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Usually, only IAM users/roles with the right S3 permissions can upload files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;But the browser (frontend) shouldn’t have AWS keys hardcoded (that’s unsafe).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instead, the backend Lambda/Flask app generates a presigned URL with an expiry (1 hour).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The browser then uploads the file directly to S3 using that link, skipping the backend.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this, we have the &lt;code&gt;/presign&lt;/code&gt; route in &lt;code&gt;lambda_function.py&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;So before uploading, the frontend asks the backend for a temporary signed upload link, i.e.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET /presign?key=uploads/myfile.set
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The backend then generates the temporary URL through boto3 and returns it as JSON.&lt;/p&gt;

&lt;p&gt;The frontend (browser) then uses that URL to upload directly to S3.&lt;/p&gt;
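&lt;p&gt;A minimal sketch of what such a &lt;code&gt;/presign&lt;/code&gt; handler’s core can look like with boto3 (bucket and key names are placeholders, not my exact code):&lt;/p&gt;

```python
# Sketch of a /presign handler's core, using the standard boto3 call.
def presign_params(bucket: str, key: str) -> dict:
    """Parameters identifying the object the client may PUT."""
    return {"Bucket": bucket, "Key": key}

def make_upload_url(bucket: str, key: str, expires: int = 3600) -> str:
    """Return a presigned PUT URL valid for `expires` seconds (1 hour here)."""
    import boto3  # local import so the sketch loads without boto3 installed

    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "put_object",
        Params=presign_params(bucket, key),
        ExpiresIn=expires,
    )
```

&lt;p&gt;The handler just returns that URL as JSON, and the browser does the &lt;code&gt;PUT&lt;/code&gt; itself.&lt;/p&gt;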

&lt;p&gt;You might ask: why go through this roundabout route when we could have just uploaded the file through Lambda?&lt;/p&gt;

&lt;p&gt;Because the API Gateway payload cap is 10 MB. Any upload bigger than that (which is the case here) gets rejected, whereas the direct-to-S3 method is built for scale.&lt;/p&gt;
&lt;h2&gt;
  
  
  Mistakes
&lt;/h2&gt;

&lt;p&gt;Even with an online guide and ChatGPT’s help, I made several mistakes; here is the list:&lt;/p&gt;
&lt;h3&gt;
  
  
  Mistake 1: ECR “image index” vs “image”
&lt;/h3&gt;

&lt;p&gt;When pushing the image from my local machine to ECR, I used &lt;code&gt;Docker Buildx&lt;/code&gt; by default, which produces an artifact of type “Image Index” in ECR. However, Lambda only accepts a single-image manifest (linux/amd64), which results in an error.&lt;/p&gt;

&lt;p&gt;To fix this, I forced the use of the classic builder, so the result is a single manifest (not an index).&lt;/p&gt;
&lt;h3&gt;
  
  
  Mistake 2: Use compatible versions
&lt;/h3&gt;

&lt;p&gt;Newer &lt;code&gt;TensorFlow (2.17+)&lt;/code&gt; pulls in &lt;code&gt;Keras 3&lt;/code&gt;, which needs &lt;code&gt;optree&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The AWS base Lambda images don’t ship a prebuilt &lt;code&gt;optree&lt;/code&gt;, and &lt;code&gt;optree&lt;/code&gt; contains &lt;code&gt;C++&lt;/code&gt; code. So when you &lt;code&gt;pip install tensorflow&lt;/code&gt;, pip tries to compile &lt;code&gt;optree&lt;/code&gt; from source, which requires &lt;code&gt;gcc&lt;/code&gt;, &lt;code&gt;g++&lt;/code&gt;, &lt;code&gt;cmake&lt;/code&gt;, and a &lt;code&gt;Unix Makefiles&lt;/code&gt; generator. None of these are in the standard Lambda image.&lt;br&gt;
This leads to a compilation error, and installing the toolchain would make the Docker image much bigger.&lt;br&gt;
The fix: I pinned &lt;code&gt;TensorFlow 2.15&lt;/code&gt; (Keras v2 bundled), which has no Keras 3 and thus no &lt;code&gt;optree&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Mistake 3: Missing Permissions for S3 Access
&lt;/h3&gt;

&lt;p&gt;While setting up the IAM role, I didn’t realize that I needed to explicitly allow the Lambda’s execution role to read/write the bucket. After some head-scratching and checking error logs, it finally dawned on me.&lt;/p&gt;

&lt;p&gt;So I updated my Lambda’s execution role to include S3 access permissions (allowing &lt;code&gt;GetObject&lt;/code&gt; and &lt;code&gt;PutObject&lt;/code&gt; on my bucket). Only then could my function fetch the uploaded EEG files from S3 and save results when needed.&lt;/p&gt;
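&lt;p&gt;For reference, those permissions boil down to an IAM policy statement like this sketch (the bucket ARN is a placeholder for your own bucket):&lt;/p&gt;

```python
# The execution-role permissions described above, as an IAM policy
# document. Attach this to the Lambda's execution role, scoped to the
# objects in your upload bucket.
S3_ACCESS_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
        }
    ],
}
```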
&lt;h3&gt;
  
  
  Mistake 4: Forgetting About CORS (Cross-Origin Resource Sharing)
&lt;/h3&gt;

&lt;p&gt;My browser would try to call the Lambda’s API Gateway endpoint, but the request was blocked by the browser’s CORS policy.&lt;/p&gt;

&lt;p&gt;This was super annoying because the error isn’t from my code or AWS, but from the browser for security reasons. The culprit was me forgetting to enable CORS on my API Gateway and S3 bucket.&lt;/p&gt;

&lt;p&gt;I eventually discovered that I needed to configure CORS so that my static site could call the API and upload to S3. In fact, I even added a note in my app’s footer reminding future me to do this: “Make sure CORS is enabled on API Gateway and your S3 bucket.”&lt;/p&gt;

&lt;p&gt;After enabling CORS in API Gateway (allowing my domain/localhost and the necessary HTTP methods) and adding an appropriate CORS policy on the S3 bucket, the front-end and back-end finally started communicating properly.&lt;/p&gt;
&lt;h3&gt;
  
  
  Mistake 5: Misconfiguring the API Gateway (AKA "Why Am I Getting 404?")
&lt;/h3&gt;

&lt;p&gt;After all this, when I hit my API, I got &lt;code&gt;HTTP 404&lt;/code&gt; errors. So I double-checked my Lambda code: the functions for &lt;code&gt;/predict&lt;/code&gt; and &lt;code&gt;/health&lt;/code&gt; existed.&lt;/p&gt;

&lt;p&gt;But then I realized that the mistake was in my API Gateway configuration. I had not set up the resource paths or integrations properly for the routes.&lt;br&gt;
API Gateway wasn’t forwarding &lt;code&gt;/predict&lt;/code&gt; or &lt;code&gt;/health&lt;/code&gt; to my Lambda at all, hence the 404s. &lt;br&gt;
So when I hit &lt;code&gt;…/health&lt;/code&gt; from the browser, the API Gateway was actually expecting something like&lt;code&gt;…/default/health&lt;/code&gt;, which obviously didn’t exist.&lt;/p&gt;

&lt;p&gt;Once I spotted this, I went back into the AWS console, fixed the route definitions (making sure they matched what my client was calling), and deployed the API to the correct stage.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While deploying, I made several mistakes involving Docker manifests, dependency mismatches, missing IAM permissions, CORS errors, and API Gateway misconfigurations. Each of these slowed me down, but also forced me to understand how AWS Lambda, S3, and API Gateway really work together.&lt;/p&gt;

&lt;p&gt;The final setup is simple for the user—upload an EEG file, wait a few seconds, and get a prediction with confidence. But behind the scenes, there’s a careful system: S3 for storage, presigned URLs for secure uploads, Lambda containers for inference, and API Gateway as the bridge.&lt;/p&gt;

&lt;p&gt;So if you’re taking your first model to the cloud, expect some bumps—but also expect to come out the other side with much sharper engineering instincts. &lt;br&gt;
👉 You can try the deployed app here: &lt;a href="https://az-eeg-site-109598917777.s3.ap-south-1.amazonaws.com/index.html" rel="noopener noreferrer"&gt;Link&lt;/a&gt;&lt;br&gt;
👉 Code is available on GitHub&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/vivekvohra" rel="noopener noreferrer"&gt;
        vivekvohra
      &lt;/a&gt; / &lt;a href="https://github.com/vivekvohra/EEG-CNN-BiLSTM" rel="noopener noreferrer"&gt;
        EEG-CNN-BiLSTM
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Deep Learning for Alzheimer’s Detection from EEG Data 
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;EEG-CNN-BiLSTM (AWS Lambda + S3 demo)&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;End-to-end demo that serves a Keras &lt;strong&gt;CNN-BiLSTM&lt;/strong&gt; EEG classifier from &lt;strong&gt;AWS Lambda (container image)&lt;/strong&gt; with a &lt;strong&gt;static frontend on S3&lt;/strong&gt;.
You can upload an EEGLAB &lt;code&gt;.set&lt;/code&gt; file (or run a demo prediction), and get class probabilities back.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Live&lt;/strong&gt;: &lt;a href="https://az-eeg-site-109598917777.s3.ap-south-1.amazonaws.com/index.html" rel="nofollow noopener noreferrer"&gt;https://az-eeg-site-109598917777.s3.ap-south-1.amazonaws.com/index.html&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Blog&lt;/strong&gt;: &lt;a href="https://dev.to/vivekvohra/deploying-a-cnn-bilstm-model-on-aws-lambda-4kcj" rel="nofollow"&gt;https://dev.to/vivekvohra/deploying-a-cnn-bilstm-model-on-aws-lambda-4kcj&lt;/a&gt;&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;EG-CNN-BiLSTM/
├── backend/                  # Lambda + Docker code
│   ├── lambda_function.py
│   ├── preprocess.py
│   ├── Dockerfile.dockerfile
│   ├── requirements.txt
│   └── model/
│       └── alzheimer_eeg_cnn_bilstm_model.h5
├── frontend/                 # S3 static site
│   ├── index.html
│   ├── app.js
│   └── style.css
├── research/                 # papers &amp;amp; notebooks
│   ├── train.ipynb
│   └── conference.pdf
├── LICENSE
└── README.md
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Demo video&lt;/h2&gt;
&lt;/div&gt;

  
    
    

    &lt;span class="m-1"&gt;Recording.2025-09-18.003432.mp4&lt;/span&gt;
    
  

  

  



&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;What’s inside&lt;/h2&gt;

&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Model&lt;/strong&gt;: Keras &lt;code&gt;.h5&lt;/code&gt; CNN-BiLSTM saved with TF 2.x (pinned to TF 2.15 at runtime).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Backend&lt;/strong&gt; (&lt;code&gt;backend/&lt;/code&gt;):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;lambda_function.py&lt;/code&gt; – Flask app wrapped by &lt;code&gt;serverless-wsgi&lt;/code&gt; for API Gateway HTTP API.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;preprocess.py&lt;/code&gt; – loads &lt;code&gt;.set&lt;/code&gt; and prepares input…&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/vivekvohra/EEG-CNN-BiLSTM" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>cloud</category>
      <category>beginners</category>
      <category>machinelearning</category>
      <category>aws</category>
    </item>
    <item>
      <title>Tideman Voting Algorithm: A Graph-Based Approach to Elections</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Sat, 13 Sep 2025 16:02:53 +0000</pubDate>
      <link>https://dev.to/vivekvohra/tideman-voting-algorithm-a-graph-based-approach-to-elections-330c</link>
      <guid>https://dev.to/vivekvohra/tideman-voting-algorithm-a-graph-based-approach-to-elections-330c</guid>
      <description>&lt;h1&gt;
  
  
  Understanding the Tideman Voting Algorithm: A Graph-Based Approach
&lt;/h1&gt;

&lt;p&gt;The Tideman algorithm, also known as "ranked pairs," is a sophisticated voting system that leverages graph theory to determine election winners. By allowing voters to rank candidates in order of preference, it captures more nuanced voter intentions than simple plurality voting systems. This blog post explores the theoretical foundations of the Tideman algorithm, focusing on its graph-based approach and key mechanisms.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Graph Theory Foundation
&lt;/h2&gt;

&lt;p&gt;At its core, the Tideman algorithm represents an election as a directed graph:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nodes&lt;/strong&gt; represent candidates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edges&lt;/strong&gt; represent preferences of one candidate over another&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach creates what's called an &lt;strong&gt;adjacency matrix&lt;/strong&gt; where locked preferences between candidates are recorded. In the matrix example shown in the image, "true" values indicate a locked preference (or edge) between candidates.&lt;/p&gt;

&lt;p&gt;For example, in the matrix shown, we can see that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Candidate 0 is preferred over candidate 1&lt;/li&gt;
&lt;li&gt;Candidate 2 is preferred over candidate 0&lt;/li&gt;
&lt;li&gt;Candidate 2 is preferred over candidate 1&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These locked preferences collectively form the final graph that determines the winner.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Preferences Array
&lt;/h2&gt;

&lt;p&gt;The &lt;strong&gt;preferences array&lt;/strong&gt; is a fundamental data structure in the Tideman algorithm. It records how many voters prefer one candidate over another:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;preferences[i][j]&lt;/code&gt; represents the number of voters who prefer candidate &lt;code&gt;i&lt;/code&gt; over candidate &lt;code&gt;j&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For an election with three candidates (represented as 0, 1, and 2), the preferences array might look like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;0&lt;/th&gt;
&lt;th&gt;1&lt;/th&gt;
&lt;th&gt;2&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In this example, 5 voters prefer candidate 0 over candidate 1, 3 voters prefer candidate 0 over candidate 2, and so on. The diagonal cells remain empty because a candidate cannot be compared with themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Recording Voter Preferences with the Ranks Array
&lt;/h2&gt;

&lt;p&gt;Tideman tracks individual voter preferences using a &lt;strong&gt;ranks array&lt;/strong&gt;. For each voter, this array stores their candidate rankings where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The index represents the rank (0 being the highest preference)&lt;/li&gt;
&lt;li&gt;The value represents the candidate ID&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, in an election with Alice (0), Bob (1), Charlie (2), and David (3) as candidates, if a voter prefers Bob most, followed by Alice, then David, and finally Charlie, their ranks array would be:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ranks[0] = 1, ranks[1] = 0, ranks[2] = 3, ranks[3] = 2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This means at rank 0 (highest preference), they chose candidate 1 (Bob), at rank 1 they chose candidate 0 (Alice), and so on.&lt;/p&gt;
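As a minimal sketch (the function and variable names here are illustrative, not from a specific implementation), recording one voter's ranks array into the preferences matrix might look like:

```python
# Record one voter's ballot into the pairwise preferences matrix.
# ranks[i] is the candidate the voter placed at rank i (0 = highest).
def record_preferences(ranks, preferences):
    n = len(ranks)
    for i in range(n):
        for j in range(i + 1, n):
            # The candidate at rank i is preferred over every
            # candidate the voter ranked below them.
            preferences[ranks[i]][ranks[j]] += 1

# Candidates: Alice (0), Bob (1), Charlie (2), David (3)
preferences = [[0] * 4 for _ in range(4)]
ranks = [1, 0, 3, 2]   # Bob > Alice > David > Charlie
record_preferences(ranks, preferences)
print(preferences[1][0])   # 1: this voter prefers Bob over Alice
```

Running this over every ballot fills in the full preferences array described above.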

&lt;h2&gt;
  
  
  Sorting Candidate Pairs by Strength of Victory
&lt;/h2&gt;

&lt;p&gt;After collecting all votes, the algorithm creates "pairs" of candidates where one is preferred over another. These pairs are then sorted by strength of preference (how many more voters prefer one candidate over the other).&lt;/p&gt;

&lt;p&gt;The algorithm uses merge sort to efficiently organize these pairs, ensuring that the strongest preferences are considered first when building the final graph.&lt;/p&gt;
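A sketch of the pair-sorting step, assuming strength of victory is measured as the margin between the two head-to-head counts (names are illustrative; Python's built-in stable sort stands in for merge sort here):

```python
# Collect winning pairs and sort them by margin of victory, strongest first.
def sorted_pairs(preferences):
    n = len(preferences)
    pairs = []
    for i in range(n):
        for j in range(n):
            if preferences[i][j] > preferences[j][i]:
                pairs.append((i, j))
    # Strength of victory = difference between the head-to-head counts
    pairs.sort(key=lambda p: preferences[p[0]][p[1]] - preferences[p[1]][p[0]],
               reverse=True)
    return pairs

# Using the 3-candidate preferences table from earlier:
prefs = [[0, 5, 3],
         [4, 0, 6],
         [7, 2, 0]]
print(sorted_pairs(prefs))   # [(1, 2), (2, 0), (0, 1)]
```

The two pairs with margin 4 come first, and (0, 1) with margin 1 comes last.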

&lt;h2&gt;
  
  
  The Lock Pairs Function: Avoiding Cycles
&lt;/h2&gt;

&lt;p&gt;The most crucial part of Tideman is the &lt;strong&gt;lock_pairs&lt;/strong&gt; function, which determines which preferences to "lock in" to the final graph. There are two main approaches to implementing this function:&lt;/p&gt;

&lt;h3&gt;
  
  
  Method 1: Column Check
&lt;/h3&gt;

&lt;p&gt;One observation is that an acyclic graph always contains at least one candidate with no incoming edges, which appears as a column of all "false" values in the adjacency matrix. After tentatively locking each pair, we can scan every column to verify this condition still holds.&lt;/p&gt;
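A minimal sketch of the column check (illustrative only, assuming the locked graph is stored as a boolean adjacency matrix):

```python
# Return True if some column of the locked matrix is entirely False,
# i.e. at least one candidate still has no incoming locked edges.
def has_empty_column(locked):
    n = len(locked)
    return any(all(not locked[row][col] for row in range(n))
               for col in range(n))

print(has_empty_column([[False, True], [False, False]]))   # True
print(has_empty_column([[False, True], [True, False]]))    # False
```

In the second call, each candidate has an incoming edge, so the check fails.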

&lt;h3&gt;
  
  
  Method 2: Cycle Detection
&lt;/h3&gt;

&lt;p&gt;The more sophisticated approach is to check if locking a pair would create a cycle in the graph. A cycle occurs when following the preference edges leads back to the starting candidate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cycle Detection in Practice
&lt;/h2&gt;

&lt;p&gt;Let's see how cycle detection works with an example:&lt;/p&gt;

&lt;p&gt;Candidates = [a, b, c, d]&lt;br&gt;
Sorted Pairs = [(d, a), (a, b), (b, c), (c, a), (d, b), (d, c)]&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Looking at pair (d, a): No prior locked pairs, so we can lock this. Graph: &lt;code&gt;d → a&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Looking at pair (a, b): No cycle created, so we lock it. Graph: &lt;code&gt;d → a → b&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Looking at pair (b, c): No cycle created, so we lock it. Graph: &lt;code&gt;d → a → b → c&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Looking at pair (c, a): This would create a cycle &lt;code&gt;c → a → b → c&lt;/code&gt;, so we don't lock it.&lt;/li&gt;
&lt;li&gt;Looking at pair (d, b): No cycle created, so we lock it. Graph now includes &lt;code&gt;d → b&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Looking at pair (d, c): No cycle created, so we lock it. Final graph includes &lt;code&gt;d → c&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The final graph looks like:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;a&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;➚&lt;/td&gt;
&lt;td&gt;↓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;d&lt;/td&gt;
&lt;td&gt;➙&lt;/td&gt;
&lt;td&gt;b&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;➘&lt;/td&gt;
&lt;td&gt;↓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;c&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key insight is that we only skipped locking pair (c, a) because it would have created a cycle.&lt;/p&gt;
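The walkthrough above can be sketched in code. This is a minimal illustration (function names are hypothetical), assuming cycle detection via a depth-first search over the already-locked edges:

```python
# Depth-first search for a path src -> ... -> dst over locked edges.
def path_exists(locked, src, dst):
    if src == dst:
        return True
    return any(path_exists(locked, mid, dst)
               for mid in range(len(locked)) if locked[src][mid])

# Lock pairs in order of strength, skipping any pair that creates a cycle.
def lock_pairs(pairs, n):
    locked = [[False] * n for _ in range(n)]
    for winner, loser in pairs:
        # Locking winner -> loser creates a cycle only if a path
        # already leads from loser back to winner.
        if not path_exists(locked, loser, winner):
            locked[winner][loser] = True
    return locked

# Worked example: a=0, b=1, c=2, d=3
pairs = [(3, 0), (0, 1), (1, 2), (2, 0), (3, 1), (3, 2)]
locked = lock_pairs(pairs, 4)
print(locked[2][0])   # False: (c, a) skipped because it would create a cycle
print(locked[3][0])   # True: d -> a is locked
```

Only (c, a) is rejected, exactly as in the step-by-step trace.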

&lt;h2&gt;
  
  
  Finding the Winner
&lt;/h2&gt;

&lt;p&gt;After locking all valid pairs, the winner is the candidate with no incoming edges in the final graph - the "source" of the directed graph. This candidate is not beaten by any other candidate according to the locked preferences.&lt;/p&gt;

&lt;p&gt;In our example, candidate d has no incoming edges, making it the winner.&lt;/p&gt;
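A minimal sketch of the winner search (illustrative names, not from a particular implementation): the source of the graph is the candidate whose column in the locked matrix contains no True entries.

```python
# The winner is the "source": the candidate with no incoming locked edges,
# i.e. an all-False column in the locked adjacency matrix.
def find_winner(locked):
    n = len(locked)
    for candidate in range(n):
        if not any(locked[row][candidate] for row in range(n)):
            return candidate
    return None

# Locked graph from the walkthrough: a=0, b=1, c=2, d=3
locked = [[False, True,  False, False],   # a -> b
          [False, False, True,  False],   # b -> c
          [False, False, False, False],   # c has no locked wins
          [True,  True,  True,  False]]   # d -> a, d -> b, d -> c
print(find_winner(locked))   # 3: candidate d wins
```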

&lt;h2&gt;
  
  
  Theoretical Significance
&lt;/h2&gt;

&lt;p&gt;The Tideman algorithm elegantly solves several problems in voting theory:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It finds a Condorcet winner when one exists (a candidate who would win head-to-head against all others)&lt;/li&gt;
&lt;li&gt;It provides a reasonable approximation when no Condorcet winner exists&lt;/li&gt;
&lt;li&gt;It satisfies independence of clones, a fairness criterion that many other voting systems fail&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By using graph theory to represent voter preferences, Tideman creates a more complete picture of the election results than simpler methods like plurality voting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Tideman algorithm represents a sophisticated application of graph theory to voting systems. By considering the full spectrum of voter preferences and carefully constructing a directed graph that avoids cycles, it provides a more nuanced and representative election outcome.&lt;/p&gt;

&lt;p&gt;Understanding the theoretical foundations of Tideman - from preference arrays and rank recording to cycle detection and winner determination - gives us insight into how modern voting systems can better capture voter intent and produce fairer election results.&lt;/p&gt;

</description>
      <category>dsa</category>
      <category>programming</category>
      <category>computerscience</category>
      <category>cs50</category>
    </item>
    <item>
      <title>Deploying Tideman Election App on AWS EC2 with Docker</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Sat, 13 Sep 2025 15:59:03 +0000</pubDate>
      <link>https://dev.to/vivekvohra/deploying-tideman-election-app-on-aws-ec2-with-docker-1ff2</link>
      <guid>https://dev.to/vivekvohra/deploying-tideman-election-app-on-aws-ec2-with-docker-1ff2</guid>
      <description>&lt;p&gt;I recently took my Tideman app from my laptop to the cloud, and getting it live on AWS EC2 with Docker turned out to be simpler than I expected.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting Up my EC2 Instance
&lt;/h2&gt;

&lt;p&gt;First, I launched an &lt;strong&gt;AWS EC2 instance&lt;/strong&gt; with Ubuntu as the operating system. I chose a small instance (free tier eligible) since my app isn’t too heavy.&lt;/p&gt;

&lt;p&gt;After launching, I configured the instance’s &lt;strong&gt;Security Group&lt;/strong&gt; to allow the right ports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Port 22 (SSH)&lt;/strong&gt; so I could connect to the server’s terminal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Port 80 (HTTP)&lt;/strong&gt; so the web app would be accessible in a browser.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Afterwards, I installed Docker and cloned my code to the EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# On the EC2 instance&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker.io git

&lt;span class="c"&gt;# (Optional) clone your repo and move into it&lt;/span&gt;
git clone &amp;lt;your-repo-url&amp;gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &amp;lt;your-repo-folder&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Writing a Multi-Stage Dockerfile
&lt;/h2&gt;

&lt;p&gt;Now for the main part: &lt;strong&gt;containerizing the app with Docker&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I created a file named &lt;code&gt;Dockerfile&lt;/code&gt; in the project directory. Because my app has two components (C++ and Python), I used a &lt;strong&gt;multi-stage Docker build&lt;/strong&gt; to keep the final image lean. Here’s how I set it up:&lt;/p&gt;
&lt;h3&gt;
  
  
  Stage 1: Build the C++ Program
&lt;/h3&gt;

&lt;p&gt;In the first stage, I used a Docker image with a C++ compiler to compile the algorithm:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 1: Compile C++ code&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;gcc:latest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;builder       # Use GCC compiler image&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy in the C++ source code and compile it&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; my_algorithm.cpp /app/my_algorithm.cpp&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;g++ my_algorithm.cpp &lt;span class="nt"&gt;-o&lt;/span&gt; my_algorithm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This stage grabs the official GCC image, copies my C++ source file into it, and runs &lt;code&gt;g++&lt;/code&gt; to compile the code into an executable (&lt;code&gt;my_algorithm&lt;/code&gt;).&lt;/p&gt;
&lt;h3&gt;
  
  
  Stage 2: Set Up Python + Flask
&lt;/h3&gt;

&lt;p&gt;For the second stage, I wanted a lightweight Python environment to run Flask and the compiled binary. I chose a slim Python image:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stage 2: Run Flask app with Gunicorn&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;python:3.10-slim&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;final   # Use a small Python image&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Copy the compiled C++ binary from the builder stage&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=builder /app/my_algorithm /app/my_algorithm&lt;/span&gt;

&lt;span class="c"&gt;# Copy the Flask app code and any other files&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; app.py /app/app.py&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; index.html /app/index.html&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt /app/requirements.txt&lt;/span&gt;

&lt;span class="c"&gt;# Install Flask (and Gunicorn) via requirements&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# Expose port 5000 and start the app using Gunicorn&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 5000&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["gunicorn", "-b", "0.0.0.0:5000", "app:app"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;What’s happening in Stage 2:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We switch to a &lt;strong&gt;Python 3 slim&lt;/strong&gt; image (much smaller than a full Ubuntu or GCC image).&lt;/li&gt;
&lt;li&gt;Copy the &lt;strong&gt;compiled C++ binary&lt;/strong&gt; from stage 1 into this new image. This way, the Python container can use the C++ program without needing a compiler.&lt;/li&gt;
&lt;li&gt;Copy over the Flask app (&lt;code&gt;app.py&lt;/code&gt;), the &lt;code&gt;index.html&lt;/code&gt;, and a &lt;code&gt;requirements.txt&lt;/code&gt; listing Python dependencies.&lt;/li&gt;
&lt;li&gt;Install dependencies: I included Flask (and Gunicorn for the server) in &lt;code&gt;requirements.txt&lt;/code&gt;, so &lt;code&gt;pip install -r requirements.txt&lt;/code&gt; pulls those into the container.&lt;/li&gt;
&lt;li&gt;Finally, set the container to listen on &lt;strong&gt;port 5000&lt;/strong&gt; and define the startup command: here I use &lt;strong&gt;Gunicorn&lt;/strong&gt; to run the Flask app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The multi-stage approach means our final image (stage 2) does &lt;strong&gt;not&lt;/strong&gt; include all the bulky build tools from stage 1.&lt;br&gt;
We only carry over the compiled binary and needed files. This makes the final Docker image smaller and cleaner, which is great for efficiency and security.&lt;/p&gt;


&lt;h2&gt;
  
  
  Building and Running the Docker Container
&lt;/h2&gt;

&lt;p&gt;With the Dockerfile written, I proceeded to &lt;strong&gt;build the image&lt;/strong&gt; on the EC2 instance:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; myflaskcpp &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Next, I ran the container. This is where the &lt;strong&gt;port configuration&lt;/strong&gt; is important. My Flask app (via Gunicorn) is set to run on port 5000 inside the container. But I want people to access it through the standard HTTP &lt;strong&gt;port 80&lt;/strong&gt; on the EC2 instance. So I used a port mapping:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:5000 myflaskcpp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here, &lt;code&gt;-p 80:5000&lt;/code&gt; maps &lt;strong&gt;port 80 on the host (the EC2 instance)&lt;/strong&gt; to &lt;strong&gt;port 5000 inside the container&lt;/strong&gt;. Now, any request hitting the EC2’s public IP on port 80 will be forwarded to the Flask app in the container. I ran the container in detached mode (&lt;code&gt;-d&lt;/code&gt;) so it runs in the background.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Make sure you opened &lt;strong&gt;port 80&lt;/strong&gt; in the EC2 security group. Otherwise, you won’t be able to reach the app from your browser.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  Why Use Gunicorn instead of Flask’s Dev Server?
&lt;/h2&gt;

&lt;p&gt;You might wonder why I’m using Gunicorn in the Dockerfile, rather than just running the development server:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The reason is that Flask’s built-in server (the one you get with &lt;code&gt;app.run()&lt;/code&gt;) is &lt;strong&gt;meant for development only&lt;/strong&gt;. It’s single-threaded by default and not optimized for multiple users or production stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gunicorn&lt;/strong&gt;, on the other hand, is a &lt;strong&gt;production-ready WSGI server&lt;/strong&gt;. It can handle &lt;strong&gt;multiple requests&lt;/strong&gt; at the same time by running several worker processes. This means if one user is making a slow request, other users can still be served in parallel. Gunicorn is also well-tested for deployment, making your app more robust under real-world traffic. In short, using Gunicorn ensures our Flask app will be able to handle more than one person at a time and won’t crash at the first sign of stress.&lt;/p&gt;


&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Now grab the EC2 instance’s &lt;strong&gt;public IP&lt;/strong&gt;, open it in a browser, and confirm the app loads.&lt;/p&gt;

&lt;p&gt;And that’s it! We have managed to get a &lt;strong&gt;C++ algorithm&lt;/strong&gt; and a &lt;strong&gt;Flask&lt;/strong&gt; web interface deployed on &lt;strong&gt;AWS&lt;/strong&gt; using &lt;strong&gt;Docker&lt;/strong&gt;. As someone new to combining these technologies, it felt great to see it running live.&lt;/p&gt;
&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;strong&gt;EC2 security groups&lt;/strong&gt; to open the ports you need (SSH and HTTP in this case).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker multi-stage builds&lt;/strong&gt; can mix languages (compile in one stage, run in another) to keep things efficient.&lt;/li&gt;
&lt;li&gt;Map your container’s internal ports to the server’s ports so the world can reach your app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gunicorn&lt;/strong&gt; is your friend for serving Flask in production.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy deploying! 🚀&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/vivekvohra" rel="noopener noreferrer"&gt;
        vivekvohra
      &lt;/a&gt; / &lt;a href="https://github.com/vivekvohra/tideman" rel="noopener noreferrer"&gt;
        tideman
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Tideman Electoral System&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Tideman’s &lt;strong&gt;Ranked Pairs method&lt;/strong&gt; (developed by Nicolaus Tideman) is a &lt;strong&gt;ranked-choice voting algorithm&lt;/strong&gt; that selects a single winner by constructing a directed graph of pairwise victories and locking in the strongest preferences while &lt;strong&gt;avoiding cycles&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This repository provides an &lt;strong&gt;object-oriented, benchmarked, file-driven C++17 implementation&lt;/strong&gt;, wrapped by a &lt;strong&gt;Flask API&lt;/strong&gt; and a &lt;strong&gt;minimal HTML front end&lt;/strong&gt;. It’s now &lt;strong&gt;Dockerized&lt;/strong&gt; (multi-stage) and &lt;strong&gt;EC2-ready&lt;/strong&gt; with &lt;strong&gt;Gunicorn&lt;/strong&gt; for a production-style run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Live demo:&lt;/strong&gt; &lt;a href="http://13.200.175.4/" rel="nofollow noopener noreferrer"&gt;http://13.200.175.4/&lt;/a&gt;&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Features&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CSV input&lt;/strong&gt; supporting thousands to &lt;strong&gt;millions of ballots&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-resolution benchmarking&lt;/strong&gt; using &lt;code&gt;&amp;lt;chrono&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object-oriented design&lt;/strong&gt; (&lt;code&gt;TidemanElection&lt;/code&gt;, &lt;code&gt;VoteParser&lt;/code&gt;, &lt;code&gt;Benchmark&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robust error handling&lt;/strong&gt; (bad input, duplicates, file issues)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient in practice&lt;/strong&gt;: millions of ballots with ≤9 candidates in milliseconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web service wrapper&lt;/strong&gt;: C++ executable exposed via &lt;strong&gt;Flask API&lt;/strong&gt; + &lt;strong&gt;HTML&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Containerized&lt;/strong&gt;: Multi-stage &lt;strong&gt;Dockerfile&lt;/strong&gt;, &lt;strong&gt;Gunicorn&lt;/strong&gt; server, &lt;strong&gt;EC2-ready&lt;/strong&gt; (port 80 → 5000)&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Sample Input (CSV Ballots)&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;Example…&lt;/p&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/vivekvohra/tideman" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


</description>
      <category>aws</category>
      <category>cloud</category>
      <category>docker</category>
      <category>linux</category>
    </item>
    <item>
      <title>Detecting Alzheimer’s Disease using a CNN-BiLSTM Architecture</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Sat, 10 May 2025 19:08:23 +0000</pubDate>
      <link>https://dev.to/vivekvohra/detecting-alzheimers-disease-with-eeg-and-deep-learning-3ifh</link>
      <guid>https://dev.to/vivekvohra/detecting-alzheimers-disease-with-eeg-and-deep-learning-3ifh</guid>
      <description>&lt;h2&gt;
  
  
  Abstract
&lt;/h2&gt;

&lt;p&gt;Alzheimer's disease (AD) represents a significant global health challenge. This paper proposes an experimental approach for early AD detection using Electroencephalography (EEG) signals processed through a deep-learning architecture. We propose a channel-frequency-based attention model that effectively captures spectral features across different brain regions. The model uses depthwise convolutions, squeeze-and-excitation blocks, and spatial dropout regularization to efficiently learn patterns within EEG data. The dataset comprises 19-channel EEG recordings from subjects with Alzheimer's, frontotemporal dementia, and healthy controls. The model achieves 83.81% accuracy, demonstrating its potential.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide, causing gradual cognitive decline, memory loss, and behavioral changes. Many individuals delay seeking medical help because they attribute memory loss to natural aging, leading to late diagnosis when treatment options are limited.&lt;/p&gt;

&lt;p&gt;Current diagnostic procedures for AD often rely on invasive and expensive methods such as Positron Emission Tomography (PET). Thus, we need non-invasive, cost-effective, and readily accessible tools for early AD detection. Electroencephalography (EEG) is a good candidate because of its non-invasive nature and relatively low cost.&lt;/p&gt;

&lt;p&gt;Several studies have shown that increased power in low-frequency bands like delta and theta, and decreased power in higher bands like alpha and beta, can serve as biomarkers for early AD detection.&lt;/p&gt;
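As an illustrative sketch (not the paper's actual code), relative band power can be computed from a power spectral density as the fraction of total power falling in each standard frequency band:

```python
# Illustrative only: compute relative band power (RBP) from a PSD
# given as parallel lists of frequencies (Hz) and power values.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 25), "gamma": (25, 45)}

def relative_band_power(freqs, psd):
    band_power = {name: sum(p for f, p in zip(freqs, psd) if lo <= f < hi)
                  for name, (lo, hi) in BANDS.items()}
    total = sum(band_power.values())
    return {name: p / total for name, p in band_power.items()}

# Toy PSD with power concentrated in the low-frequency bands,
# the pattern associated with AD in the studies mentioned above
freqs = [1, 5, 10, 20, 30]
psd = [4.0, 3.0, 2.0, 0.5, 0.5]
rbp = relative_band_power(freqs, psd)
print(round(rbp["delta"], 2))   # 0.4: elevated relative delta power
```

In practice the PSD itself would come from a method such as Welch's, as described later in the preprocessing pipeline.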

&lt;p&gt;Deep learning models can automatically learn patterns from raw or minimally processed data that might escape traditional analysis methods.&lt;/p&gt;

&lt;p&gt;This paper proposes a DL model for AD detection using EEG signals. Our model captures patterns in EEG data, focusing on relative band power features from five frequency bands: alpha, beta, gamma, delta, and theta. The key contributions of this paper include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A channel-frequency attention model that captures the relationship between different brain regions and frequency bands&lt;/li&gt;
&lt;li&gt;A preprocessing pipeline for extracting relative band power (RBP) features from EEG signals&lt;/li&gt;
&lt;li&gt;An analysis of the model's performance&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Deep Learning for EEG Analysis
&lt;/h2&gt;

&lt;p&gt;The application of deep learning to EEG analysis has gained significant traction in recent years, offering the potential to automatically learn relevant features from minimally processed data. Various deep learning architectures have been explored for EEG-based AD detection, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), deep belief networks (DBNs), and, more recently, transformers.&lt;/p&gt;

&lt;p&gt;Zhao (2014) was among the early researchers who applied deep learning to EEG-based AD diagnosis, using a deep auto-encoder network to extract features from time-domain EEG data&lt;a href="http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/ACCV_2014/pages/workshop3/pdffiles/w3-p7.pdf" rel="noopener noreferrer"&gt;4&lt;/a&gt;. The study demonstrated that deep learning could discriminate between AD patients and healthy controls without requiring manual feature engineering. Building on this work, more recent studies have developed increasingly sophisticated architectures.&lt;/p&gt;

&lt;p&gt;Ieracitano et al. (2019) proposed a CNN model for EEG-based AD detection, achieving high classification accuracy by learning directly from time-frequency representations of EEG signals. Similarly, Huggins et al. (2020) employed an AlexNet-based architecture to classify EEG data transformed into time-frequency graphs using continuous wavelet transform, achieving an impressive accuracy of 98.90% for three-class classification.&lt;/p&gt;

&lt;p&gt;More recently, Wang et al. (2024) introduced LEAD, a large foundation model for EEG-based AD detection. This approach employed contrastive learning on a large corpus of EEG data from various neurological disorders, followed by fine-tuning on AD-specific datasets. The model demonstrated significant improvements over previous methods, highlighting the potential of transfer learning and self-supervised approaches in limited AD-specific data&lt;a href="https://arxiv.org/html/2502.01678v2" rel="noopener noreferrer"&gt;7&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges in EEG-Based AD Detection
&lt;/h2&gt;

&lt;p&gt;Despite promising results, EEG-based AD detection faces several challenges. One significant challenge is inter-individual variability, which makes it difficult to develop a model that generalizes well to new demographics of subjects. EEG signals are influenced by various factors, including gender, medication, and age, making it challenging to isolate AD-specific patterns.&lt;/p&gt;

&lt;p&gt;Another challenge is the limited availability of high-quality EEG datasets. Most datasets involve small numbers of subjects, which limits the generalizability of models trained on them.&lt;/p&gt;

&lt;p&gt;Data quality is also a significant concern, as EEG recordings are susceptible to various artifacts, including eye movements, muscle activity, and environmental noise. Preprocessing the data can mitigate these issues, but might also remove crucial relevant information and introduce biases in the dataset.&lt;/p&gt;

&lt;p&gt;Finally, understanding the specific features or patterns these deep learning models detect remains challenging. This "black box" nature can hinder clinical adoption, as healthcare providers generally prefer diagnostic tools with clear, interpretable rationales.&lt;/p&gt;

&lt;h2&gt;
  
  
  Proposed Model
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Data Acquisition and Preprocessing
&lt;/h2&gt;

&lt;p&gt;Our study utilized EEG data from a dataset containing recordings from subjects diagnosed with Alzheimer's disease (labeled 'A'), frontotemporal dementia (labeled 'F'), and healthy controls (labeled 'C'). The dataset included 19-channel EEG recordings following the standard 10-20 international system for electrode placement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxa8nzld7wqc8z78ro9p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxa8nzld7wqc8z78ro9p.jpg" alt=" " width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig. 1. Flowchart of the EEG preprocessing pipeline. Raw EEGLAB .set files are filtered, epoched, and processed using Welch's method to compute PSD. Relative Band Power (RBP) features are extracted and then standardized before being input to the classification model.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The preprocessing pipeline consisted of several steps designed to extract meaningful features while minimizing artifacts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Loading and Label Mapping&lt;/strong&gt;: We loaded the EEG data using MNE-Python and mapped the diagnostic groups to numeric labels (0 for Alzheimer's, 1 for frontotemporal dementia, and 2 for healthy controls).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Signal Filtering&lt;/strong&gt;: We applied a bandpass filter (0.5-45 Hz) to remove artifacts and retain only the frequency bands relevant to our analysis. This step eliminated power line noise (typically at 50 or 60 Hz) and very low-frequency drifts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Epoching&lt;/strong&gt;: Continuous EEG recordings were segmented into 2-second epochs with a 1-second overlap. This approach allowed us to capture transient neural patterns while generating sufficient samples for model training.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spectral Analysis&lt;/strong&gt;: We computed the power spectral density (PSD) for each epoch using Welch's method, which provides a robust estimate of the frequency content of EEG signals by averaging the periodograms of overlapping segments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relative Band Power Extraction&lt;/strong&gt;: We extracted relative band power (RBP) features for five standard EEG frequency bands:

&lt;ul&gt;
&lt;li&gt;Delta (0.5-4 Hz): Associated with deep sleep and pathological states&lt;/li&gt;
&lt;li&gt;Theta (4-8 Hz): Linked to drowsiness and some pathological conditions&lt;/li&gt;
&lt;li&gt;Alpha (8-13 Hz): Predominant during relaxed wakefulness&lt;/li&gt;
&lt;li&gt;Beta (13-25 Hz): Related to active thinking and focus&lt;/li&gt;
&lt;li&gt;Gamma (25-45 Hz): Associated with cognitive processing and perceptual binding&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft91o1jik7jv7ruw3hdjo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft91o1jik7jv7ruw3hdjo.jpg" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig. 2. Conceptual diagram of Relative Band Power (RBP) feature generation for a single epoch. Power Spectral Density (PSD) is computed using Welch's method. Power is then aggregated and normalized for five key frequency bands (Theta, Delta, Alpha, Beta, Gamma), producing a 19x5 feature map (19 channels x 5 frequency bands) used as input features&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The relative band power was calculated by dividing the absolute power in each frequency band by the total power across all bands, resulting in a normalized measure that reduces the impact of inter-subject variability in overall signal amplitude.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;
&lt;strong&gt;Feature Reshaping&lt;/strong&gt;: The extracted RBP features were reshaped into a 4D tensor (epochs, channels, frequency bands, 1) suitable for input to our convolutional neural network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Splitting and Standardization&lt;/strong&gt;: The dataset was split into training (80%) and testing (20%) sets, and standardization was applied to normalize the feature distributions, improving training stability and model performance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The final input to our model had the shape (N, 19, 5, 1), representing N epochs, 19 EEG channels, five frequency bands, and one feature (relative band power).&lt;/p&gt;
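&lt;p&gt;A minimal sketch of the band-power extraction above, using NumPy and SciPy (function and variable names are illustrative, not the project's actual code):&lt;/p&gt;

```python
import numpy as np
from scipy.signal import welch

# The five frequency bands used in the article (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 25), "gamma": (25, 45)}

def relative_band_power(epochs, sfreq=500.0):
    """epochs: array of shape (n_epochs, n_channels, n_samples).
    Returns RBP features of shape (n_epochs, n_channels, 5, 1)."""
    # Welch PSD along the time axis of every epoch/channel
    freqs, psd = welch(epochs, fs=sfreq, nperseg=epochs.shape[-1])
    powers = []
    for lo, hi in BANDS.values():
        mask = np.logical_and(freqs >= lo, hi > freqs)  # bins inside the band
        powers.append(psd[..., mask].sum(axis=-1))
    bp = np.stack(powers, axis=-1)                 # (epochs, channels, 5)
    rbp = bp / bp.sum(axis=-1, keepdims=True)      # normalize by total power
    return rbp[..., np.newaxis]                    # (epochs, channels, 5, 1)
```

&lt;p&gt;Assuming, for example, a 500 Hz sampling rate, each 2-second epoch is 1000 samples, so an input array of shape (N, 19, 1000) yields a feature tensor of shape (N, 19, 5, 1), with the five band powers of every channel summing to 1.&lt;/p&gt;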

&lt;h1&gt;
  
  
  Model Architecture
&lt;/h1&gt;

&lt;p&gt;We propose a hybrid deep learning model combining Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory (BiLSTM) networks for AD detection. The model processes the Relative Band Power (RBP) features extracted from the 2-second EEG epochs described above. These features are structured as input tensors of shape (19, 5, 1), representing 19 EEG channels, 5 canonical frequency bands (Delta: 0.5-4 Hz, Theta: 4-8 Hz, Alpha: 8-13 Hz, Beta: 13-25 Hz, Gamma: 25-45 Hz), and a single feature dimension (power).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4iv553z9z4q3qxw8c05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4iv553z9z4q3qxw8c05.png" alt=" " width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig. 3. Proposed CNN–BiLSTM classification architecture.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The architecture comprises:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;CNN Feature Extractor&lt;/strong&gt;: Processes the (19, 5, 1) input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Block 1&lt;/strong&gt;: Conv2D (32 filters, 3x3 kernel, L2 reg.) -&amp;gt; BatchNormalization -&amp;gt; ReLU -&amp;gt; MaxPooling2D (pool size (2, 1)), reducing the channel dimension while preserving frequency information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Block 2:&lt;/strong&gt; Conv2D (64 filters, 3x3 kernel, L2 reg.) -&amp;gt; BatchNormalization -&amp;gt; ReLU -&amp;gt; MaxPooling2D (pool size (2, 2)), downsampling both dimensions. This stage captures local spatial patterns across channels and spectral patterns within frequency bands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sequence Preparation&lt;/strong&gt;: A Permute layer rearranges the CNN output dimensions to prioritize the frequency axis (batch, reduced_freqs, reduced_channels, filters), and a Reshape layer merges the channel and filter dimensions, creating a sequence input for the LSTM: (batch, sequence_length=reduced_freqs, features_per_step).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sequential Modeling (BiLSTM):&lt;/strong&gt; A Bidirectional LSTM layer (64 units, dropout, recurrent dropout, L2 reg.) processes the sequence of features derived from the frequency bands. This captures dependencies and contextual information across the spectral profile (e.g., relationships between alpha and beta band features). return_sequences=False is used for classification.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Classification Head&lt;/strong&gt;: Dropout -&amp;gt; Dense (128 units, ReLU, L2 reg.) -&amp;gt; Dropout -&amp;gt; Dense (3 units, softmax activation) for final class prediction (A, F, C).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The model architecture is summarized in Fig. 3 above.&lt;/p&gt;
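&lt;p&gt;The architecture described above can be sketched in Keras roughly as follows. Layer sizes follow the text; exact hyperparameters such as "same" padding, the L2 factor, and dropout rates are assumptions:&lt;/p&gt;

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_model(n_channels=19, n_bands=5, n_classes=3, l2=1e-4):
    reg = regularizers.l2(l2)
    inp = layers.Input(shape=(n_channels, n_bands, 1))
    # Block 1: reduce the channel axis, preserve the frequency axis
    x = layers.Conv2D(32, 3, padding="same", kernel_regularizer=reg)(inp)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling2D(pool_size=(2, 1))(x)   # (9, 5, 32)
    # Block 2: downsample both axes
    x = layers.Conv2D(64, 3, padding="same", kernel_regularizer=reg)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.MaxPooling2D(pool_size=(2, 2))(x)   # (4, 2, 64)
    # Sequence preparation: frequency axis first, then merge channels+filters
    x = layers.Permute((2, 1, 3))(x)               # (2, 4, 64)
    s = x.shape
    x = layers.Reshape((s[1], s[2] * s[3]))(x)     # (2, 256)
    # BiLSTM over the spectral sequence
    x = layers.Bidirectional(layers.LSTM(64, dropout=0.3, recurrent_dropout=0.2,
                                         kernel_regularizer=reg))(x)
    # Classification head
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(128, activation="relu", kernel_regularizer=reg)(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)
```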

&lt;h2&gt;
  
  
  Training Procedure
&lt;/h2&gt;

&lt;p&gt;The model was trained using the Adam optimizer with a learning rate of 0.001 and a sparse categorical cross-entropy loss function. The dataset exhibited moderate class imbalance (Class A: ~42%, F: ~24%, C: ~34% of total epochs). Balanced class weights were computed using sklearn's 'balanced' mode and applied during training to mitigate this.&lt;/p&gt;

&lt;p&gt;Training was regularized using L2 penalties on convolutional and dense layers, dropout in the LSTM and dense layers, and two callbacks: EarlyStopping (monitoring val_loss, patience 20, restoring best weights) and ReduceLROnPlateau (monitoring val_loss, factor 0.2, patience 7). Training ran for up to 100 epochs with a batch size of 128 on a dataset split into 80% training (~55k epochs) and 20% testing (~14k epochs). The final model weights were selected based on the lowest validation loss achieved during training.&lt;/p&gt;
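&lt;p&gt;A sketch of this training configuration (the label array here is simulated just to show the calls; in practice y_train holds the real epoch labels):&lt;/p&gt;

```python
import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

# Simulated integer epoch labels (0=A, 1=F, 2=C) with the article's class mix
rng = np.random.default_rng(0)
y_train = rng.choice([0, 1, 2], size=1000, p=[0.42, 0.24, 0.34])

# Balanced class weights via sklearn's "balanced" mode
weights = compute_class_weight("balanced", classes=np.array([0, 1, 2]), y=y_train)
class_weight = dict(enumerate(weights))

# Callbacks mirroring the text: early stopping and LR reduction on val_loss
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=20,
                                     restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2,
                                         patience=7),
]

# model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
#               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=100, batch_size=128,
#           class_weight=class_weight, callbacks=callbacks, ...)
```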

&lt;h2&gt;
  
  
  Evaluation of the Proposed System
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Experimental Setup
&lt;/h2&gt;

&lt;p&gt;We evaluated our model using a rigorous experimental framework to assess its performance in classifying EEG signals from subjects with Alzheimer's, frontotemporal dementia, and healthy controls. The dataset was split into training (80%) and testing (20%) sets. We used accuracy, loss, and confusion matrices as our primary evaluation metrics.&lt;/p&gt;

&lt;p&gt;The model is implemented using TensorFlow and Keras.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results and Performance Analysis
&lt;/h2&gt;

&lt;p&gt;The model achieved a &lt;strong&gt;test accuracy of 83.81%&lt;/strong&gt; and a &lt;strong&gt;Log Loss of 0.4188&lt;/strong&gt;. The &lt;strong&gt;Cohen's Kappa coefficient was 0.7520&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The per-class performance, as detailed in Table I, reveals generally robust results across all classes. Class C ('Control') achieved the highest precision (0.8751) and a high recall (0.8478), leading to the best F1-score (0.8612). Class A ('Alzheimer's') also performed well with balanced precision and recall (0.8407). Class F ('Frontotemporal Dementia') exhibited slightly lower metrics, with a precision of 0.7838 and recall of 0.8195 (F1-score: 0.8012).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Table I: Classification Report&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Class&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Precision&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Recall&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;F1-Score&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;Support&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;A
   &lt;/td&gt;
   &lt;td&gt;0.8407
   &lt;/td&gt;
   &lt;td&gt;0.8407
   &lt;/td&gt;
   &lt;td&gt;0.8407
   &lt;/td&gt;
   &lt;td&gt;5724
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;F
   &lt;/td&gt;
   &lt;td&gt;0.7838
   &lt;/td&gt;
   &lt;td&gt;0.8195
   &lt;/td&gt;
   &lt;td&gt;0.8012
   &lt;/td&gt;
   &lt;td&gt;3335
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;C
   &lt;/td&gt;
   &lt;td&gt;0.8751
   &lt;/td&gt;
   &lt;td&gt;0.8478
   &lt;/td&gt;
   &lt;td&gt;0.8612
   &lt;/td&gt;
   &lt;td&gt;4883
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
   &lt;/td&gt;
   &lt;td&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;0.8381&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;
&lt;strong&gt;13942&lt;/strong&gt;
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Macro Avg&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;0.8332
   &lt;/td&gt;
   &lt;td&gt;0.8360
   &lt;/td&gt;
   &lt;td&gt;0.8344
   &lt;/td&gt;
   &lt;td&gt;13942
   &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
   &lt;td&gt;
&lt;strong&gt;Weighted Avg&lt;/strong&gt;
   &lt;/td&gt;
   &lt;td&gt;0.8391
   &lt;/td&gt;
   &lt;td&gt;0.8381
   &lt;/td&gt;
   &lt;td&gt;0.8384
   &lt;/td&gt;
   &lt;td&gt;13942
   &lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;em&gt;This table summarizes the model's performance across three classes (A, F, C) in terms of precision, recall, and F1-score. The weighted and macro averages provide a holistic view of the model’s overall performance. Accuracy denotes the overall proportion of correctly classified instances.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Further analysis using the Area Under the Receiver Operating Characteristic Curve (AUC) with a One-vs-Rest strategy indicated excellent class separability at the probability level. The AUC scores were &lt;strong&gt;0.9454 for Class A, 0.9558 for Class F, and 0.9625 for Class C&lt;/strong&gt;, with a &lt;strong&gt;Macro Average AUC of 0.9546&lt;/strong&gt;.&lt;/p&gt;
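&lt;p&gt;The reported metrics correspond to standard scikit-learn calls. A self-contained sketch (labels and probabilities are simulated here; in the study they come from the trained model):&lt;/p&gt;

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             log_loss, roc_auc_score)

# y_true: integer labels; y_prob: softmax outputs of shape (n, 3)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=500)
y_prob = rng.dirichlet(np.ones(3), size=500)
y_pred = y_prob.argmax(axis=1)

acc = accuracy_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
ll = log_loss(y_true, y_prob)
# One-vs-Rest AUC, macro-averaged, as reported in the text
auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
```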

&lt;p&gt;The normalized confusion matrix (Fig. 4) provides insights into specific error patterns. The diagonal elements confirm the high recall values for each class (A: 0.84, F: 0.82, C: 0.85). The most notable misclassifications occurred where &lt;strong&gt;13% of actual Class F instances were predicted as Class A&lt;/strong&gt;, and &lt;strong&gt;10% of actual Class C instances were predicted as Class A&lt;/strong&gt;. Other misclassifications were less frequent (&amp;lt;= 8%). These results suggest that while the model effectively distinguishes the classes overall, there is some residual confusion, particularly in differentiating classes F and C from class A based on the learned spectral patterns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzon06dwxvunx2e9qesvr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzon06dwxvunx2e9qesvr.png" alt=" " width="737" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Fig. 4. Normalized confusion matrix (recall) of the proposed CNN–BiLSTM model across classes A, F, and C.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Interpretability
&lt;/h2&gt;

&lt;p&gt;The model's architecture suggests a tier-wise feature learning process. The CNN layers help to identify local patterns within the channel-frequency RBP representation (e.g., focal slowing, specific band power ratios). The BiLSTM subsequently models how these patterns relate across the frequency spectrum (delta through gamma). The observed high performance, particularly the high AUC scores, suggests the model successfully learned discriminative spectral profile characteristics for each class.&lt;/p&gt;

&lt;h2&gt;
  
  
  Observation
&lt;/h2&gt;

&lt;p&gt;The hybrid CNN-BiLSTM model converges during training: the loss decreases smoothly over epochs, and accuracy improves until it plateaus, indicating effective learning of the EEG features. Training accuracy is only slightly higher than the 83.81% test accuracy, suggesting good generalization and limited overfitting.&lt;/p&gt;

&lt;p&gt;The CNN-BiLSTM architecture effectively combines spatial and temporal feature extraction. Convolutional layers learn spatial patterns from EEG band powers, while the BiLSTM captures temporal dynamics. The strong performance metrics (83.81% accuracy, Kappa 0.7520, macro AUC 0.9546) indicate excellent class discrimination.&lt;/p&gt;

&lt;p&gt;Nonetheless, some class confusion remains. The overlaps in EEG spectral features for FTD and controls lead to misclassification. Cohen’s Kappa (0.7520) indicates substantial agreement, and the high AUC (0.9546) shows each class is well separated on average.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The main contribution of this work is a CNN-BiLSTM model that uses EEG spectral features to classify Alzheimer’s Disease, Frontotemporal Dementia, and healthy controls. The model’s high accuracy (83.81%), substantial Kappa (0.7520), and macro AUC (0.9546) demonstrate its effectiveness and potential utility for practical dementia screening. The results suggest that CNN-BiLSTM can successfully capture EEG patterns relevant to these conditions.&lt;/p&gt;

&lt;p&gt;However, challenges remain, particularly class confusion between FTD and controls. This indicates that we need to incorporate more discriminative features. Future directions include adding advanced spectral ratios or connectivity biomarkers and training the model on more diverse and larger datasets.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>tensorflow</category>
    </item>
    <item>
      <title>Fibonacci heaps—a fascinating data structure that cleverly balances lazy operations with efficiency.</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Sun, 16 Mar 2025 13:31:31 +0000</pubDate>
      <link>https://dev.to/vivekvohra/fibonacci-heaps-a-fascinating-data-structure-that-cleverly-balances-lazy-operations-with-1j1a</link>
      <guid>https://dev.to/vivekvohra/fibonacci-heaps-a-fascinating-data-structure-that-cleverly-balances-lazy-operations-with-1j1a</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/vivekvohra" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1422753%2F000149b3-ce47-4349-8a4f-1537fe8d5ada.jpg" alt="vivekvohra"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/vivekvohra/fibonacci-heaps-12ol" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Fibonacci Heaps&lt;/h2&gt;
      &lt;h3&gt;Vivek Vohra ・ Mar 16&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#algorithms&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#computerscience&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#programming&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tutorial&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Fibonacci Heaps</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Sun, 16 Mar 2025 13:28:32 +0000</pubDate>
      <link>https://dev.to/vivekvohra/fibonacci-heaps-12ol</link>
      <guid>https://dev.to/vivekvohra/fibonacci-heaps-12ol</guid>
      <description>&lt;p&gt;This post explores one of computer science's most beautiful data structures—the Fibonacci Heap.&lt;/p&gt;

&lt;p&gt;The Fibonacci Heap is a specialized priority queue data structure consisting of a collection of heap-ordered trees. Each tree satisfies the min-heap property, meaning each parent node has a key less than or equal to its children's keys. Let's explore this clever data structure's technical details and inner workings.&lt;/p&gt;

&lt;p&gt;A Fibonacci heap consists of multiple trees stored in a circular doubly-linked list called the &lt;strong&gt;root list&lt;/strong&gt;. Each node within these trees contains pointers to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Its parent node&lt;/li&gt;
&lt;li&gt;One of its children&lt;/li&gt;
&lt;li&gt;Its left and right siblings (due to the circular doubly-linked nature)&lt;/li&gt;
&lt;li&gt;A boolean "marked" flag indicating whether it has lost a child since becoming a child of another node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This linked structure allows constant-time insertion, deletion, and merging operations by updating pointers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecoqjdnfh651wd6c68bu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecoqjdnfh651wd6c68bu.png" alt="Fibonacci root" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Operations:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Insert Operation
&lt;/h3&gt;

&lt;p&gt;The idea is to be lazy: we simply add the new node to the root list and then update the minimum pointer if needed.&lt;br&gt;
This operation is performed in &lt;strong&gt;O(1)&lt;/strong&gt; amortized time.&lt;/p&gt;
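&lt;p&gt;A minimal Python sketch of the lazy insert (a toy illustration, not a production heap):&lt;/p&gt;

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.child = None
        self.degree = 0
        self.marked = False
        self.left = self.right = self  # circular doubly-linked sibling list

class FibHeap:
    def __init__(self):
        self.min = None
        self.n = 0

    def insert(self, key):
        """Lazy O(1) insert: splice a one-node tree into the root list."""
        node = Node(key)
        if self.min is None:
            self.min = node
        else:
            # splice node to the right of the minimum in the circular list
            node.right = self.min.right
            node.left = self.min
            self.min.right.left = node
            self.min.right = node
            if self.min.key > node.key:
                self.min = node        # new node is the new minimum
        self.n += 1
        return node
```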

&lt;h3&gt;
  
  
  Merge Operation
&lt;/h3&gt;

&lt;p&gt;While merging two Fibonacci heaps, we simply concatenate the two root lists and set the minimum pointer to whichever of the two minimum nodes has the smaller key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02g99hqlpveb6bdg9vhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F02g99hqlpveb6bdg9vhz.png" alt=" " width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Extract-Min Operation
&lt;/h3&gt;

&lt;p&gt;Extracting the minimum element is more complex and has multiple steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Remove Minimum Node:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove the node pointed to by the minimum pointer from the root list.&lt;/li&gt;
&lt;li&gt;Add all its children directly into the root list as separate trees.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Clean Up:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
We must reduce the number of trees in our root list to maintain efficiency. We do this by repeatedly merging trees with identical degrees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We use an auxiliary array indexed by node degree.&lt;/li&gt;
&lt;li&gt;We only allow one node to exist per degree and merge if there is more than one.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This ensures that no two trees have identical degrees after Extract-Min completes, keeping node degrees low.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Update Minimum Pointer:&lt;/strong&gt; 
Scan the consolidated root list to find the new minimum. All resulting trees now have distinct degrees, just like in a Binomial Heap.&lt;/li&gt;
&lt;/ol&gt;
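&lt;p&gt;The clean-up step can be sketched with an auxiliary dictionary keyed by degree. This is a simplification: a real implementation also splices the loser into the winner's child list, while here only keys, degrees, and parent links are tracked:&lt;/p&gt;

```python
class Root:
    """Minimal stand-in for a heap node in this sketch."""
    def __init__(self, key, degree=0):
        self.key, self.degree, self.parent = key, degree, None

def link(parent, child):
    # Make the larger-key root a child of the smaller-key root
    child.parent = parent
    parent.degree += 1          # (child-list splicing omitted)
    return parent

def consolidate(roots):
    """Merge equal-degree roots until at most one tree per degree remains."""
    by_degree = {}
    for node in roots:
        d = node.degree
        while d in by_degree:
            other = by_degree.pop(d)
            if node.key > other.key:        # keep the smaller key on top
                node, other = other, node
            node = link(node, other)
            d = node.degree
        by_degree[d] = node
    # the new minimum is the smallest surviving root
    return min(by_degree.values(), key=lambda r: r.key)
```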

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywuhkgr6pg7cryrho4w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fywuhkgr6pg7cryrho4w2.png" alt=" " width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Decrease-Key Operation
&lt;/h3&gt;

&lt;p&gt;Decrease-Key operation reduces a node's key value and adjusts its position within the heap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decrease the key value directly at its current position.&lt;/li&gt;
&lt;li&gt;If this violates the heap property (the node becomes smaller than its parent), remove that node from its parent and move it to the root list.&lt;/li&gt;
&lt;li&gt;Mark any parent losing one child; if a marked parent loses another child later, cut that parent out recursively as well.&lt;/li&gt;
&lt;li&gt;Update minimum pointer if necessary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above technique is known as cascading cuts. It ensures that each node retains a certain minimum number of descendants relative to its degree.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specifically, for any given node with degree &lt;strong&gt;d&lt;/strong&gt;, the smallest possible subtree rooted at that node contains at least &lt;strong&gt;F_{d+2}&lt;/strong&gt; nodes, where &lt;strong&gt;F_i&lt;/strong&gt; denotes the &lt;strong&gt;i&lt;/strong&gt;-th Fibonacci number.&lt;/li&gt;
&lt;/ul&gt;
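&lt;p&gt;The marking rule behind cascading cuts can be sketched as follows (a toy model that tracks only keys, parents, and marks; the real structure also maintains the circular sibling lists and child degrees):&lt;/p&gt;

```python
class N:
    """Minimal node for illustrating cascading cuts."""
    def __init__(self, key, parent=None):
        self.key, self.parent, self.marked = key, parent, False

def cut(root_list, node):
    """Detach node from its parent and move it to the root list."""
    node.parent = None
    node.marked = False
    root_list.append(node)

def cascading_cut(root_list, node):
    """A node losing its first child is marked; on a second loss it is cut."""
    parent = node.parent
    if parent is None:
        return
    if not node.marked:
        node.marked = True              # first child lost: just mark
    else:
        cut(root_list, node)            # second loss: cut this node out too
        cascading_cut(root_list, parent)

def decrease_key(root_list, node, new_key):
    node.key = new_key
    parent = node.parent
    if parent is not None and parent.key > node.key:
        cut(root_list, node)            # heap property violated: cut to root
        cascading_cut(root_list, parent)
```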

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgg3colsxg75mcbg7lf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgg3colsxg75mcbg7lf8.png" alt=" " width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Fibonacci Numbers Appear:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now consider the smallest possible tree for each degree after allowing cascading cuts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A tree with degree 0 has 1 node.&lt;/li&gt;
&lt;li&gt;A tree with degree 1 minimally has 2 nodes.&lt;/li&gt;
&lt;li&gt;For higher degrees, each subtree rooted at a child node must itself meet a minimum degree requirement, due to our "one-child-loss" rule.&lt;/li&gt;
&lt;li&gt;It turns out that this minimal subtree size precisely follows the Fibonacci sequence:

&lt;ul&gt;
&lt;li&gt;The size of the smallest possible tree for degree &lt;strong&gt;d&lt;/strong&gt; equals the sum of sizes of two previous smaller-degree minimal trees, which exactly matches how Fibonacci numbers are defined.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjx6x83fsfqyti6i7vjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjx6x83fsfqyti6i7vjj.png" alt=" " width="800" height="305"&gt;&lt;/a&gt;&lt;br&gt;
For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A degree-0 tree contains &lt;strong&gt;F_2 = 1&lt;/strong&gt; node.&lt;/li&gt;
&lt;li&gt;A degree-1 tree contains &lt;strong&gt;F_3 = 2&lt;/strong&gt; nodes.&lt;/li&gt;
&lt;li&gt;A degree-2 tree contains &lt;strong&gt;F_4 = 3&lt;/strong&gt; nodes.&lt;/li&gt;
&lt;li&gt;A degree-3 tree contains &lt;strong&gt;F_5 = 5&lt;/strong&gt; nodes.&lt;/li&gt;
&lt;li&gt;A degree-4 tree contains &lt;strong&gt;F_6 = 8&lt;/strong&gt; nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In conclusion, Fibonacci Heaps balance laziness (fast Insert and DecreaseKey) with periodic cleanup (ExtractMin consolidations), ensuring optimal amortized complexity for priority queue operations. This design leverages amortization and binomial-tree-like structures to achieve these impressive theoretical guarantees.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Operation&lt;/th&gt;
&lt;th&gt;Fibonacci Heap Complexity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Insert&lt;/td&gt;
&lt;td&gt;Amortized   O(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ExtractMin&lt;/td&gt;
&lt;td&gt;Amortized   O(log n)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DecreaseKey&lt;/td&gt;
&lt;td&gt;Amortized   O(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Merge&lt;/td&gt;
&lt;td&gt;Amortized   O(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This insightful YouTube video explains The Fibonacci Heap operations in detail: &lt;a href="https://www.youtube.com/watch?v=6JxvKfSV9Ns&amp;amp;t=1561s" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=6JxvKfSV9Ns&amp;amp;t=1561s&lt;/a&gt;&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>computerscience</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Convolution</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Fri, 07 Mar 2025 20:14:09 +0000</pubDate>
      <link>https://dev.to/vivekvohra/convolution-267i</link>
      <guid>https://dev.to/vivekvohra/convolution-267i</guid>
      <description>&lt;h3&gt;
  
  
  CONVOLUTION
&lt;/h3&gt;

&lt;p&gt;Convolution is a mathematical operation, denoted by the operator “*”. Graphically, it expresses how the 'shape' of one function is modified by the other.&lt;/p&gt;

&lt;h4&gt;
  
  
  It is applicable only to Linear Time Invariant Systems.
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Time Invariant&lt;/strong&gt;: systems where a shift in input results in an identical shift in output. For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklb8gq9lmqnnz5flf4hh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklb8gq9lmqnnz5flf4hh.png" alt=" " width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Linear&lt;/strong&gt;: In simple terms, the graph of output versus input must be a straight line through the origin, with no saturation or dead time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48733gqnelv059yhb65e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48733gqnelv059yhb65e.png" alt=" " width="800" height="196"&gt;&lt;/a&gt;&lt;br&gt;
For discrete systems, the formula is :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrbd8ulp5vlypf9z3dvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrbd8ulp5vlypf9z3dvu.png" alt=" " width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyb2uw5xhpold5wv4rcjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyb2uw5xhpold5wv4rcjd.png" alt=" " width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/QmcoPYUfbJ8?si=2CCf64_x9fGl7jx5" rel="noopener noreferrer"&gt;https://youtu.be/QmcoPYUfbJ8?si=2CCf64_x9fGl7jx5&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this video, convolution is explained with a great analogy. Suppose we want to find how much smoke burning several matchsticks produces. Here we take a time frame of 5 minutes. Let's define 2 functions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smoke function: S(t)&lt;/strong&gt; - It describes the amount of smoke produced by a single matchstick across time. It might be an exponential decay graph. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3vdllf4tp6wv0bl191m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3vdllf4tp6wv0bl191m.png" alt=" " width="450" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Firework function: F(t)&lt;/strong&gt;- It describes the number of matchsticks lit per minute. &lt;br&gt;
Let it be a linear function: y = x.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At t=0, 1 match was lit: &lt;strong&gt;total smoke&lt;/strong&gt; = 1*S(0).
&lt;/li&gt;
&lt;li&gt;At t=1, 2 matches were lit: &lt;strong&gt;total smoke&lt;/strong&gt; = smoke from the 2 sticks lit at this minute (they have just started burning) = 2*S(0), plus the smoke of the previous stick after 1 minute of burning = 1*S(1).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsfxxu30tim88q2mkzm2o.png" alt=" " width="702" height="599"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;at minute 0&lt;/th&gt;
&lt;th&gt;1*S(0)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;at minute 1&lt;/td&gt;
&lt;td&gt;2*S(0)+ 1*S(1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;at minute 2&lt;/td&gt;
&lt;td&gt;3*S(0) + prev(t + 1)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Thus we can conclude that the total smoke indeed follows the discrete convolution equation above.&lt;/p&gt;
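&lt;p&gt;The matchstick arithmetic can also be checked numerically. Here is a minimal sketch, assuming a made-up smoke profile &lt;code&gt;S&lt;/code&gt; and lighting schedule &lt;code&gt;F&lt;/code&gt;, using NumPy's &lt;code&gt;convolve&lt;/code&gt;, which implements exactly this discrete sum:&lt;/p&gt;

```python
import numpy as np

# Hypothetical smoke profile of one matchstick: S(0), S(1), S(2), ... (decaying)
S = np.array([8.0, 4.0, 2.0, 1.0, 0.5])
# Matchsticks lit at each minute: F(0)=1, F(1)=2, F(2)=3
F = np.array([1.0, 2.0, 3.0])

# Discrete convolution: total[n] = sum over k of F[k] * S[n - k]
total = np.convolve(F, S)

print(total[0])  # minute 0: 1*S(0)                  = 8.0
print(total[1])  # minute 1: 2*S(0) + 1*S(1)         = 20.0
print(total[2])  # minute 2: 3*S(0) + 2*S(1) + 1*S(2) = 34.0
```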

&lt;h4&gt;
  
  
  We can visualize convolution as a sliding window:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem6tiixuo9x5tj8e8pfb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem6tiixuo9x5tj8e8pfb.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3zc60y8lvopj06kod2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3zc60y8lvopj06kod2c.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiecqe02361zn3p8rszbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiecqe02361zn3p8rszbq.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhq8t3fnlbm139wk4p22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhq8t3fnlbm139wk4p22.png" alt=" " width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if our 1st function is very large compared to our 2nd function, our visualization will look somewhat like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv8wouzayxyesn3tlwub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv8wouzayxyesn3tlwub.png" alt=" " width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The 2-D Version (Image Processing)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6lcbxu908bk3m0pd8er.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6lcbxu908bk3m0pd8er.png" alt=" " width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or, using an alternate illustration:&lt;/p&gt;

&lt;p&gt;In the 2-D case, we can compute this matrix convolution at position (2,2) by flipping the kernel and multiplying element-wise.&lt;/p&gt;

&lt;p&gt;i.e. = (i*1)+(h*2)+(g*3)+(f*4)+ (e*5) + (d*6) + (c*7) + (b*8) + (a*9)&lt;/p&gt;

&lt;p&gt;In image processing, the 2nd function is known as the kernel, which in this case is a 3x3 matrix:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[1/9, 1/9, 1/9],
[1/9, 1/9, 1/9],
[1/9, 1/9, 1/9]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;We could have implemented our blur function by choosing the 2nd function (the kernel) such that the sum of all its entries is 1, and then performing the convolution. (Remember: while multiplying, we have to flip the matrix!)&lt;/p&gt;
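&lt;p&gt;As a sketch of the flip-and-multiply step, here is the box-blur kernel applied at one pixel. &lt;code&gt;conv2d_at&lt;/code&gt; and the 3x3 image are illustrative, not library code:&lt;/p&gt;

```python
import numpy as np

def conv2d_at(image, kernel, r, c):
    """Convolution at one position: flip the kernel, then multiply and sum."""
    k = kernel.shape[0] // 2
    flipped = kernel[::-1, ::-1]                       # the flip step
    patch = image[r - k:r + k + 1, c - k:c + k + 1]    # neighbourhood around (r, c)
    return (patch * flipped).sum()

# 3x3 box-blur kernel: entries sum to 1
kernel = np.full((3, 3), 1 / 9)

image = np.array([[10., 20., 30.],
                  [40., 50., 60.],
                  [70., 80., 90.]])

# Blurred value at the centre pixel: the mean of the 3x3 neighbourhood
print(conv2d_at(image, kernel, 1, 1))
```

Because the box-blur kernel is symmetric, the flip changes nothing here, but an asymmetric kernel (e.g. an edge detector) would give different results without it.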

</description>
      <category>beginners</category>
      <category>basic</category>
      <category>programming</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Thu, 20 Feb 2025 09:06:15 +0000</pubDate>
      <link>https://dev.to/vivekvohra/-5lg</link>
      <guid>https://dev.to/vivekvohra/-5lg</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/vivekvohra" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1422753%2F000149b3-ce47-4349-8a4f-1537fe8d5ada.jpg" alt="vivekvohra"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/vivekvohra/part-1-detecting-alzheimers-with-eeg-and-deep-learning-theory-motivation-and-preprocessing-1hd1" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Part 1: Detecting Alzheimer’s with EEG and Deep Learning – Theory, Motivation, and Preprocessing&lt;/h2&gt;
      &lt;h3&gt;Vivek Vohra ・ Feb 20&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#deeplearning&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#machinelearning&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#tensorflow&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>tensorflow</category>
    </item>
    <item>
      <title>Part 1: Detecting Alzheimer’s with EEG and Deep Learning – Theory, Motivation, and Preprocessing</title>
      <dc:creator>Vivek Vohra</dc:creator>
      <pubDate>Thu, 20 Feb 2025 09:00:17 +0000</pubDate>
      <link>https://dev.to/vivekvohra/part-1-detecting-alzheimers-with-eeg-and-deep-learning-theory-motivation-and-preprocessing-1hd1</link>
      <guid>https://dev.to/vivekvohra/part-1-detecting-alzheimers-with-eeg-and-deep-learning-theory-motivation-and-preprocessing-1hd1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Alzheimer’s disease (AD) is a challenging neurodegenerative disorder that affects millions of people worldwide. Many people delay seeking medical help because they believe memory loss is a natural part of growing old. This leads to late diagnosis and fewer treatment options. Traditional diagnostic tools like PET scans, cerebrospinal fluid tests, and MRI are invasive, costly, and not easily accessible.&lt;/p&gt;

&lt;p&gt;As part of my ongoing research efforts, I deployed an experimental prototype that uses an EEG data set from OpenNeuro, combined with machine learning, to explore early detection of Alzheimer’s. Although this work is experimental and will not be used in the final research publication, it has deepened my skills in signal processing, feature extraction, and model development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Alzheimer’s Detection Matters
&lt;/h2&gt;

&lt;p&gt;Early detection of Alzheimer’s can lead to timely intervention, which may slow the progression of the disease and improve the quality of life for patients, since the disease is currently incurable. Studies have shown that increased theta power, decreased alpha power, and disrupted gamma coherence are often associated with Alzheimer’s. By applying deep learning to these spectral features, we aim to create a tool that could eventually assist clinicians in making early and accurate diagnoses.&lt;/p&gt;

&lt;h2&gt;
  
  
  Theoretical Background: PSD, DSP, and EEG Signals
&lt;/h2&gt;

&lt;p&gt;A core part of this project is extracting power spectral density (PSD) features from EEG signals. PSD analysis reveals how the power of a signal is distributed across different frequencies. Using Welch’s method— an approach that divides the signal into overlapping segments, computes the Fast Fourier Transform (FFT) on each, and averages the results— we obtain a reliable estimate of the PSD.&lt;br&gt;
This process is a fundamental aspect of digital signal processing (DSP) and helps transform raw EEG data into a structured frequency-domain representation that highlights biomarkers related to Alzheimer’s.&lt;/p&gt;
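&lt;p&gt;As a quick sketch of the idea (using SciPy's &lt;code&gt;welch&lt;/code&gt; directly here; the 10 Hz test signal and sampling rate are made up for illustration):&lt;/p&gt;

```python
import numpy as np
from scipy.signal import welch

np.random.seed(0)             # reproducible noise for the illustration
fs = 250                      # hypothetical EEG sampling rate, Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of data

# Synthetic "alpha-like" signal: a 10 Hz sine buried in noise
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Welch: split into overlapping segments, FFT each, average the periodograms
freqs, psd = welch(x, fs=fs, nperseg=fs)  # 1 s segments, 50% overlap by default

print(freqs[np.argmax(psd)])  # the PSD peak should sit near 10 Hz
```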
&lt;h2&gt;
  
  
  The Preprocessing Pipeline
&lt;/h2&gt;

&lt;p&gt;Before training the model, raw EEG recordings must be transformed into meaningful features. The dataset used here is from OpenNeuro, which is already extensively preprocessed, providing a clean dataset. Here’s a breakdown of the preprocessing steps implemented:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfzloqr6vc9nq6t06sir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfzloqr6vc9nq6t06sir.png" alt="Pipeline" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Data Loading and Label Mapping
&lt;/h3&gt;

&lt;p&gt;We begin by loading EEG data using MNE-Python and reading participant metadata from a TSV file. The metadata maps diagnostic groups—‘A’ for Alzheimer’s, ‘F’ for Frontotemporal Dementia, and ‘C’ for healthy controls—to numeric labels.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;

&lt;span class="n"&gt;metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Dataset/participants.tsv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sep&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\t&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;group_mapping&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;A&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;F&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;C&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# Map diagnostic groups to integers
&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Group&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;group_mapping&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;subject_labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;participant_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;label&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This mapping is essential because it links each subject’s EEG data with their clinical result, thus helping us with supervised learning.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. EEG Signal Processing
&lt;/h3&gt;

&lt;p&gt;Although this EEG data was cleaned, it still had several unwanted frequencies. We only need specific frequencies for our analysis, so we apply an FIR filter (0.5–45 Hz) to remove unwanted frequencies (e.g., power line noise).&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fir_design&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;firwin&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then, we segment the continuous data into 2-second epochs with a 1-second overlap. This step captures transient neural patterns relevant to Alzheimer’s.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;epochs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mne&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;make_fixed_length_epochs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;2.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;overlap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;preload&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  3. PSD Calculation and Feature Extraction
&lt;/h3&gt;

&lt;p&gt;We use Welch’s method to compute the PSD for each epoch and then extract relative band power (RBP) features for the standard EEG frequency bands: delta, theta, alpha, beta, and gamma. This step involves averaging the power within each frequency range and normalizing by the total power, resulting in a 4D tensor (epochs, channels, bands, 1) suitable as input for a deep learning model.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;psd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;epochs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compute_psd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;welch&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fmin&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fmax&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;PSDs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;freqs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;psd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;return_freqs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;freq_bands&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;delta&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;theta&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;alpha&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;beta&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;13&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gamma&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;45&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;band_power&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="nf"&gt;band &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fmin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fmax&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;freq_bands&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;IDX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logical_and&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;freqs&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;fmin&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;freqs&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;fmax&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;band_power&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;band&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;psds&lt;/span&gt;&lt;span class="p"&gt;[:,&lt;/span&gt; &lt;span class="p"&gt;:,&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;mean&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;bp_abs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;band_power&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt; &lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;total_power&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bp_abs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;axis&lt;/span&gt;&lt;span class="o"&gt;=-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;keepdims&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;rbp_relative&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bp_abs&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;total_power&lt;/span&gt;

&lt;span class="n"&gt;features&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rbp_relative&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reshape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rbp_relative&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;rbp_relative&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;rbp_relative&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  4. Label Vector Construction and Data Standardization
&lt;/h3&gt;

&lt;p&gt;Finally, we associate each epoch with its corresponding diagnostic label using the metadata mapping and concatenate all subject features to form the final input matrix &lt;code&gt;X.&lt;/code&gt; We also split our data into training and test sets. To improve training stability, we standardize &lt;code&gt;X&lt;/code&gt; using StandardScaler, but this requires data to be in 2D shape, so we reshape our data, apply the functions, and then reshape it back to the original. &lt;/p&gt;
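&lt;p&gt;The reshape, scale, reshape round trip can be sketched as follows. Plain NumPy stands in for &lt;code&gt;StandardScaler&lt;/code&gt; here, and the random tensor is a stand-in for the real feature matrix:&lt;/p&gt;

```python
import numpy as np

# Stand-in for the real feature tensor: (epochs, channels, bands, 1)
X = np.random.rand(100, 19, 5, 1)

# The scaler expects 2-D input, so flatten each epoch to one row of features
X2d = X.reshape(X.shape[0], -1)              # shape (100, 95)

# Standardize each feature column: zero mean, unit variance
mu, sigma = X2d.mean(axis=0), X2d.std(axis=0)
X2d_scaled = (X2d - mu) / sigma

# Reshape back to the 4-D shape the model expects
X_scaled = X2d_scaled.reshape(X.shape)

print(X_scaled.shape)  # (100, 19, 5, 1)
```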
&lt;h3&gt;
  
  
  5. Final Data Format
&lt;/h3&gt;

&lt;p&gt;After all these steps, we can print the shape of the final input matrix &lt;code&gt;X&lt;/code&gt;, the array we will feed into our model:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("X shape:", X.shape)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;We get the output:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;X shape: (69706, 19, 5, 1)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This implies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;69706 Epochs:&lt;/strong&gt;
This is the total number of epochs (or samples) extracted from all subjects. Each epoch represents a 2-second window of EEG data transformed into a feature map.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;19 Channels:&lt;/strong&gt; 
Each epoch's feature map has 19 rows, corresponding to 19 EEG channels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5 Frequency Bands:&lt;/strong&gt;
The five columns in each feature map represent the five frequency bands: delta, theta, alpha, beta, and gamma.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1 Channel (Grayscale Image):&lt;/strong&gt;
The final dimension (1) indicates that the data has a single channel, analogous to a grayscale image. Each pixel value corresponds to a particular EEG channel's normalized relative band power in a specific frequency band.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl88l7p5cebmira7sx5rd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl88l7p5cebmira7sx5rd.png" alt="RBP" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This blog post has covered the theoretical background of power spectral density (PSD) and digital signal processing (DSP) as they relate to EEG signals, explained why EEG is a promising tool for Alzheimer’s detection, and detailed the preprocessing steps that transform raw EEG data into meaningful features for deep learning. Although the model is still experimental, this pipeline lays a strong foundation for my future improvements and learnings.&lt;/p&gt;

&lt;p&gt;In Part 2, we will dive into the details of the model architecture and training strategies, discuss how machine learning components work to learn from these spectral features, and ultimately classify EEG recordings.&lt;/p&gt;



&lt;p&gt;Please visit my GitHub repository, EEG-ML-Experiment, for a detailed look at the code and further updates.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/vivekvohra" rel="noopener noreferrer"&gt;
        vivekvohra
      &lt;/a&gt; / &lt;a href="https://github.com/vivekvohra/EEG-ML-Experiment" rel="noopener noreferrer"&gt;
        EEG-ML-Experiment
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Automated EEG-Based Alzheimer’s Detection System
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;EEG-ML-Experiment&lt;/h1&gt;
&lt;/div&gt;

&lt;p&gt;Welcome to the EEG-ML-Experiment repository! This repository is dedicated to exploring various experimental models for processing EEG data using deep learning techniques. The overall goal is to develop and test different approaches for tasks like Alzheimer’s detection using EEG signals. Although these projects are experimental, they serve as an important learning tool and a foundation for future research and development.&lt;/p&gt;




&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Overview&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;This repository contains multiple experimental models, each implemented in its own subdirectory along with a dedicated README file. The main focus is on leveraging EEG data—specifically, features extracted from power spectral density (PSD) and relative band power—for diagnostic purposes. This work is part of my ongoing research efforts, and while the models are still in development and experimental in nature, they represent a significant learning experience in applying machine learning to biomedical signals.&lt;/p&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Related Blog Post&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;I wrote a blog post detailing the above.&lt;br&gt;
Read the full post…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/vivekvohra/EEG-ML-Experiment" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>machinelearning</category>
      <category>tensorflow</category>
    </item>
  </channel>
</rss>
