<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: ddodxy</title>
    <description>The latest articles on DEV Community by ddodxy (@ridhoajaaa).</description>
    <link>https://dev.to/ridhoajaaa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833034%2F44fa15e0-8eb9-4843-a424-a4a7b3538f43.jpeg</url>
      <title>DEV Community: ddodxy</title>
      <link>https://dev.to/ridhoajaaa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ridhoajaaa"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>ddodxy</dc:creator>
      <pubDate>Thu, 19 Mar 2026 04:33:27 +0000</pubDate>
      <link>https://dev.to/ridhoajaaa/-25j8</link>
      <guid>https://dev.to/ridhoajaaa/-25j8</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/ridhoajaaa" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833034%2F44fa15e0-8eb9-4843-a424-a4a7b3538f43.jpeg" alt="ridhoajaaa"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/ridhoajaaa/-how-i-built-an-ai-powered-literature-review-tool-for-thesis-students-5833" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;How I Built an AI-Powered Literature Review Tool for Thesis Students&lt;/h2&gt;
      &lt;h3&gt;ddodxy ・ Mar 19&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#ai&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#python&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#showdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>webdev</category>
      <category>ai</category>
      <category>python</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How I Built an AI-Powered Literature Review Tool for Thesis Students</title>
      <dc:creator>ddodxy</dc:creator>
      <pubDate>Thu, 19 Mar 2026 03:24:11 +0000</pubDate>
      <link>https://dev.to/ridhoajaaa/-how-i-built-an-ai-powered-literature-review-tool-for-thesis-students-5833</link>
      <guid>https://dev.to/ridhoajaaa/-how-i-built-an-ai-powered-literature-review-tool-for-thesis-students-5833</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;From scraping 3 academic databases to AI summaries — a solo build story&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;The Problem That Started It All&lt;/h2&gt;

&lt;p&gt;Every thesis student knows the pain. You sit down with a research topic, open Google Scholar, and spend the next &lt;strong&gt;3-4 hours&lt;/strong&gt; manually:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Searching across Google Scholar, Scopus, and Semantic Scholar separately&lt;/li&gt;
&lt;li&gt;Downloading papers one by one&lt;/li&gt;
&lt;li&gt;Copy-pasting metadata into a spreadsheet&lt;/li&gt;
&lt;li&gt;Repeating this every time your advisor asks for "more references"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was doing exactly this for my own thesis when I thought — &lt;em&gt;this entire workflow is automatable&lt;/em&gt;. So I built &lt;strong&gt;LitAssist&lt;/strong&gt;: a full-stack web app that scrapes journals from 3 sources, processes them through a Python pipeline, and generates AI literature reviews using Gemini.&lt;/p&gt;

&lt;p&gt;Here's everything I learned building it.&lt;/p&gt;




&lt;h2&gt;Tech Stack Overview&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Frontend:  Alpine.js + Tailwind CSS (MPA, no build framework)
Backend:   Node.js + Express 5 + Socket.IO
Database:  MongoDB + Mongoose
Scraping:  Puppeteer (Google Scholar) + Semantic Scholar API
AI:        Google Gemini 2.5 Flash
Infra:     Podman + Docker Compose
Tunnel:    ngrok (for public access during dev)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key architectural decision: &lt;strong&gt;hybrid Node.js + Python pipeline&lt;/strong&gt;. Node handles browser automation and the web server. Python handles data cleaning, deduplication, and classification. Each tool does what it's best at.&lt;/p&gt;




&lt;h2&gt;Architecture Deep Dive&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User clicks "Start Scrape"
        │
        ▼
  Socket.IO event → scraper/index.js (Node.js)
        │
        ├── Google Scholar (Puppeteer + Chromium)
        ├── Scopus (Semantic Scholar API)  
        └── Semantic Scholar API
        │
        ▼
  jurnal_mentah.json (raw data)
        │
        ▼
  processor/main.py (Python + Pandas)
  ├── Clean &amp;amp; normalize
  ├── Detect duplicates
  ├── Classify categories
  └── Calculate relevance scores
        │
        ▼
  MongoDB (via insertMany bulk)
        │
        ▼
  Dashboard updates via Socket.IO
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
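
&lt;p&gt;To illustrate the dedup step in the diagram above: papers can be keyed by DOI when present, otherwise by a normalized title. The real processor does this in Python with Pandas; this JavaScript sketch only shows the idea, and the field names are assumptions.&lt;/p&gt;

```javascript
// Dedup idea from the Python step, sketched in JavaScript:
// key by DOI when present, otherwise by a normalized title.
function dedupePapers(papers) {
  const seen = new Set();
  const unique = [];
  for (const paper of papers) {
    const key = paper.doi
      ? paper.doi.toLowerCase()
      : paper.title.toLowerCase().replace(/[^a-z0-9]/g, "");
    if (!seen.has(key)) {
      seen.add(key);
      unique.push(paper);
    }
  }
  return unique;
}
```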



&lt;h3&gt;Why Socket.IO for Real-Time Updates?&lt;/h3&gt;

&lt;p&gt;The scraping process takes 1-5 minutes depending on the target count and whether Google Scholar triggers a CAPTCHA. A regular HTTP request would time out. Socket.IO lets me stream progress updates to the frontend in real time: the user sees exactly which source is being scraped and how many results are coming in.&lt;/p&gt;
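
&lt;p&gt;A progress event like the ones described might look like this (the event name and payload fields are illustrative, not taken from the repo):&lt;/p&gt;

```javascript
// Sketch of a progress event from scraper to dashboard.
// "scrape_progress" and the payload fields are illustrative names.
function reportProgress(io, socketId, source, found, target) {
  io.to(socketId).emit("scrape_progress", {
    source,                                   // e.g. "Google Scholar"
    found,                                    // results collected so far
    target,                                   // requested total
    percent: Math.round((found / target) * 100),
  });
}
```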




&lt;h2&gt;The Hardest Problem: CAPTCHA&lt;/h2&gt;

&lt;p&gt;Google Scholar aggressively serves CAPTCHAs to block bots. Most scraping tools either fail silently or get their IP permanently banned.&lt;/p&gt;

&lt;p&gt;My solution: &lt;strong&gt;noVNC + xvfb + x11vnc&lt;/strong&gt; running inside the container.&lt;/p&gt;

&lt;p&gt;When Google Scholar serves a CAPTCHA:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The scraper detects it and pauses&lt;/li&gt;
&lt;li&gt;Sends a Socket.IO event to the frontend&lt;/li&gt;
&lt;li&gt;Opens an embedded noVNC panel in the dashboard&lt;/li&gt;
&lt;li&gt;User solves the CAPTCHA visually, directly in the browser&lt;/li&gt;
&lt;li&gt;Scraper resumes automatically&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is the difference between a tool that works once and a tool that works in production.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Detect CAPTCHA and notify client&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isCaptcha&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;form#captcha-form&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isCaptcha&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;io&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;socketId&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;emit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;captcha_required&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
    &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;CAPTCHA detected. Please solve it in the panel below.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; 
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="c1"&gt;// Wait for user to solve&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;waitForCaptchaResolved&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;page&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;socketId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
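
&lt;p&gt;One way &lt;code&gt;waitForCaptchaResolved&lt;/code&gt; could work is to poll the page until the CAPTCHA form disappears. This is a sketch, not the repo's exact code; &lt;code&gt;io&lt;/code&gt; and the poll interval are passed in explicitly here.&lt;/p&gt;

```javascript
// Poll until the CAPTCHA form is gone, then tell the client to resume.
// A sketch: io and the poll interval are passed in explicitly.
async function waitForCaptchaResolved(page, socketId, io, pollMs = 3000) {
  while (true) {
    const stillThere = await page.$("form#captcha-form");
    if (stillThere === null) break; // user solved it in the noVNC panel
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  io.to(socketId).emit("captcha_resolved", { message: "Resuming scrape." });
}
```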






&lt;h2&gt;Freemium Model in Practice&lt;/h2&gt;

&lt;p&gt;LitAssist has three roles: &lt;strong&gt;Free&lt;/strong&gt;, &lt;strong&gt;Premium&lt;/strong&gt;, and &lt;strong&gt;Admin&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Free&lt;/th&gt;
&lt;th&gt;Premium&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Lifetime quota&lt;/td&gt;
&lt;td&gt;10 journals&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scrapes/day&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max target per scrape&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI Summary&lt;/td&gt;
&lt;td&gt;✗&lt;/td&gt;
&lt;td&gt;✓&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Queue priority&lt;/td&gt;
&lt;td&gt;Standard&lt;/td&gt;
&lt;td&gt;Priority&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Implementing this was straightforward with MongoDB user documents and middleware:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Quota check middleware&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;checkQuota&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;next&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;free&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;quotaUsed&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;403&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; 
      &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Quota exhausted. Upgrade to Premium.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; 
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
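
&lt;p&gt;The per-day limit in the table can be enforced the same way. A sketch of the check, assuming the user document tracks &lt;code&gt;lastScrapeDay&lt;/code&gt; and &lt;code&gt;scrapesToday&lt;/code&gt; (field names are mine, not the repo's):&lt;/p&gt;

```javascript
// Daily scrape limit from the table: 2/day for free, unlimited for premium.
// lastScrapeDay / scrapesToday are assumed fields on the user document.
const DAILY_LIMIT = 2;

function canScrapeToday(user, now = new Date()) {
  if (user.role === "premium") return true;
  const today = now.toISOString().slice(0, 10); // "YYYY-MM-DD"
  if (user.lastScrapeDay !== today) return true; // first scrape of the day
  return !(user.scrapesToday >= DAILY_LIMIT);
}
```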






&lt;h2&gt;Security Hardening&lt;/h2&gt;

&lt;p&gt;After building the core features, I ran a full security audit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OWASP ZAP&lt;/strong&gt; baseline scan → fixed CSP headers, removed CDN wildcards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nikto&lt;/strong&gt; web server scan → disabled ETag inode leaks, removed X-Powered-By&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Trivy&lt;/strong&gt; dependency scan → 0 CVEs in npm packages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm audit&lt;/strong&gt; → 0 vulnerabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helmet.js&lt;/strong&gt; → full security header suite&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;express-rate-limit&lt;/strong&gt; → rate limiting on auth endpoints (verified: 99.98% blocked in load test)&lt;/li&gt;
&lt;/ul&gt;
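
&lt;p&gt;The blocking behavior verified in the load test comes down to a fixed-window counter per IP. This toy limiter shows the mechanism; the app itself uses &lt;code&gt;express-rate-limit&lt;/code&gt; rather than hand-rolled code:&lt;/p&gt;

```javascript
// Fixed-window rate limiting, the mechanism behind express-rate-limit.
// Toy version for illustration only.
function makeLimiter(limit, windowMs) {
  const hits = new Map(); // ip -> { count, windowStart }
  return function allowed(ip, now = Date.now()) {
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(ip, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return !(entry.count > limit);
  };
}
```

&lt;p&gt;&lt;code&gt;express-rate-limit&lt;/code&gt; layers per-route config and standard rate-limit response headers on top of this same idea.&lt;/p&gt;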

&lt;p&gt;ZAP final score: &lt;strong&gt;0 FAIL, 7 WARN&lt;/strong&gt; (all remaining warnings are CDN trade-offs or false positives).&lt;/p&gt;




&lt;h2&gt;Performance Results (Lighthouse Mobile)&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Category&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Performance&lt;/td&gt;
&lt;td&gt;87&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Accessibility&lt;/td&gt;
&lt;td&gt;93&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best Practices&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SEO&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Key optimizations that moved the needle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Migrated from &lt;strong&gt;Tailwind Play CDN → PostCSS build&lt;/strong&gt; (400KB → 13KB CSS)&lt;/li&gt;
&lt;li&gt;Switched Alpine.js from CDN to &lt;strong&gt;local vendor file&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Added &lt;strong&gt;gzip compression&lt;/strong&gt; via &lt;code&gt;compression&lt;/code&gt; middleware&lt;/li&gt;
&lt;li&gt;Implemented &lt;strong&gt;font-display: swap&lt;/strong&gt; for Google Fonts&lt;/li&gt;
&lt;li&gt;Added proper &lt;strong&gt;cache headers&lt;/strong&gt; (&lt;code&gt;immutable&lt;/code&gt; for assets, 1hr for HTML)&lt;/li&gt;
&lt;/ul&gt;
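
&lt;p&gt;The cache-header rule from the last bullet can be expressed as a tiny helper. In practice these headers would be set via &lt;code&gt;express.static&lt;/code&gt; options and middleware; the extension list here is an assumption:&lt;/p&gt;

```javascript
// Cache policy helper: long-lived immutable caching for static assets,
// a one-hour TTL for HTML. The extension list is an assumption.
function cacheControlFor(filePath) {
  if (/\.(css|js|woff2|png|jpg)$/.test(filePath)) {
    return "public, max-age=31536000, immutable";
  }
  return "public, max-age=3600"; // 1 hour for HTML
}
```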




&lt;h2&gt;Load Testing (k6)&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Scenario 1: 100 concurrent users, static pages
→ 7,493 requests | 0% error | 6.78ms avg response

Scenario 2: 10 concurrent users, full user journey  
→ 1,665 requests | 0% error | 6ms avg response

Scenario 3: Rate limiter stress test (25 VUs hammering login)
→ 161,004 requests | 99.98% blocked after limit | 0 server crashes

Scenario 4: API stress test (20 VUs, all endpoints)
→ 3,668 requests | 0% error | 4ms avg response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server handles real load with sub-10ms response times. The rate limiter successfully blocks brute force attempts without crashing.&lt;/p&gt;




&lt;h2&gt;What I Would Do Differently&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Start with a build pipeline for CSS.&lt;/strong&gt; Using the Tailwind Play CDN for development is fine, but I had to migrate everything to PostCSS later. I should have set this up from day one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Plan the quota system early.&lt;/strong&gt; Adding freemium logic after the core was built required touching a lot of files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Use TypeScript.&lt;/strong&gt; The scraper logic is complex enough that TypeScript would have caught several bugs early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Separate the scraper into a microservice.&lt;/strong&gt; Right now it runs in the same process as the web server. Under heavy load, a long-running scrape job could block other requests.&lt;/p&gt;




&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Deploy to production (VPS with proper RAM for Chromium)&lt;/li&gt;
&lt;li&gt;[ ] Add Zotero integration for direct export&lt;/li&gt;
&lt;li&gt;[ ] Support more databases (PubMed, IEEE Xplore)&lt;/li&gt;
&lt;li&gt;[ ] Batch processing for multiple topics&lt;/li&gt;
&lt;li&gt;[ ] Mobile app wrapper&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Try It / Source Code&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/ridhoajaaa/Litassist-Public" rel="noopener noreferrer"&gt;github.com/ridhoajaaa/LitAssist-Public&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you're a thesis student who wants to automate your literature review process, feel free to try LitAssist. If you're a developer interested in the architecture, the full source is on GitHub.&lt;/p&gt;

&lt;p&gt;Questions? Drop them in the comments — happy to go deeper on any part of the stack.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with Node.js, Python, Alpine.js, Tailwind CSS, MongoDB, Socket.IO, and Puppeteer.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Tags: #nodejs #python #webdev #showdev #opensource&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>python</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
