<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sidali Assoul</title>
    <description>The latest articles on DEV Community by Sidali Assoul (@stormsidali2001).</description>
    <link>https://dev.to/stormsidali2001</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F895670%2Faa4c2336-ac6c-48a4-b9ff-1b9d1aa91dae.jpeg</url>
      <title>DEV Community: Sidali Assoul</title>
      <link>https://dev.to/stormsidali2001</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/stormsidali2001"/>
    <language>en</language>
    <item>
      <title>The Solution to Deploying Resume Projects Without Bearing the Cost</title>
      <dc:creator>Sidali Assoul</dc:creator>
      <pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/stormsidali2001/the-solution-to-deploying-resume-projects-without-bearing-the-cost-4abn</link>
      <guid>https://dev.to/stormsidali2001/the-solution-to-deploying-resume-projects-without-bearing-the-cost-4abn</guid>
      <description>&lt;p&gt;When submitting proposals on Upwork or job applications, I noticed something interesting.&lt;/p&gt;

&lt;p&gt;Clients expect you to add your project links to your resume or cover letter.&lt;/p&gt;

&lt;p&gt;Well, that works for simple projects, but my graduation project was a microservices-based app with 8 nodes, including 4 BERT inference models running in a "TensorFlow Serving" Docker container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsidaliassoul.com%2Fcomponent_diagram%2520%281%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fsidaliassoul.com%2Fcomponent_diagram%2520%281%29.png" alt="component_diagram (1).png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have already written a full blog article detailing everything about it &lt;a href="https://dev.to/blog/reflections-on-my-engineering-and-masters-thesis-building-an-ai-powered-imrad-analysis-saas-platform/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I thought to myself, "I'm not spending money or taking the risk of getting DDoS attacks from someone just for a resume project."&lt;/p&gt;

&lt;p&gt;That's why I decided to take a middle ground approach and mock the microservices calls and registry services at the Next.js API level.&lt;/p&gt;

&lt;p&gt;So instead of deploying all of the microservices nodes, I just deployed the Next.js app on Vercel and mocked all the microservices calls.&lt;/p&gt;

&lt;p&gt;As the Next.js API was the main orchestrator and the API composition service, I had to mock all the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The local Postgres database connection.&lt;/li&gt;
&lt;li&gt;Microservice instance lookups at the Eureka Discovery Server: mapping a service name to a list of IP addresses and ports.&lt;/li&gt;
&lt;li&gt;External API calls: Resend and Stripe.&lt;/li&gt;
&lt;li&gt;PDF extractor FastAPI microservice calls: This one was relatively simple, as it just receives a scientific paper, parses it, and then extracts the introduction section via some deterministic rules.&lt;/li&gt;
&lt;li&gt;AI IMRad moves and sub-moves FastAPI microservice calls. Mocking this one was also relatively easy, as I had only to mock a single function that takes a bunch of introduction sentences as an input and outputs the classification predictions.&lt;/li&gt;
&lt;li&gt;The User Data microservice, which stores introductions, predictions, and user feedback in a MongoDB database. This one was a bit tricky: I had to run my seeders in non-preview mode with all the microservices running locally on my machine, log in as an admin, download the feedback as JSON, place it in the public directory of my Next.js app, and use it to mock the data stored in the User Data microservice.&lt;/li&gt;
&lt;li&gt;Authentication and authorization: Well, obviously this one took me a bit of time. As I was using NextAuth, I had to create a wrapper around the session calls and return a mock session for one of three test users: "Premium User," "Admin," and "Visitor" (somebody who is not authenticated).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In Preview mode, when a recruiter tries to log in, he will see this page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc637g2bycnroah81iegy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc637g2bycnroah81iegy.png" alt="Screenshot 2026-05-04 at 10.23.16 AM.png" width="800" height="701"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instead of being asked to enter his credentials, he just selects a role among the three listed ones and gets logged in instantly.&lt;/p&gt;

&lt;p&gt;After logging in, I added this banner to emphasize again that he is in preview mode and to let him easily switch to another role if he wants to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt5qnaph8atdppqdsvvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqt5qnaph8atdppqdsvvb.png" alt="Screenshot 2026-05-04 at 10.26.34 AM.png" width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to explore the project preview, you can find it &lt;a href="https://graduation-imrad-introduction-analy.vercel.app/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I'm curious to know whether you're deploying all of your resume projects and then putting their links on your resume. Please let me know in the comments section!&lt;/p&gt;

</description>
      <category>career</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Reflections on My Engineering and Master's Thesis: Building an AI-Powered IMRaD Analysis SaaS Platform</title>
      <dc:creator>Sidali Assoul</dc:creator>
      <pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate>
      <link>https://dev.to/stormsidali2001/reflections-on-my-engineering-and-masters-thesis-building-an-ai-powered-imrad-analysis-saas-bh0</link>
      <guid>https://dev.to/stormsidali2001/reflections-on-my-engineering-and-masters-thesis-building-an-ai-powered-imrad-analysis-saas-bh0</guid>
      <description>&lt;h2&gt;
  
  
  How did everything start?
&lt;/h2&gt;

&lt;p&gt;It's March 2024, and I've totally lost track of time, as I've been fully immersed in a long-term freelancing mission with an interesting Upwork client.&lt;/p&gt;

&lt;p&gt;Then, I was soon to find out that I was running out of time! I needed to start working on my graduation project.&lt;/p&gt;

&lt;p&gt;But wait, I haven't even chosen a theme. No, but worse, I haven't even chosen a suitable supervisor for it.&lt;/p&gt;

&lt;p&gt;So just after finishing my part-time freelancing hours, I started leafing through the themes list, looking for an interesting one that could capture my curiosity and satisfy my knowledge hunger (I was craving NLP, btw).&lt;/p&gt;

&lt;p&gt;After a few hours of research and contemplation, I stumbled upon an interesting project theme entitled "&lt;strong&gt;Writing support system: IMRD text analysis&lt;/strong&gt;."&lt;/p&gt;

&lt;p&gt;I thought to myself, "A 'writing support system' implies that the project might involve some software design, which is something I've always loved, and 'text analysis' means finally having a chance to fulfill my craving for NLP with a project that actually looks interesting."&lt;/p&gt;

&lt;p&gt;The project was at a laboratory in the university where I've been studying, ESI-SBA, known as the "Higher School of Computer Science Sidi Bel Abbès." I have a lot of nice memories about the supervisor as well, as he taught me two modules in the past: File Systems and Data Structures in my second year and Data Analysis in my fourth year. So, going with that supervisor was just a no-brainer for me.&lt;/p&gt;

&lt;p&gt;I looked up his teaching schedule and found out he was teaching the 4th-year students at 9:00 AM, so I asked a friend to let him know I was genuinely interested in the theme he suggested and that I wanted to meet him to discuss my application for the project.&lt;/p&gt;

&lt;p&gt;As soon as he got out of class, I thanked my friend and approached the supervisor to discuss the details. My request was met with warm approval; he explained the problem briefly and sent me some PDFs to dive deeper into it.&lt;/p&gt;

&lt;p&gt;I immediately went to the university reading room and started skimming through the documents the supervisor sent, full of eagerness and joy, completely losing myself in a flow state. Finally, "I'm going to work on a serious NLP research project!"&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Not one graduation project, but actually two!&lt;/li&gt;
&lt;li&gt;But what does IMRAD actually mean?&lt;/li&gt;
&lt;li&gt;
Master's Thesis

&lt;ul&gt;
&lt;li&gt;State of the Art&lt;/li&gt;
&lt;li&gt;Experimentation Pipeline&lt;/li&gt;
&lt;li&gt;Contributions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Engineering Thesis

&lt;ul&gt;
&lt;li&gt;Research Contributions&lt;/li&gt;
&lt;li&gt;Engineering Contributions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Dataset Preparation&lt;/li&gt;

&lt;li&gt;Phase 1: Baseline Model (V1)&lt;/li&gt;

&lt;li&gt;Phase 2: Model Refinement (V2)&lt;/li&gt;

&lt;li&gt;Phase 3: Model Enhancement and Dataset Augmentation (V3)&lt;/li&gt;

&lt;li&gt;

A Microservices-Based SaaS Platform

&lt;ul&gt;
&lt;li&gt;Microservices Breakdown&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Data Models

&lt;ul&gt;
&lt;li&gt;Next.js Microservice&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Not one graduation project, but actually two!
&lt;/h2&gt;

&lt;p&gt;In Algeria, we don't just pick one; we’re required to complete both a master's thesis and an engineering project at the same time.&lt;/p&gt;

&lt;p&gt;Both my master's and engineering theses are about IMRAD classification, but each one focuses on a different angle of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  But what does IMRAD actually mean?
&lt;/h2&gt;

&lt;p&gt;Before we dive any further, there's something important that you need to be aware of.&lt;/p&gt;

&lt;p&gt;Research papers are the medium that is used to communicate scientific findings between researchers.&lt;/p&gt;

&lt;p&gt;Subjectivity has no place in science, so we can't mix scientific findings with other kinds of literature. And that's where IMRaD is useful.&lt;/p&gt;

&lt;p&gt;IMRaD stands for Introduction, Methods, Results, and Discussion. It's a logical and objective framework for writing scientific papers.&lt;/p&gt;

&lt;p&gt;In the Introduction section, the author should present what's known about the research area and all the existing work that's been done on it, explain the motivation behind his own study, and then conclude by showing how his research brings a unique perspective to the field.&lt;/p&gt;

&lt;p&gt;In the methods section, the researcher simply tries to make his research reproducible by describing the exact steps and materials he used to create his experiment.&lt;/p&gt;

&lt;p&gt;You can think of it as a comprehensive guide to recreate the experiment with the same steps and in the same conditions.&lt;/p&gt;

&lt;p&gt;Naturally, when talking about something, we tend to both describe it and explain it in detail, all in parallel.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I guess all developers have been victims of this at some stage of their careers: we like explaining the technical details of our project before even finishing describing the business side of it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Well, mixing the two together makes it difficult for the listener or the paper audience to understand both the results and your interpretation.&lt;/p&gt;

&lt;p&gt;We definitely need to pause a little bit between the two sections and separate the two concerns, "describing" and "explaining," to give the audience enough time to process and comprehend everything incrementally.&lt;/p&gt;

&lt;p&gt;In the IMRaD framework, the author is required to first describe the &lt;strong&gt;results&lt;/strong&gt; and data in an objective way with normal text, tables, figures, equations, code, and so on.&lt;/p&gt;

&lt;p&gt;And then, and only then, does the author interpret and explain the results in the &lt;strong&gt;discussion&lt;/strong&gt; section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Master's Thesis
&lt;/h2&gt;

&lt;p&gt;Think about this: there are millions of researchers around the world, which means that scientific papers are published daily at crazy rates.&lt;/p&gt;

&lt;p&gt;Ok, now imagine that you're a researcher sifting through all that vast sea of knowledge; it won't be an effortless task, right?&lt;/p&gt;

&lt;p&gt;To solve this problem, we need something that tells us which section of the text is a discussion and which one is a result.&lt;/p&gt;

&lt;p&gt;We need a classifier that is trained on text data that is annotated with the four types of IMRaD sections.&lt;/p&gt;

&lt;p&gt;Such a classifier can be used internally by search engines to index papers and provide features such as searching, filtering, and querying literature based on a specific IMRaD section.&lt;/p&gt;

&lt;p&gt;Well, that's what I studied and even implemented in my master's thesis entitled &lt;strong&gt;"Leveraging BERT and Data Augmentation for Robust Classification of IMRAD Sections in Research Papers."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After sifting through that vast sea of knowledge myself, reading a ton of research papers, I stumbled upon an interesting one that included a link to a publicly available dataset.&lt;/p&gt;

&lt;p&gt;And to my surprise, it included a bunch of paragraphs extracted from various research papers with their corresponding IMRaD annotation and an extra Related Work label.&lt;/p&gt;

&lt;p&gt;So I got the following hypothesis or research question:&lt;/p&gt;

&lt;p&gt;Can a BERT model, enhanced with a custom pipeline of data processing, LLM-based outlier detection, and data augmentation, achieve a high accuracy in the classification of IMRaD sections of scientific papers and outperform traditional machine learning approaches?&lt;/p&gt;

&lt;h3&gt;
  
  
  State of the Art
&lt;/h3&gt;

&lt;p&gt;In a master's thesis, you need to survey the current research and see what the state of the art is before getting started with your own approach.&lt;/p&gt;

&lt;p&gt;Therefore, I spent a lot of time looking for relevant papers, reading, filtering, and carefully choosing four key studies that I read and compared based on data size, annotation method, models used, and accuracy.&lt;/p&gt;

&lt;p&gt;I also studied a variety of relevant topics, including classic NLP techniques; text representations such as TF-IDF, BoW, and Word2Vec; traditional machine learning classifiers such as Naive Bayes and SVM; and modern deep learning advancements such as pre-trained models (BERT and GPT), LLMs, and prompt engineering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Experimentation Pipeline And Contributions
&lt;/h3&gt;

&lt;p&gt;The experimentation section is normally optional in a master's thesis, but I decided to add it anyway.&lt;/p&gt;

&lt;p&gt;Here’s a clear summary of the end-to-end experimental pipeline I built:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data collection from Hugging Face (~530k rows), then exploration, cleaning (removing non-natural language elements), and balancing to ~25k rows&lt;/li&gt;
&lt;li&gt;Traditional baseline (Logistic Regression + TF-IDF) accuracy: 63.78%&lt;/li&gt;
&lt;li&gt;Initial BERT fine-tuning at F1-score 0.7309&lt;/li&gt;
&lt;li&gt;LLM-based outlier detection/cleaning + data augmentation (generating synthetic paragraphs based on similarity with the existing ones to reach ~100k rows)&lt;/li&gt;
&lt;li&gt;Final fine-tuned BERT model with an F1-score of 0.9172 showing massive progressive gains over baseline&lt;/li&gt;
&lt;/ul&gt;
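
&lt;p&gt;As a rough illustration of the traditional baseline above (TF-IDF features fed into a Logistic Regression), here is a minimal scikit-learn sketch; the toy sentences and labels are mine, not from the actual dataset:&lt;/p&gt;

```python
# Minimal sketch of the traditional baseline: TF-IDF features + Logistic Regression.
# The real pipeline trained on a balanced set of IMRaD-labeled paragraphs; the four
# toy examples below are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_baseline(texts, labels):
    # Word and bigram TF-IDF features, then a linear classifier on top.
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)
    return clf

texts = [
    "Prior studies have explored section classification of papers.",  # Introduction-like
    "We trained the model using five-fold cross-validation.",         # Methods-like
    "The classifier reached an accuracy of 63.78 percent.",           # Results-like
    "These findings suggest the approach generalizes well.",          # Discussion-like
]
labels = ["I", "M", "R", "D"]
model = train_baseline(texts, labels)
```

&lt;p&gt;On the real ~25k-row set, this kind of baseline is what landed at 63.78% accuracy before moving to BERT.&lt;/p&gt;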

&lt;p&gt;The execution of the pipeline resulted in the creation of a newly annotated dataset (approx. 100k rows) and the training of a highly accurate fine-tuned BERT classifier (0.9172 F1 score), both published on Hugging Face.&lt;/p&gt;

&lt;h2&gt;
  
  
  Engineering Thesis
&lt;/h2&gt;

&lt;p&gt;For my graduation thesis, “Leveraging Gemini Pro and BERT for Automated IMRaD Classification: A Novel Dataset and SaaS Platform," I developed a project that covers two main domains: &lt;strong&gt;NLP deep learning research&lt;/strong&gt; and &lt;strong&gt;systems engineering&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I know. Yeah. It would have been much easier to just stick with the same topic as my master's thesis and combine the research and the engineering into a single study.&lt;/p&gt;

&lt;p&gt;But I’m not the kind to give up on something.&lt;/p&gt;

&lt;p&gt;My supervisor gave me the task of analyzing introductions, and no matter the difficulties I had due to the absence of a proper dataset, I wasn't going to let the topic slide just like that.&lt;/p&gt;

&lt;p&gt;Before I get into the nitty gritty of the research and development phases later in this post, here is a quick summary of my main contributions to the research and engineering parts of the project:&lt;/p&gt;

&lt;h3&gt;
  
  
  Research Contributions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Created a Gemini Pro data pipeline that included baseline generation, prompt refinement, outlier detection, and data augmentation.&lt;/li&gt;
&lt;li&gt;Used Random Forest, Logistic Regression, SVM, KNN, and Naive Bayes classifiers to validate the outputs incrementally.&lt;/li&gt;
&lt;li&gt;Created a custom 169,000-sentence dataset and fine-tuned four hierarchical BERT models to boost accuracy from a baseline of 44.61% to a surprisingly high peak F1 score of 98.21%.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Engineering Contributions
&lt;/h3&gt;

&lt;p&gt;Basically, in the engineering part, I took the outputs of the research part, which are the four hierarchical BERT classifiers, and then built a whole microservices-based SaaS platform on top of them.&lt;/p&gt;

&lt;p&gt;The SaaS platform is broken down over the network into 8 nodes:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Nodes&lt;/th&gt;
&lt;th&gt;Technologies&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gateway&lt;/td&gt;
&lt;td&gt;Nginx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Service Discovery&lt;/td&gt;
&lt;td&gt;Spring Cloud Eureka&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Frontend + API&lt;/td&gt;
&lt;td&gt;Next.js, Postgres, Stripe API&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PDF Extraction&lt;/td&gt;
&lt;td&gt;FastAPI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BERT Models&lt;/td&gt;
&lt;td&gt;TensorFlow Serving Container&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AI Moves &amp;amp; Sub Moves&lt;/td&gt;
&lt;td&gt;FastAPI, LangChain, Gemini Pro&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;User Data Service&lt;/td&gt;
&lt;td&gt;Express.js, MongoDB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Message Broker&lt;/td&gt;
&lt;td&gt;Redis&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
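
&lt;p&gt;For a concrete idea of how the orchestrator could talk to the "BERT Models" node: TensorFlow Serving exposes a REST predict endpoint of the form /v1/models/{name}:predict. Here is a minimal request-building sketch (the base URL and model name below are made up for illustration):&lt;/p&gt;

```python
import json

def build_predict_request(base_url, model_name, instances):
    # TensorFlow Serving's REST predict endpoint and JSON payload format.
    url = f"{base_url}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# The caller would POST `body` to `url` with Content-Type: application/json
# and read the "predictions" field of the JSON response.
url, body = build_predict_request("http://tf-serving:8501", "moves_bert", [["Some sentence."]])
```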

&lt;p&gt;I'd also like to add that I've followed the &lt;strong&gt;SDLC&lt;/strong&gt;, or software development life cycle, and written comprehensive documentation, from requirements analysis and specification through system design, implementation, and testing, to the delivery of the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dataset Preparation
&lt;/h2&gt;

&lt;p&gt;Before I could begin the research and development of my engineering project, I encountered a hurdle that almost made me give up on the theme.&lt;/p&gt;

&lt;p&gt;I was perplexed when I found that there are no research papers or public datasets that provide sentence-level labels for IMRaD introduction moves and sub-moves.&lt;/p&gt;

&lt;p&gt;My master's thesis went a bit smoother because I found a lot of research papers and publicly annotated datasets that provide "I," "M," "R," and "D" labels for paragraphs extracted from different papers. One of them is the &lt;strong&gt;"unarXive"&lt;/strong&gt; dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unarxive Dataset]
      (Labels: I,M,R,D,W)
               |
               | Filter &amp;amp;
               | Cleaning
               v
       [Filtered Results]

+-----------------------+---------+
| Text (Excerpt)        | Label   |
+-----------------------+---------+
| "Food recom..."       | I       |
| "Previous stu..."     | R       |
| "As discussed..."     | D       |
| "Initially..."        | I       |
| "The method..."       | M       |
| "Furthermore..."      | R       |
| "Results show..."     | D       |
+-----------------------+---------+

         CHALLENGES:
* Section-level annotations only.
* Lack of sentence-level
  granularity.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since I couldn't make use of the "Unarxive" dataset directly, the only solution for my problem was either paying for human annotators or using the closest form of intelligence that science could achieve, "LLMs," or large language models.&lt;/p&gt;

&lt;p&gt;Fortunately, that option was cheap at the time because the free tier for Gemini Pro was so generous, and I also used all the $150 free credit on my Google Cloud account.&lt;/p&gt;

&lt;p&gt;As I could neither obtain public data nor afford a human annotator, I designed a &lt;a href="https://github.com/stormsidali2001/graduation_IMRAD_introduction_analysis_SaaS/tree/main/notebooks" rel="noopener noreferrer"&gt;three-phase approach&lt;/a&gt; (V1→V2→V3) to &lt;strong&gt;overcome&lt;/strong&gt; the lack of a sentence-level granular dataset and to create a custom one tailored to my specific needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 1: Baseline Model (&lt;a href="https://github.com/stormsidali2001/graduation_IMRAD_introduction_analysis_SaaS/tree/main/notebooks/v1" rel="noopener noreferrer"&gt;V1&lt;/a&gt;)
&lt;/h2&gt;

&lt;p&gt;My solution to this problem was incremental.&lt;/p&gt;

&lt;p&gt;I started with a baseline model using the following workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I chose random introductions from the cleaned and filtered “unarXive” dataset.&lt;/li&gt;
&lt;li&gt;I broke the introductions into sentences.&lt;/li&gt;
&lt;li&gt;I fed every individual sentence into a Gemini LLM instance while guiding it with a well-crafted classification prompt.&lt;/li&gt;
&lt;li&gt;I then used the generated data to finetune a BERT model for classification.&lt;/li&gt;
&lt;li&gt;This model achieved an accuracy of 44.61%, which has, in my opinion, shown potential (because the accuracy was a little higher than a random guess for three classes).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[unarxive dataset]
  ↳ Cleaned &amp;amp; filtered
          |
          v
  [Introductions]
          |
       (split)
          v
    [Sentences]
          |
          v
   [Gemini LLMs] &amp;lt;--- [Classification Prompt]
          |
      (outputs)
          v
   [Predictions]
    ↳ Sentence -&amp;gt; Move 3
    ↳ Sentence -&amp;gt; Move 1
    ↳ Sentence -&amp;gt; Move 2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;LLMs by design are generalists. They are intended to handle any sort of situation. So how do we get them to do a particular task like classification?&lt;/p&gt;

&lt;p&gt;That’s where prompt engineering comes in. A prompt is essentially a set of instructions written in raw natural text.&lt;/p&gt;

&lt;p&gt;This allows us to limit the LLM's behavior, or specialize it for a given task, like classification, summarization, etc.&lt;/p&gt;

&lt;p&gt;In this version, I used a relatively simple prompt to tell each Gemini Pro instance to classify a given sentence into its corresponding IMRaD move.&lt;/p&gt;
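
&lt;p&gt;To make the V1 workflow concrete, here is a minimal Python sketch of the sentence splitting and prompt construction. The move names and prompt wording are illustrative, not the exact ones from the thesis:&lt;/p&gt;

```python
import re

# Illustrative move labels; the real prompt enumerated the thesis's own taxonomy.
MOVES = [
    "Move 1: establishing a territory",
    "Move 2: establishing a niche",
    "Move 3: occupying the niche",
]

def split_sentences(introduction):
    # Naive splitter on ., ! and ?; the real pipeline can use a proper tokenizer.
    parts = re.findall(r"[^.!?]+[.!?]*", introduction)
    return [p.strip() for p in parts if p.strip()]

def build_prompt(sentence):
    options = "\n".join(f"- {m}" for m in MOVES)
    return (
        "Classify the following sentence from a research-paper introduction "
        "into exactly one of these rhetorical moves:\n"
        f"{options}\n\nSentence: {sentence}\nAnswer with the move number only."
    )

# The actual call would go through the Gemini SDK, roughly:
#   import google.generativeai as genai
#   model = genai.GenerativeModel("gemini-pro")
#   label = model.generate_content(build_prompt(sentence)).text
```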

&lt;h2&gt;
  
  
  Phase 2: Model Refinement (&lt;a href="https://github.com/stormsidali2001/graduation_IMRAD_introduction_analysis_SaaS/tree/main/notebooks/v2" rel="noopener noreferrer"&gt;V2&lt;/a&gt;)
&lt;/h2&gt;

&lt;p&gt;In the second version of the pipeline, I decided to focus on enhancing the prompt.&lt;/p&gt;

&lt;p&gt;I thought to myself, 'What if the reason behind that low accuracy is that Gemini Pro &lt;strong&gt;didn't have enough context about the introduction&lt;/strong&gt;? Maybe I should &lt;strong&gt;embed the whole introduction in the prompt&lt;/strong&gt;.'&lt;/p&gt;

&lt;p&gt;In addition to that, instead of classifying only the moves, the enhanced prompt would allow Gemini to &lt;strong&gt;classify both moves and submoves in one go&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The comparative table below explains the steps that were taken.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Initial Prompts&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Enhanced Prompts&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Input&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A single sentence.&lt;/td&gt;
&lt;td&gt;A single introduction.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Classification Scope&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Classifies only moves.&lt;/td&gt;
&lt;td&gt;Classifies both moves and submoves in one go.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Output&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;A single-sentence prediction.&lt;/td&gt;
&lt;td&gt;A list of sentence predictions (with move and submove labels).&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These last changes were made to increase overall annotation speed and to give Gemini Pro a better understanding of each move through its corresponding sub-moves.&lt;/p&gt;
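
&lt;p&gt;A sketch of what the enhanced prompt's input/output contract could look like in code; the exact wording and JSON schema here are hypothetical:&lt;/p&gt;

```python
import json

def build_v2_prompt(introduction):
    # One prompt per introduction: the model must return per-sentence labels.
    return (
        "You will receive a full research-paper introduction. Split it into "
        "sentences and, for each sentence, output its rhetorical move and "
        "sub-move. Respond with a JSON list of objects with keys "
        '"sentence", "move", and "sub_move".\n\nIntroduction:\n' + introduction
    )

def parse_predictions(llm_output):
    # Validate the expected shape before trusting the annotations.
    preds = json.loads(llm_output)
    assert all("sentence" in p and "move" in p and "sub_move" in p for p in preds)
    return preds
```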

&lt;p&gt;To test the quality of the new dataset, I used simple ML classifiers with TF-IDF vectorization, avoiding the time and resource cost of a heavy model like BERT for simple benchmarking. The results were a surprise!&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Classifier&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Accuracy&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Random Forest&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;60.7 %&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Logistic Regression&lt;/td&gt;
&lt;td&gt;60.0 %&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Neural Network (Keras)&lt;/td&gt;
&lt;td&gt;57.5 %&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Naive Bayes&lt;/td&gt;
&lt;td&gt;55.6 %&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;K-Nearest Neighbors&lt;/td&gt;
&lt;td&gt;54.0 %&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Decision Tree&lt;/td&gt;
&lt;td&gt;51.6 %&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every single one of them managed to outperform the previous BERT model. That was a clear sign that I was making progress; the prompt enhancement was fruitful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Phase 3: Model Enhancement and Dataset Augmentation (&lt;a href="https://github.com/stormsidali2001/graduation_IMRAD_introduction_analysis_SaaS/tree/main/notebooks/v3" rel="noopener noreferrer"&gt;V3&lt;/a&gt;)
&lt;/h2&gt;

&lt;p&gt;For my final model, I built a custom dataset of 169k sentences using a custom pipeline that sat on top of the V2-generated data, combining Gemini-based outlier detection, Gemini-based data augmentation, and the fine-tuning of 4 custom BERT models.&lt;/p&gt;

&lt;p&gt;The final generated data was used to fine-tune four hierarchical, specialized BERT models, all published on Hugging Face: &lt;a href="https://huggingface.co/stormsidali2001/IMRAD_introduction_moves_classifier" rel="noopener noreferrer"&gt;Move Classifier&lt;/a&gt;, &lt;a href="https://huggingface.co/stormsidali2001/IMRAD-introduction-move-zero-sub-moves-classifier" rel="noopener noreferrer"&gt;Move 0 Sub-move Classifier&lt;/a&gt;, &lt;a href="https://huggingface.co/stormsidali2001/IMRAD-introduction-move-one-sub-moves-classifier" rel="noopener noreferrer"&gt;Move 1 Sub-move Classifier&lt;/a&gt;, and &lt;a href="https://huggingface.co/stormsidali2001/IMRAD-introduction-move-two-sub-moves-classifier" rel="noopener noreferrer"&gt;Move 2 Sub-move Classifier&lt;/a&gt;.&lt;/p&gt;
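
&lt;p&gt;Hierarchical inference with these models can be sketched as follows: the move classifier runs first, and its label routes the sentence to the matching sub-move classifier. This assumes the published checkpoints are standard Hugging Face text-classification models, and the label names below are illustrative:&lt;/p&gt;

```python
def classify_hierarchically(sentence, move_clf, submove_clfs):
    # move_clf(sentence) is expected to return [{"label": ..., "score": ...}];
    # submove_clfs maps a move label to its specialized sub-move classifier.
    move = move_clf(sentence)[0]["label"]
    sub = submove_clfs[move](sentence)[0]["label"] if move in submove_clfs else None
    return move, sub

# Loading the real checkpoints (downloads the weights):
#   from transformers import pipeline
#   move_clf = pipeline("text-classification",
#                       model="stormsidali2001/IMRAD_introduction_moves_classifier")
```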

&lt;p&gt;Believe me when I tell you that I was bursting with joy when I saw the validation and training accuracy graphs increasing in parallel.&lt;/p&gt;

&lt;p&gt;When the F1 score hit 0.98 for overall move classification and exceeded 0.89 for submove classification, I was literally screaming, "Yes, yes! It's not overfitting, and the F1 score is above 98%!"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpymvn7ufk6ezazh01rgq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpymvn7ufk6ezazh01rgq.png" alt="Training and validation accuracy curves for the fine-tuned BERT move classifier, showing the F1 score progressively reaching 98.21%" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This previous pipeline was powered by a series of carefully designed prompts for Gemini Pro.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[V2 Prompt]
        +
 [Moves &amp;amp; Sub-moves]
     examples
        |
        v
    (extends)
        |
        +--&amp;gt; [AI Outlier Detection]
        |
        +--&amp;gt; [AI Data Augmentation]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;base prompt&lt;/strong&gt; is the one I used in V2, with some extra ingredients, like enumerating all the moves and submoves and providing several examples for the model to follow for each submove.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;outlier detection&lt;/strong&gt; and &lt;strong&gt;data augmentation&lt;/strong&gt; prompts &lt;strong&gt;are based on&lt;/strong&gt; this &lt;strong&gt;base prompt&lt;/strong&gt; with a few additional modifications on top.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;outlier detection prompt&lt;/strong&gt; has Gemini Pro review previous predictions, flag mistaken ones as outliers, and either filter them out or correct them.&lt;/p&gt;

&lt;p&gt;Then the &lt;strong&gt;data augmentation prompt&lt;/strong&gt; asks Gemini to generate more sentences for each move, strictly respecting a bunch of rules.&lt;/p&gt;
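
&lt;p&gt;Applying the outlier detection verdicts to the dataset can be pictured with a small helper like this (the verdict format is hypothetical, just to show the keep/correct/drop logic):&lt;/p&gt;

```python
def apply_outlier_verdicts(rows, verdicts):
    # rows: list of {"sentence": ..., "move": ...} annotations.
    # verdicts: one per row, {"action": "keep" or "correct" or "drop",
    # "move": corrected label when action is "correct"}.
    cleaned = []
    for row, verdict in zip(rows, verdicts):
        if verdict["action"] == "drop":
            continue  # filtered out as an unfixable outlier
        if verdict["action"] == "correct":
            row = {**row, "move": verdict["move"]}  # relabeled by the LLM
        cleaned.append(row)
    return cleaned
```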

&lt;h2&gt;
  
  
  A Microservices-Based SaaS Platform
&lt;/h2&gt;

&lt;p&gt;After publishing the final four &lt;strong&gt;hierarchical classification BERT models&lt;/strong&gt; , I decided to create a &lt;a href="https://github.com/stormsidali2001/graduation_IMRAD_introduction_analysis_SaaS/tree/main" rel="noopener noreferrer"&gt;SaaS platform&lt;/a&gt; that makes full use of these models and helps researchers and students easily analyze the rhetorical structure of scientific paper introductions.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;IMRaD Analysis Platform&lt;/strong&gt; offers the following key features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upload a research paper (PDF) or manually paste introduction text&lt;/li&gt;
&lt;li&gt;Automatic sentence segmentation and classification of IMRaD moves and sub-moves with confidence scores&lt;/li&gt;
&lt;li&gt;Clean and intuitive visual interface for viewing analysis results&lt;/li&gt;
&lt;li&gt;User feedback system allowing corrections to predictions to help improve the model&lt;/li&gt;
&lt;li&gt;Premium features including detailed summaries and AI-generated author thought process&lt;/li&gt;
&lt;li&gt;Full user account management with Stripe subscription handling&lt;/li&gt;
&lt;li&gt;Administrator tools for managing users, reviewing feedback, and monitoring platform statistics&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Microservices Breakdown
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83d8lbajp58py5pie9uf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83d8lbajp58py5pie9uf.png" alt="Microservices architecture component diagram showing all 8 nodes: Nginx API gateway, Eureka discovery server, Next.js frontend and API, FastAPI PDF extractor, TensorFlow Serving, FastAPI LangChain AI analysis service, Express.js user data microservice, and Redis message broker" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's review the key microservices in our platform.&lt;/p&gt;

&lt;p&gt;The whole system is accessed via the &lt;strong&gt;API Gateway&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This component takes care of forwarding and load balancing user requests to our active internal microservice instances.&lt;/p&gt;

&lt;p&gt;All the microservices register themselves automatically with the &lt;strong&gt;Eureka Discovery Server&lt;/strong&gt;, which maintains a dynamic index of all live instances of each service.&lt;/p&gt;
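&lt;p&gt;Self-registration happens over Eureka's REST API. A hedged sketch, assuming a discovery server reachable at discovery-server:8761; the host names, ports, and app name are illustrative:&lt;/p&gt;

```python
import json
import urllib.request

# Sketch of self-registration against a Eureka discovery server via
# its REST API. Addresses and names here are illustrative.
EUREKA_URL = "http://discovery-server:8761/eureka"

def registration_payload(app: str, host: str, port: int) -> dict:
    """Build the JSON body Eureka expects when an instance registers."""
    return {
        "instance": {
            "instanceId": f"{host}:{app}:{port}",
            "app": app.upper(),
            "hostName": host,
            "ipAddr": host,
            "status": "UP",
            "port": {"$": port, "@enabled": "true"},
            "dataCenterInfo": {
                "@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo",
                "name": "MyOwn",
            },
        }
    }

def register(app: str, host: str, port: int) -> None:
    """POST the registration; Eureka answers 204 on success."""
    body = json.dumps(registration_payload(app, host, port)).encode()
    req = urllib.request.Request(
        f"{EUREKA_URL}/apps/{app.upper()}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises if the server is unreachable
```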

&lt;p&gt;Then we have the &lt;a href="https://github.com/stormsidali2001/graduation_IMRAD_introduction_analysis_SaaS/tree/main" rel="noopener noreferrer"&gt;Next.js microservice,&lt;/a&gt; which is actually composed of two parts:&lt;/p&gt;

&lt;p&gt;First, a frontend, which is responsible for rendering all the individual pages.&lt;/p&gt;

&lt;p&gt;And a backend that does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handles authentication and authorization&lt;/li&gt;
&lt;li&gt;Manages subscriptions via the third-party service Stripe&lt;/li&gt;
&lt;li&gt;Serves as a composition service that calls and aggregates results from multiple internal microservices&lt;/li&gt;
&lt;li&gt;Instead of &lt;strong&gt;hardcoding the IP and port&lt;/strong&gt; of each microservice, it queries the &lt;strong&gt;discovery server&lt;/strong&gt; for the active instances of a given microservice and performs &lt;strong&gt;client-side load balancing&lt;/strong&gt; across them&lt;/li&gt;
&lt;li&gt;Sends transactional emails with the Resend third-party API&lt;/li&gt;
&lt;li&gt;Uses a PostgreSQL database for persistent data&lt;/li&gt;
&lt;/ul&gt;
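&lt;p&gt;The client-side load balancing mentioned in the list above can be sketched as a simple round-robin over the instance list returned by the discovery server (the addresses are illustrative):&lt;/p&gt;

```python
import itertools

# Minimal sketch of client-side load balancing: the caller fetches the
# active instances of a service from the discovery server, then cycles
# through them round-robin. Instance addresses are illustrative.
class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self) -> str:
        """Return the address of the next instance to call."""
        return next(self._cycle)

# e.g. the instance list a discovery-server lookup might return
balancer = RoundRobinBalancer(["10.0.0.5:8000", "10.0.0.6:8000"])
```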

&lt;p&gt;Then we have the &lt;strong&gt;TensorFlow Serving microservice&lt;/strong&gt;, which serves our fine-tuned BERT models through an optimized, production-grade inference endpoint.&lt;/p&gt;
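&lt;p&gt;Calling the served models goes through TensorFlow Serving's REST predict endpoint. A minimal sketch, assuming a model named moves_classifier and a BERT-style input signature (both names are assumptions, not the actual deployment):&lt;/p&gt;

```python
import json
import urllib.request

# Sketch of a REST call against TensorFlow Serving's predict endpoint.
# The model name and input signature are illustrative; a fine-tuned
# BERT model typically expects token ids and attention masks.
TF_SERVING_URL = "http://tf-serving:8501/v1/models/moves_classifier:predict"

def predict_request(input_ids, attention_masks):
    """Build the JSON body for TF Serving's REST predict API."""
    return {
        "instances": [
            {"input_ids": ids, "attention_mask": mask}
            for ids, mask in zip(input_ids, attention_masks)
        ]
    }

def predict(batch: dict) -> dict:
    req = urllib.request.Request(
        TF_SERVING_URL,
        data=json.dumps(batch).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # shaped like {"predictions": [...]}
```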

&lt;p&gt;The &lt;a href="https://github.com/stormsidali2001/imrad_intros_moves_submoves_python_microservices" rel="noopener noreferrer"&gt;AI Moves and Sub-moves microservice&lt;/a&gt; does batch prediction of moves and sub-moves by forwarding requests to TensorFlow Serving. It also connects to the Gemini API to generate the intro summary, as well as the author's hypothetical thought process behind switching between the different moves and sub-moves while writing the intro.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;PDF Extractor microservice&lt;/strong&gt;, built on FastAPI, extracts introductions from uploaded research papers; it lives in the same repository.&lt;/p&gt;
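&lt;p&gt;As a rough idea of what introduction extraction can look like, here is an assumed heuristic (the actual service may well use a different strategy): take everything between an "Introduction" heading and the next numbered section heading.&lt;/p&gt;

```python
import re

# Assumed heuristic for pulling the introduction out of a paper's
# extracted text; the real FastAPI service may differ. Everything
# between an "Introduction" heading and the next numbered section
# heading is treated as the introduction.
def extract_introduction(full_text: str) -> str:
    intro_lines = []
    inside = False
    for line in full_text.splitlines():
        # match "Introduction" or "1. Introduction" as a heading
        if re.match(r"^\s*(1\.?\s+)?introduction\s*$", line, re.IGNORECASE):
            inside = True
            continue
        # a heading like "2. Methods" ends the introduction
        if inside and re.match(r"^\s*\d+\.?\s+\w+", line):
            break
        if inside:
            intro_lines.append(line)
    return "\n".join(intro_lines).strip()
```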

&lt;p&gt;The &lt;a href="https://github.com/stormsidali2001/imrad_introduction_moves_sub_moves_express_user_data" rel="noopener noreferrer"&gt;User Data microservice&lt;/a&gt; stores user predictions and handles feedback. It uses a NoSQL MongoDB database to persist data.&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;Redis&lt;/strong&gt; is used as an in-memory database and message broker to enable asynchronous communication between the different microservices.&lt;/p&gt;
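&lt;p&gt;Asynchronous communication over Redis pub/sub usually amounts to publishing serialized event envelopes on a channel. A sketch; the channel name and message shape are assumptions, not the platform's actual contract:&lt;/p&gt;

```python
import json

# Sketch of the async messaging pattern over Redis pub/sub. The event
# kinds and payload fields are illustrative, not the real envelope.
def make_event(kind: str, payload: dict) -> str:
    """Serialize an event envelope to publish on a Redis channel."""
    return json.dumps({"kind": kind, "payload": payload})

def parse_event(raw: str) -> dict:
    """Deserialize an envelope received from a Redis channel."""
    return json.loads(raw)

# With the redis-py client, a producer would publish:
#   redis.Redis(host="redis").publish(
#       "predictions",
#       make_event("prediction.saved", {"introductionId": "abc123"}))
# and a consumer would subscribe to "predictions" and run parse_event
# on each message it receives.
```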

&lt;h2&gt;
  
  
  Data Models
&lt;/h2&gt;

&lt;p&gt;Here’s a quick look at the data model of our platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Next.js microservice&lt;/strong&gt; is responsible for two entities: &lt;strong&gt;User&lt;/strong&gt; and &lt;strong&gt;Subscription.&lt;/strong&gt; It uses a SQL database (PostgreSQL).&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;User Data Microservice&lt;/strong&gt; is responsible for managing three entities: &lt;strong&gt;introduction&lt;/strong&gt; , &lt;strong&gt;sentence,&lt;/strong&gt; and &lt;strong&gt;feedback.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since these entities usually get queried together, I decided to store them in a single MongoDB document.&lt;/p&gt;

&lt;p&gt;Since an introduction can have many sentences, and we usually query them with introductions, we embedded them in a subarray within the introduction document.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;feedback&lt;/strong&gt;, in turn, is embedded within the sentence and therefore, by transitivity, within the introduction document.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;User&lt;/strong&gt; and &lt;strong&gt;Introduction&lt;/strong&gt; entities live in different microservices with completely different databases, but they are related in a one-to-many relationship (a user can have many introductions) and linked via a foreign key, "introductionId."&lt;/p&gt;
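&lt;p&gt;The embedding described above (sentences inside the introduction, feedback inside each sentence) gives a document shape roughly like the following; every field name and value here is illustrative, not the actual schema:&lt;/p&gt;

```python
# Illustrative shape of the introduction document: sentences embedded
# in the introduction, feedback embedded in each sentence. All field
# names and values are assumptions, not the platform's real schema.
introduction_doc = {
    "_id": "intro-001",
    "userId": "user-42",  # cross-service link to the User entity (field name assumed)
    "sentences": [
        {
            "text": "Deep learning has transformed NLP.",
            "move": "establishing_territory",
            "subMove": "claiming_centrality",
            "confidence": 0.97,
            "feedback": [
                {
                    "correctedMove": "establishing_territory",
                    "correctedSubMove": "reviewing_previous_work",
                }
            ],
        }
    ],
}
```

&lt;p&gt;Embedding like this means one read fetches an introduction with all its sentences and feedback, at the cost of rewriting the parent document when a nested entry changes.&lt;/p&gt;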

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gb1gfsh4uhnh3k9ivs7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gb1gfsh4uhnh3k9ivs7.png" alt="Class diagram of the Next.js microservice showing the User and Subscription entities and their relationship" width="693" height="1056"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you've reached this point, I'd like to thank you for attentively reading through these memories from almost two years ago.&lt;/p&gt;

&lt;p&gt;Honestly, like any other student at my university, the thought of missing the graduation deadline, facing the judges, and presenting my work in a very short time never left my mind. Still, it was a rewarding journey and an exceptional experience that I hope to repeat, ideally on other interesting NLP or advanced deep learning projects.&lt;/p&gt;

&lt;p&gt;After months of studying NLP, reading research papers, and listening to podcasts about it during my walks, I went from having no dataset to building a 169k-sentence annotated dataset, fine-tuning four hierarchical BERT models to an F1 score of 98.21%, and constructing a complete microservices platform.&lt;/p&gt;

&lt;p&gt;If you have any questions or thoughts about all of this, let me know; my DMs are always open. I list all my social links on my contact page.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>career</category>
    </item>
  </channel>
</rss>
