<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Maher Gallaoui</title>
    <description>The latest articles on DEV Community by Maher Gallaoui (@gallaouimaher).</description>
    <link>https://dev.to/gallaouimaher</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1338677%2F55692e5a-63e7-4b88-8368-be7f5d9f6853.png</url>
      <title>DEV Community: Maher Gallaoui</title>
      <link>https://dev.to/gallaouimaher</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gallaouimaher"/>
    <language>en</language>
    <item>
      <title>DocuNext — A Sleek, Minimal Documentation Platform Built with Next.js &amp; MDX 🚀</title>
      <dc:creator>Maher Gallaoui</dc:creator>
      <pubDate>Sat, 06 Dec 2025 11:06:55 +0000</pubDate>
      <link>https://dev.to/gallaouimaher/docunext-a-sleek-minimal-documentation-platform-built-with-nextjs-mdx-1ank</link>
      <guid>https://dev.to/gallaouimaher/docunext-a-sleek-minimal-documentation-platform-built-with-nextjs-mdx-1ank</guid>
      <description>&lt;p&gt;If you’ve ever wanted a documentation system that is fast, flexible, and easy to maintain, let me introduce you to &lt;strong&gt;DocuNext&lt;/strong&gt; — an open-source documentation framework I built using &lt;strong&gt;Next.js&lt;/strong&gt;, &lt;strong&gt;MDX&lt;/strong&gt;, and &lt;strong&gt;Tailwind CSS&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It combines the simplicity of Markdown with the power of React, giving you a docs solution that is both lightweight and fully customizable.&lt;/p&gt;

&lt;h2&gt;
  
  
  ✨ What is DocuNext?
&lt;/h2&gt;

&lt;p&gt;DocuNext is a modern documentation platform that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MDX support&lt;/strong&gt; — write Markdown and embed React components anywhere
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nested sidebar navigation&lt;/strong&gt; — ideal for scaling large documentation
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Table of Contents&lt;/strong&gt; — makes long pages easy to navigate
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Previous / Next page navigation&lt;/strong&gt; — improves reading flow
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client-side full-text search (FlexSearch)&lt;/strong&gt; — instant results with no backend
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dark / Light mode&lt;/strong&gt; — with theme tokens and full customization
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static export support&lt;/strong&gt; — deploy anywhere (GitHub Pages, Netlify, Vercel, etc.)&lt;/li&gt;
&lt;/ul&gt;
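
&lt;p&gt;For the static export mentioned above, a Next.js site typically just sets the output mode in its config. The exact file in DocuNext may differ; this is a minimal sketch, not the project's actual config:&lt;/p&gt;

```javascript
// next.config.mjs: minimal static-export setup (a sketch, not DocuNext's actual config)
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: "export",              // emit a fully static site into ./out
  images: { unoptimized: true }, // the default image loader needs a server; disable it for static hosts
};

export default nextConfig;
```

&lt;p&gt;With this in place, &lt;code&gt;next build&lt;/code&gt; produces plain HTML/CSS/JS you can host on GitHub Pages, Netlify, or any static host.&lt;/p&gt;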

&lt;p&gt;You can explore the repository here:&lt;br&gt;&lt;br&gt;
👉 &lt;strong&gt;&lt;a href="https://github.com/gallaouim/Docunext" rel="noopener noreferrer"&gt;https://github.com/gallaouim/Docunext&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Why I Built DocuNext
&lt;/h2&gt;

&lt;p&gt;I needed a documentation system that was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple
&lt;/li&gt;
&lt;li&gt;MDX-powered
&lt;/li&gt;
&lt;li&gt;Lightweight
&lt;/li&gt;
&lt;li&gt;Themeable
&lt;/li&gt;
&lt;li&gt;Easy to deploy anywhere
&lt;/li&gt;
&lt;li&gt;Not as heavy or locked-in as existing documentation frameworks
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything out there was either too complex, too tied to a specific ecosystem, or not flexible enough.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;DocuNext&lt;/strong&gt;, focused on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean architecture
&lt;/li&gt;
&lt;li&gt;Easy writing experience
&lt;/li&gt;
&lt;li&gt;Great performance
&lt;/li&gt;
&lt;li&gt;React-powered extensibility
&lt;/li&gt;
&lt;li&gt;Zero server requirements (static export only)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nextjs</category>
      <category>documentation</category>
      <category>mdx</category>
    </item>
    <item>
      <title>Remove Image Backgrounds in the Browser - No Backend Needed 🚀</title>
      <dc:creator>Maher Gallaoui</dc:creator>
      <pubDate>Fri, 28 Nov 2025 21:16:47 +0000</pubDate>
      <link>https://dev.to/gallaouimaher/remove-image-backgrounds-in-the-browser-no-backend-needed-4o10</link>
      <guid>https://dev.to/gallaouimaher/remove-image-backgrounds-in-the-browser-no-backend-needed-4o10</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Removing backgrounds from images is a common need, but many solutions require uploading your image to a server. What if you could run it fully in the browser, with no backend, no API keys, and full privacy? That’s exactly what this project does.&lt;/p&gt;

&lt;p&gt;Using &lt;strong&gt;Transformers.js&lt;/strong&gt;, I built a background removal web app that runs client-side. The model runs locally in your browser, and the processed image never leaves your device.&lt;/p&gt;

&lt;p&gt;In this post, I walk you through it step by step, from project setup to UI, and finally to background-removal logic.&lt;/p&gt;

&lt;p&gt;💻 You can access the full source code on GitHub and try it yourself: &lt;a href="https://github.com/gallaouim/bg-remover" rel="noopener noreferrer"&gt;bg-remover Repository&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Node.js (v16+ recommended)&lt;/li&gt;
&lt;li&gt;npm or yarn&lt;/li&gt;
&lt;li&gt;A modern browser (for WASM or WebGPU — depending on your config)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Initialize the project with Vite
&lt;/h2&gt;

&lt;p&gt;To get started quickly, I used Vite to scaffold a minimal web project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm create vite@latest bg-remover -- --template vanilla
cd bg-remover
npm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ll assume vanilla JS, HTML and CSS for simplicity.&lt;/p&gt;

&lt;p&gt;After installing the dependencies, you can now start the server with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Install Transformers.js
&lt;/h2&gt;

&lt;p&gt;Transformers.js brings the power of Hugging Face models to the browser, running tasks such as segmentation and classification client-side with the ONNX Runtime under the hood. Add the npm package with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @huggingface/transformers

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Build the HTML &amp;amp; UI
&lt;/h2&gt;

&lt;p&gt;We won’t go into detail here, since it’s a very simple UI, shown in the screenshot below. Links to the HTML and stylesheet files follow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmo4hwjlftru0ydvf5y8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmo4hwjlftru0ydvf5y8.png" alt=" " width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;HTML : &lt;a href="https://github.com/gallaouim/bg-remover/blob/main/index.html" rel="noopener noreferrer"&gt;index.html&lt;/a&gt; &lt;br&gt;
Style: &lt;a href="https://github.com/gallaouim/bg-remover/blob/main/src/style.css" rel="noopener noreferrer"&gt;style.css&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  4. Main Logic: Load model, process image, remove background
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Step 1: Import Transformers.js modules
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo781c3xsylltxcv6us6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo781c3xsylltxcv6us6w.png" alt=" " width="800" height="42"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;AutoProcessor&lt;/code&gt; → handles preprocessing of your input image to the format the model expects.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AutoModel&lt;/code&gt; → loads the actual model that will perform the task (background removal / segmentation).
Basically, the processor prepares the image, the model predicts the foreground mask.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 2: Initialize the Model and Processor
&lt;/h3&gt;

&lt;p&gt;In this step, we load the pretrained background-removal model and its processor. Both are configurable, which means you can tweak preprocessing settings, input size, normalization, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fxk44oez350s8c76hs1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7fxk44oez350s8c76hs1.png" alt=" " width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;model_type&lt;/code&gt;: "custom" → allows you to load models that don’t include a standard config.json.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;do_normalize / image_mean / image_std&lt;/code&gt; → normalize pixel values for better model accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;do_resize / size&lt;/code&gt; → resize input images to the model’s expected dimensions.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;do_rescale / rescale_factor&lt;/code&gt; → scale pixel values from 0–255 to 0–1.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;resample&lt;/code&gt; → resizing algorithm&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key idea: by changing these config values, you can adapt the preprocessing to different models or custom training data without changing the model code itself.&lt;/p&gt;
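
&lt;p&gt;Putting Steps 1 and 2 together in code, the snippet below mirrors the configuration described above. The model id &lt;code&gt;briaai/RMBG-1.4&lt;/code&gt; is an assumption on my part; substitute whichever background-removal model the repo actually loads.&lt;/p&gt;

```javascript
// Sketch of model + processor initialization (model id is an assumption)
import { AutoModel, AutoProcessor } from "@huggingface/transformers";

const model = await AutoModel.from_pretrained("briaai/RMBG-1.4", {
  config: { model_type: "custom" }, // model ships without a standard config.json
});

const processor = await AutoProcessor.from_pretrained("briaai/RMBG-1.4", {
  config: {
    do_normalize: true,
    do_resize: true,
    do_rescale: true,
    image_mean: [0.5, 0.5, 0.5],         // center pixel values around 0
    image_std: [1, 1, 1],
    size: { width: 1024, height: 1024 }, // the model's expected input size
    resample: 2,                         // bilinear resizing
    rescale_factor: 0.00392156862745098, // 1 / 255: map 0-255 to 0-1
    feature_extractor_type: "ImageFeatureExtractor",
  },
});
```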
&lt;h3&gt;
  
  
  Step 3: Listen for image uploads
&lt;/h3&gt;

&lt;p&gt;We attach a listener to the file input element (&lt;code&gt;&amp;lt;input type="file"&amp;gt;&lt;/code&gt;) that triggers when the user selects an image. An Image object is created from the file, using &lt;code&gt;URL.createObjectURL(file)&lt;/code&gt; to generate a temporary local URL without uploading anything. Once the image is loaded (&lt;code&gt;img.onload&lt;/code&gt;), we call &lt;code&gt;predict()&lt;/code&gt; with the image to perform background removal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib9h6mm1ljjeznyp7678.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib9h6mm1ljjeznyp7678.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;
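
&lt;p&gt;The listener itself is only a few lines. The element id here is an assumption; adapt it to the actual markup in &lt;code&gt;index.html&lt;/code&gt;:&lt;/p&gt;

```javascript
// Sketch of the upload flow: file input, then object URL, then predict()
const fileInput = document.getElementById("file-upload"); // id is an assumption
fileInput.addEventListener("change", (event) => {
  const file = event.target.files[0];
  if (!file) return;
  const img = new Image();
  img.onload = () => predict(img);     // run background removal once decoded
  img.src = URL.createObjectURL(file); // temporary local URL; nothing is uploaded
});
```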
&lt;h3&gt;
  
  
  Step 4: Remove the background
&lt;/h3&gt;

&lt;p&gt;Once the user uploads an image, we can preprocess it, run the model, and apply the background mask:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0nre0o0qr65ejmmgtd80.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0nre0o0qr65ejmmgtd80.png" alt=" " width="800" height="761"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Preprocessing: Converts the uploaded image into the format expected by the model.&lt;/li&gt;
&lt;li&gt;Prediction: The model outputs a mask indicating the foreground.&lt;/li&gt;
&lt;li&gt;Post-processing: The mask is resized to the original image size and applied as an alpha channel to remove the background.&lt;/li&gt;
&lt;li&gt;Display: The final image is drawn on a &lt;code&gt;&amp;lt;canvas&amp;gt;&lt;/code&gt; for the user to see.&lt;/li&gt;
&lt;/ol&gt;
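
&lt;p&gt;The post-processing in step 3 boils down to writing the predicted mask into the image’s alpha channel. A self-contained sketch of that idea, independent of the model API:&lt;/p&gt;

```javascript
// Apply a foreground mask as the alpha channel of an RGBA buffer.
// pixels: Uint8ClampedArray with 4 bytes per pixel; mask: one 0-255 value per pixel.
function applyMask(pixels, mask) {
  const out = new Uint8ClampedArray(pixels); // copy, keep the input intact
  for (let i = 0; i !== mask.length; i += 1) {
    out[i * 4 + 3] = mask[i]; // mask 0 = fully transparent background
  }
  return out;
}
```

&lt;p&gt;The resulting buffer can be wrapped in an &lt;code&gt;ImageData&lt;/code&gt; and drawn with &lt;code&gt;ctx.putImageData&lt;/code&gt; on the canvas.&lt;/p&gt;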
&lt;h2&gt;
  
  
  5. Try It Yourself
&lt;/h2&gt;

&lt;p&gt;Clone the repository and run it locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/gallaouim/bg-remover.git
cd bg-remover
npm install
npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upload an image and watch the background disappear directly in your browser!&lt;/p&gt;

&lt;h2&gt;
  
  
  ✅ Summary
&lt;/h2&gt;

&lt;p&gt;You’ve learned how to build a fully client-side background removal app using Transformers.js. The key steps are: load a model, preprocess the image, predict a mask, and render the final image on a canvas.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
