<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: admin@adawati.app</title>
    <description>The latest articles on DEV Community by admin@adawati.app (@adawati).</description>
    <link>https://dev.to/adawati</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3841751%2Fc4407a6c-3fdd-4784-8155-91656cfe9d34.png</url>
      <title>DEV Community: admin@adawati.app</title>
      <link>https://dev.to/adawati</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adawati"/>
    <language>en</language>
    <item>
      <title>I got tired of bloated writing apps, so I built a Zero-Latency, Privacy-First Document Editor ⚡</title>
      <dc:creator>admin@adawati.app</dc:creator>
      <pubDate>Tue, 14 Apr 2026 10:56:14 +0000</pubDate>
      <link>https://dev.to/adawati/i-got-tired-of-bloated-writing-apps-so-i-built-a-zero-latency-privacy-first-document-editor-93j</link>
      <guid>https://dev.to/adawati/i-got-tired-of-bloated-writing-apps-so-i-built-a-zero-latency-privacy-first-document-editor-93j</guid>
      <description>&lt;p&gt;Modern writing tools are amazing, but let’s be honest: they have become incredibly bloated. Opening a simple document today often means loading megabytes of JavaScript, waiting for server syncs, and accepting that every single keystroke is being tracked and stored in a database somewhere.&lt;/p&gt;

&lt;p&gt;Sometimes, you just need a fast, clean, and private space to write your thoughts, draft a blog post, or structure a document without the overhead.&lt;/p&gt;

&lt;p&gt;That’s exactly why I built &lt;a href="https://adawati.app/en/docs/" rel="noopener noreferrer"&gt;Adawati Docs&lt;/a&gt; — a lightweight, distraction-free document editor that runs directly in your browser.&lt;/p&gt;

&lt;p&gt;You can try it live here: &lt;a href="https://adawati.app/en/docs/" rel="noopener noreferrer"&gt;Adawati Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🛠️ The Core Philosophy: Speed and Privacy&lt;br&gt;
When designing this editor, I had two strict rules:&lt;/p&gt;

&lt;p&gt;Zero Latency: The editor must load instantly. No waiting for spinners or database connections.&lt;/p&gt;

&lt;p&gt;Absolute Privacy: What you write is your business. The text should not be sent to any backend server for rendering or storage.&lt;/p&gt;

&lt;p&gt;💻 Technical Highlights&lt;br&gt;
To achieve this, I leaned heavily on a Client-Side First architecture. Here is how it works under the hood:&lt;/p&gt;

&lt;p&gt;Browser as the Backend: Instead of relying on a database to save drafts, the editor leverages local storage mechanisms. Your document lives in your browser's memory. If you close the tab and come back, your work is exactly where you left it—all without a single API call.&lt;/p&gt;
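&lt;p&gt;As a sketch of that idea (the function names and storage key below are hypothetical, not Adawati's actual code), the autosave logic can be as small as this. The storage backend is injected so the same logic works with the browser's &lt;code&gt;localStorage&lt;/code&gt; or an in-memory stand-in:&lt;/p&gt;

```javascript
// Autosave sketch (hypothetical names; the real Adawati code is not public).
const DRAFT_KEY = "adawati:draft"; // assumed key name

function saveDraft(store, content) {
  // Persist on every change: no network round-trip involved.
  store.setItem(DRAFT_KEY, JSON.stringify({ content, savedAt: Date.now() }));
}

function loadDraft(store) {
  const raw = store.getItem(DRAFT_KEY);
  return raw ? JSON.parse(raw).content : "";
}

// Minimal in-memory stand-in with the same getItem/setItem API,
// so the logic also runs outside a browser.
function memoryStore() {
  const data = {};
  return {
    setItem: (k, v) => { data[k] = String(v); },
    getItem: (k) => (k in data ? data[k] : null),
  };
}

// In the browser you would call: saveDraft(localStorage, editor.innerText)
```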

&lt;p&gt;Rich Formatting, Zero Bloat: Implementing a rich text editor that supports essential formatting (headings, lists, bolding) without turning the app into a sluggish monster was a challenge. By keeping the DOM manipulations clean and avoiding heavy third-party WYSIWYG dependencies, the typing experience remains buttery smooth.&lt;/p&gt;

&lt;p&gt;Exporting Made Simple: Once you are done writing, you need to get your text out efficiently. The tool allows you to format your document and copy it cleanly, preserving the structure so you can paste it directly into your CMS, emails, or markdown files without weird inline CSS issues.&lt;/p&gt;
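&lt;p&gt;To illustrate the kind of structure-preserving export described above, here is a minimal sketch that serializes a (hypothetical) block model to Markdown; Adawati's real data model may differ:&lt;/p&gt;

```javascript
// Structure-preserving export sketch: a tiny block model is serialized
// to Markdown, so pasting elsewhere keeps headings and lists without
// dragging along inline CSS. The block shape here is hypothetical.
function toMarkdown(blocks) {
  return blocks
    .map((b) => {
      switch (b.type) {
        case "heading":
          return "#".repeat(b.level) + " " + b.text;
        case "list":
          return b.items.map((item) => "- " + item).join("\n");
        default:
          return b.text; // plain paragraph
      }
    })
    .join("\n\n");
}
```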

&lt;p&gt;🌍 Fully Localized&lt;br&gt;
Just like the rest of the &lt;a href="https://adawati.app/en/" rel="noopener noreferrer"&gt;Adawati&lt;/a&gt; platform, I made sure the editor handles both English (LTR) and Arabic (RTL) perfectly. Handling bi-directional text gracefully in an editor requires strict attention to CSS logical properties to ensure the cursor and text alignment don't break when switching languages.&lt;/p&gt;

&lt;p&gt;🚀 What's Next?&lt;br&gt;
Building &lt;a href="https://adawati.app/en/docs/" rel="noopener noreferrer"&gt;Adawati Docs&lt;/a&gt; has been a great exercise in performance optimization and trusting the client's browser capabilities. My next goal is to add direct Markdown-to-PDF exporting entirely on the client-side.&lt;/p&gt;

&lt;p&gt;I would love for you to test the editor and try writing your next post or notes on it: &lt;a href="https://adawati.app/en/docs/" rel="noopener noreferrer"&gt;Adawati Docs/Editor&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What is your current go-to tool for quick, distraction-free writing? Let me know in the comments!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>productivity</category>
      <category>javascript</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>How I Built a 100% Client-Side, ATS-Friendly CV Builder (Zero Backend for Ultimate Privacy)</title>
      <dc:creator>admin@adawati.app</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:57:05 +0000</pubDate>
      <link>https://dev.to/adawati/how-i-built-a-100-client-side-ats-friendly-cv-builder-zero-backend-for-ultimate-privacy-4g8g</link>
      <guid>https://dev.to/adawati/how-i-built-a-100-client-side-ats-friendly-cv-builder-zero-backend-for-ultimate-privacy-4g8g</guid>
      <description>&lt;p&gt;Have you ever spent an hour filling out your details on a "free" resume builder, only to be hit with a $15 paywall or a massive, ugly watermark right when you click download?&lt;/p&gt;

&lt;p&gt;As an indie developer, I found this practice incredibly frustrating. Beyond the hidden fees, there's a massive privacy concern: why should a random server store my phone number, address, and entire employment history just to generate a PDF?&lt;/p&gt;

&lt;p&gt;So, I decided to build my own free alternative on &lt;a href="https://adawati.app/" rel="noopener noreferrer"&gt;Adawati.app&lt;/a&gt;: a fast, ATS-friendly CV builder that operates 100% on the client-side. No backend storage, no hidden fees.&lt;/p&gt;

&lt;p&gt;Here is how I approached the technical challenges.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Core Architecture: Zero Backend
The primary goal was privacy. By eliminating the backend from the PDF generation process, the user's sensitive data never leaves their browser.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Instead of sending JSON payloads to a Node.js or Python server that renders a PDF with tools like Puppeteer or ReportLab, all of the heavy lifting happens in the browser's DOM.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;The ATS (Applicant Tracking System) Challenge
A major trap with client-side PDF generation is taking the easy route: converting a DOM element to an HTML canvas (e.g., using html2canvas) and then putting that image into a PDF.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Why is this bad? Because ATS bots cannot read images! If your resume is just a giant image inside a PDF wrapper, many corporate filtering systems will reject it automatically.&lt;/p&gt;

&lt;p&gt;The Solution:&lt;br&gt;
I had to ensure the output was a true vector PDF with selectable text. Using client-side libraries that construct the PDF document programmatically ensures that the text layer is preserved. The file remains lightweight, the text is razor-sharp (no pixelation on zoom), and most importantly, an ATS can easily parse the keywords, job titles, and dates.&lt;/p&gt;
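&lt;p&gt;To see why the text layer matters, consider what a PDF page content stream actually looks like: the text appears literally as operands of the &lt;code&gt;Tj&lt;/code&gt; (show text) operator, which is exactly what an ATS parser pulls out. The sketch below builds a simplified stream by hand; a real PDF adds objects, an xref table, and font resources, which libraries such as jsPDF or pdf-lib generate for you:&lt;/p&gt;

```javascript
// Simplified PDF page content stream: each line becomes a BT...ET text
// block whose string is an operand of the Tj operator. Because the text
// is stored as text (not pixels), any parser can recover it.
function textContentStream(lines) {
  return lines
    .map((l, i) => `BT /F1 12 Tf 50 ${750 - i * 16} Td (${l}) Tj ET`)
    .join("\n");
}

// A crude "ATS": extract every (...) Tj string literal back out.
function extractText(stream) {
  return [...stream.matchAll(/\((.*?)\) Tj/g)].map((m) => m[1]);
}
```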

&lt;ol start="3"&gt;
&lt;li&gt;&lt;p&gt;Handling RTL (Right-to-Left) and Localization&lt;br&gt;
Since the platform targets both Arabic and English speakers, the layout had to mirror perfectly without breaking the PDF generation flow.&lt;br&gt;
Switching between languages triggers a state change that not only flips the CSS (direction: rtl) for the live preview but also maps to the corresponding coordinates and text alignments in the PDF generation logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Result&lt;br&gt;
The final product allows users to:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Type their data and see a Live Preview.&lt;/p&gt;

&lt;p&gt;Switch between English and Arabic seamlessly.&lt;/p&gt;

&lt;p&gt;Download a clean, ATS-compliant PDF instantly.&lt;/p&gt;

&lt;p&gt;You can try the live tool here: &lt;a href="https://adawati.app/cv-builder/" rel="noopener noreferrer"&gt;Adawati CV Builder&lt;/a&gt;. Turn off your Wi-Fi before clicking download if you want to test the "Client-Side only" claim! 😉&lt;/p&gt;

&lt;p&gt;What’s Next?&lt;br&gt;
Building tools that respect user privacy shouldn't be the exception; it should be the norm. My next challenge is optimizing the client-side PDF compression.&lt;/p&gt;

&lt;p&gt;Have you worked with client-side PDF generation before? What libraries did you find most reliable for handling complex layouts and custom fonts? Let me know in the comments!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>privacy</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>🚀 Building a High-Accuracy Arabic OCR Tool: How I Solved the "Image-to-Text" Challenge</title>
      <dc:creator>admin@adawati.app</dc:creator>
      <pubDate>Thu, 26 Mar 2026 21:02:43 +0000</pubDate>
      <link>https://dev.to/adawati/building-a-high-accuracy-arabic-ocr-tool-how-i-solved-the-image-to-text-challenge-52dc</link>
      <guid>https://dev.to/adawati/building-a-high-accuracy-arabic-ocr-tool-how-i-solved-the-image-to-text-challenge-52dc</guid>
      <description>&lt;p&gt;Extraction of text from images (OCR) is a solved problem for Latin languages, but for Arabic, it’s a whole different story. As the developer behind &lt;a href="https://adawati.app/" rel="noopener noreferrer"&gt;Adawati.app&lt;/a&gt;, I spent weeks engineering a solution that doesn't just "read" Arabic, but understands its complexity.&lt;/p&gt;

&lt;p&gt;The Problem: Why Arabic OCR is Hard&lt;br&gt;
Most open-source OCR engines struggle with Arabic for three reasons:&lt;/p&gt;

&lt;p&gt;Cursive Nature: Arabic letters change shape based on their position (Start, Middle, End).&lt;/p&gt;

&lt;p&gt;Diacritics &amp;amp; Dots: Small dots and marks can change the entire meaning of a word.&lt;/p&gt;

&lt;p&gt;Low-Quality Input: Students often take photos of textbooks in poor lighting or at weird angles.&lt;/p&gt;

&lt;p&gt;My Engineering Approach&lt;br&gt;
Instead of just "plugging in" a generic API, I built a pipeline focused on Pre-processing and Contextual Inference.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Image Pre-processing (The Secret Sauce)
Before the AI even looks at the image, I apply several filters:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Binarization: Converting the image to high-contrast black and white to eliminate background noise.&lt;/p&gt;

&lt;p&gt;Deskewing: Automatically correcting the angle if the photo was taken tilted.&lt;/p&gt;

&lt;p&gt;Noise Reduction: Removing "salt and pepper" noise often found in scanned PDFs.&lt;/p&gt;
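&lt;p&gt;As a rough sketch of the binarization step (a fixed global threshold here; production pipelines usually prefer adaptive methods such as Otsu's):&lt;/p&gt;

```javascript
// Global-threshold binarization sketch: grayscale pixels (0..255)
// become pure black (0) or white (255), killing background noise
// while keeping the strokes the OCR engine cares about.
function binarize(pixels, threshold = 128) {
  // pixels: rows of grayscale values in 0..255
  return pixels.map((row) => row.map((p) => (p >= threshold ? 255 : 0)));
}
```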

&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;The AI Engine&lt;br&gt;
I utilized state-of-the-art deep learning models specifically fine-tuned for Arabic scripts. These models use CNNs (Convolutional Neural Networks) for visual feature extraction and LSTMs (Long Short-Term Memory) to understand the sequence of characters, ensuring that the connected letters are recognized as a coherent word, not just random symbols.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Privacy-First Architecture&lt;br&gt;
In an era of data harvesting, I made a conscious architectural decision: Zero Retention.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Images are processed in a secure memory buffer.&lt;/p&gt;

&lt;p&gt;Once the text is extracted, the image is purged instantly.&lt;/p&gt;

&lt;p&gt;No databases, no logs of your documents.&lt;/p&gt;

&lt;p&gt;Why I Built This&lt;br&gt;
I saw students struggling to transcribe their lectures and researchers stuck with non-searchable PDF archives. I wanted to provide a free, fast, no-login tool that respects their privacy while delivering professional-grade accuracy.&lt;/p&gt;

&lt;p&gt;Try it out&lt;br&gt;
If you're a developer interested in Arabic NLP or a student looking for a reliable tool, check it out here:&lt;br&gt;
&lt;a href="https://adawati.app/image-to-text/" rel="noopener noreferrer"&gt;👉 Image to Text - Arabic OCR Tool&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd love to hear your feedback on the accuracy, especially with complex fonts or handwritten notes!&lt;/p&gt;

&lt;p&gt;#Arabic #OCR #AI #WebDev #Productivity #NextJS&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Free Arabic Speech-to-Text Engine using Hugging Face &amp; Next.js</title>
      <dc:creator>admin@adawati.app</dc:creator>
      <pubDate>Wed, 25 Mar 2026 13:06:10 +0000</pubDate>
      <link>https://dev.to/adawati/title-building-a-free-arabic-speech-to-text-engine-using-hugging-face-nextjs-29b9</link>
      <guid>https://dev.to/adawati/title-building-a-free-arabic-speech-to-text-engine-using-hugging-face-nextjs-29b9</guid>
      <description>&lt;p&gt;`&lt;/p&gt;

&lt;p&gt;Hello fellow developers! 👋&lt;/p&gt;

&lt;p&gt;Handling audio processing in web applications is always tricky, but when you add &lt;strong&gt;Arabic dialects and academic terminology (Arabizi)&lt;/strong&gt; to the mix, it becomes a real engineering challenge.&lt;/p&gt;

&lt;p&gt;Recently, while building &lt;a href="https://adawati.app" rel="noopener noreferrer"&gt;Adawati.app&lt;/a&gt; (an all-in-one digital workspace for Arab students), I needed to implement a reliable Speech-to-Text (STT) feature for university lectures. Paid APIs like Google Cloud or AWS were either too expensive for a free tool or struggled heavily with local Arabic dialects. &lt;/p&gt;

&lt;p&gt;Here is how I engineered a custom, free solution using &lt;strong&gt;Hugging Face&lt;/strong&gt; open-source models.&lt;/p&gt;

&lt;h3&gt;
  
  
  🛑 The Technical Bottlenecks
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Large File Uploads &amp;amp; Timeouts:&lt;/strong&gt; University lectures are often 1-2 hours long. Sending a 100MB audio file to a server in one go usually results in a &lt;code&gt;504 Gateway Timeout&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background Noise:&lt;/strong&gt; Lecture halls are noisy. Passing raw audio to an AI model drastically reduces transcription accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dialect Nuances:&lt;/strong&gt; Standard Arabic models fail when professors mix English technical terms with local Arabic dialects.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  ⚙️ The Architecture &amp;amp; Solution
&lt;/h3&gt;

&lt;p&gt;To bypass these issues, I built a pipeline that processes the audio efficiently:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Audio Chunking (The Game Changer)&lt;/strong&gt;&lt;br&gt;
Instead of sending the whole file, I used the Web Audio API on the client-side to split the audio into smaller 30-second chunks before sending them to the backend. This prevents timeouts and allows parallel processing.&lt;/p&gt;
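&lt;p&gt;Stripped of the upload plumbing, the client-side split comes down to computing 30-second sample ranges over the decoded buffer. This is an illustrative sketch, not the production code; in the browser, the ranges would be copied out of a Web Audio API &lt;code&gt;AudioBuffer&lt;/code&gt; and uploaded individually:&lt;/p&gt;

```javascript
// Compute 30-second [start, end) sample ranges for a decoded buffer.
// The last range is clamped so a partial tail chunk is not lost.
function chunkRanges(totalSamples, sampleRate, chunkSeconds = 30) {
  const size = sampleRate * chunkSeconds;
  const ranges = [];
  for (let start = 0; totalSamples > start; start += size) {
    ranges.push([start, Math.min(start + size, totalSamples)]);
  }
  return ranges;
}
```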

&lt;p&gt;&lt;strong&gt;2. Pre-processing &amp;amp; Noise Reduction&lt;/strong&gt;&lt;br&gt;
Before hitting the AI model, the chunks go through a basic noise-reduction filter using &lt;code&gt;FFmpeg&lt;/code&gt; to isolate human voice frequencies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Hugging Face Inference&lt;/strong&gt;&lt;br&gt;
I connected the backend to a fine-tuned Whisper model hosted on Hugging Face, specifically trained on Arabic datasets. &lt;/p&gt;

&lt;p&gt;Here is a conceptual snippet of how the chunking logic looks in the backend (Python/FastAPI wrapper):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from pydub import AudioSegment
import requests

def process_large_audio(file_path):
    audio = AudioSegment.from_file(file_path)
    chunk_length_ms = 30000  # 30 seconds
    chunks = [audio[i:i+chunk_length_ms] for i in range(0, len(audio), chunk_length_ms)]

    full_transcript = ""
    for idx, chunk in enumerate(chunks):
        chunk.export(f"temp_chunk_{idx}.wav", format="wav")
        # Send to Hugging Face API
        transcript = query_huggingface(f"temp_chunk_{idx}.wav")
        full_transcript += transcript + " "

    return full_transcript
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;🚀 The Live Result&lt;br&gt;
By combining this chunking architecture with Hugging Face models, I managed to create a fast, accurate, and completely free lecture transcription tool without relying on expensive enterprise APIs.&lt;/p&gt;

&lt;p&gt;You can test the live implementation and its accuracy with Arabic audio here:&lt;br&gt;
👉 Arabic Audio-to-Text Converter - Adawati&lt;br&gt;
&lt;a href="https://adawati.app/audio-to-text/" rel="noopener noreferrer"&gt;https://adawati.app/audio-to-text/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💬 Let's Discuss!&lt;br&gt;
I'm curious to know from the backend engineers here:&lt;br&gt;
How do you handle massive file uploads in your Next.js/Node.js applications? Do you prefer client-side chunking or streaming directly to a cloud bucket (like AWS S3) before processing?&lt;/p&gt;

&lt;p&gt;Let me know in the comments!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nextjs</category>
      <category>nlp</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How I Solved the Arabic Letter Reversal (RTL) Problem When Extracting Text from PDF Files</title>
      <dc:creator>admin@adawati.app</dc:creator>
      <pubDate>Tue, 24 Mar 2026 14:01:54 +0000</pubDate>
      <link>https://dev.to/adawati/kyf-qmt-bhl-mshkl-nks-lhrwf-lrby-rtl-nd-stkhrj-4pm7</link>
      <guid>https://dev.to/adawati/kyf-qmt-bhl-mshkl-nks-lhrwf-lrby-rtl-nd-stkhrj-4pm7</guid>
      <description>&lt;p&gt;`---&lt;br&gt;
title: "كيف قمت بحل مشكلة انعكاس الحروف العربية (RTL) عند استخراج النصوص من ملفات PDF"&lt;br&gt;
published: true&lt;/p&gt;

&lt;h2&gt;
  
  
  tags: webdev, pdf, arabic, programming
&lt;/h2&gt;

&lt;p&gt;Hello, fellow developers,&lt;/p&gt;

&lt;p&gt;Anyone who has tried to extract Arabic text from PDF files programmatically knows this nightmare all too well: you pull the text out and find it reversed (م ر ح ب ا), or the letters come out broken and disconnected.&lt;/p&gt;

&lt;p&gt;While recently building document-processing tools for students, I ran into this problem head-on. I tried popular libraries such as &lt;code&gt;pdf.js&lt;/code&gt; and &lt;code&gt;PyPDF2&lt;/code&gt;, but with Arabic the results were always disastrous.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔍 What Is the Root of the Problem?
&lt;/h3&gt;

&lt;p&gt;The problem is that PDF files have no concept of "words" or "paragraphs". A PDF simply treats the document as a canvas and places every glyph at specific (X, Y) coordinates. Latin scripts are written left-to-right (LTR), and extraction engines read the coordinates in that order. Arabic (RTL), however, is drawn right-to-left, so when an engine reads it programmatically from left to right, the word comes out reversed!&lt;/p&gt;
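&lt;p&gt;The coordinate problem described above can be demonstrated with a toy extractor (toy data, not a real PDF parser): reading glyphs in ascending-X order reverses an RTL word, while descending-X order restores the logical order:&lt;/p&gt;

```javascript
// Toy demonstration: a PDF stores each glyph at an (x, y) position.
// Reading RTL glyphs in ascending-x order (the LTR default) reverses
// the word; descending-x restores it. Toy data, not a real PDF parser.
const glyphs = [
  { char: "م", x: 90 }, // rightmost glyph = logically first letter
  { char: "ر", x: 70 },
  { char: "ح", x: 50 },
  { char: "ب", x: 30 },
  { char: "ا", x: 10 },
];

function readLtr(gs) {
  return [...gs].sort((a, b) => a.x - b.x).map((g) => g.char).join("");
}

function readRtl(gs) {
  return [...gs].sort((a, b) => b.x - a.x).map((g) => g.char).join("");
}
```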

&lt;h3&gt;
  
  
  ⚙️ How Did I Solve It?
&lt;/h3&gt;

&lt;p&gt;Instead of relying on conventional coordinate-order text extraction, I added reshaping libraries and the bidirectional (Bidi) algorithm as an intermediate processing layer, and used OCR-specialized AI models from Hugging Face for the stubborn cases.&lt;/p&gt;

&lt;p&gt;The simplified idea is to pass the extracted text through these filters:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import arabic_reshaper
from bidi.algorithm import get_display

def fix_arabic_pdf_text(raw_text):
    # 1. Reconnect the disjointed Arabic letters
    reshaped_text = arabic_reshaper.reshape(raw_text)

    # 2. Correct the right-to-left reading direction
    bidi_text = get_display(reshaped_text)

    return bidi_text
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;🚀 The Live Result (Practical Application)&lt;br&gt;
To make things easier for students and colleagues, I integrated this algorithm into a simple web interface on a platform I recently built. You can see the live result and test the speed and accuracy of Arabic text extraction (from both PDF files and images) with this free tool:&lt;br&gt;
👉 Text Extraction (OCR) Tool - Adawati Platform&lt;/p&gt;

&lt;p&gt;I hope this approach helps you in your upcoming projects. How do you handle Arabic PDF files in your own applications?&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
