<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Farhan Munir</title>
    <description>The latest articles on DEV Community by Farhan Munir (@munirfarhan).</description>
    <link>https://dev.to/munirfarhan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3833974%2Fd6f31945-fbf3-430a-aca8-9937f9230037.jpg</url>
      <title>DEV Community: Farhan Munir</title>
      <link>https://dev.to/munirfarhan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/munirfarhan"/>
    <language>en</language>
    <item>
      <title>✅ Milestone Completed: Prometheus Data Format Integration</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Sun, 19 Apr 2026 07:36:58 +0000</pubDate>
      <link>https://dev.to/munirfarhan/milestone-completed-prometheus-data-format-integration-12l8</link>
      <guid>https://dev.to/munirfarhan/milestone-completed-prometheus-data-format-integration-12l8</guid>
      <description>&lt;h1&gt;
  
  
  ✅ Milestone Completed: Prometheus Data Format Integration
&lt;/h1&gt;

&lt;p&gt;This sprint focused on making telemetry output fully Prometheus-compatible while preserving our existing collector pipeline for CPU, memory, and disk I/O.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I implemented
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Added Prometheus text exposition format (v0.0.4) output&lt;/li&gt;
&lt;li&gt;Included &lt;code&gt;# HELP&lt;/code&gt; and &lt;code&gt;# TYPE&lt;/code&gt; metadata&lt;/li&gt;
&lt;li&gt;Supported &lt;code&gt;gauge&lt;/code&gt; and &lt;code&gt;counter&lt;/code&gt; metric families&lt;/li&gt;
&lt;li&gt;Added label-based dimensions (CPU modes, disk devices)&lt;/li&gt;
&lt;li&gt;Kept output deterministic and scrape-ready for a &lt;code&gt;/metrics&lt;/code&gt; style endpoint&lt;/li&gt;
&lt;/ul&gt;
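&lt;p&gt;As a rough sketch of what that scrape-ready output looks like (illustrative Python only, not the agent's actual serializer; the &lt;code&gt;render_metric&lt;/code&gt; helper is made up for this post):&lt;/p&gt;

```python
# Minimal sketch of Prometheus text exposition (v0.0.4) rendering.
# Hypothetical helper; the real serializer lives in the agent repo.

def render_metric(name, help_text, metric_type, samples):
    """samples: list of (labels_dict, value) pairs."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} {metric_type}"]
    for labels, value in samples:
        if labels:
            # Sort label keys so output stays deterministic across scrapes.
            label_str = ",".join(f'{k}="{labels[k]}"' for k in sorted(labels))
            lines.append(f"{name}{{{label_str}}} {value}")
        else:
            lines.append(f"{name} {value}")
    return "\n".join(lines)

out = render_metric(
    "heka_disk_read_bytes_total",
    "Total bytes read from disk.",
    "counter",
    [({"device": "sda"}, 123456.0), ({}, 123456.0)],
)
print(out)
```

&lt;p&gt;Deterministic label ordering is what makes golden-file tests practical later on.&lt;/p&gt;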

&lt;h2&gt;
  
  
  Metrics now covered
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;CPU usage and per-mode CPU time distribution&lt;/li&gt;
&lt;li&gt;Virtual memory: used, available, total&lt;/li&gt;
&lt;li&gt;Swap memory: used, total&lt;/li&gt;
&lt;li&gt;Disk I/O bytes: read/write (aggregate + per-device)&lt;/li&gt;
&lt;li&gt;Disk I/O operations: read/write (aggregate + per-device)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Validation results
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Output verified against Prometheus exposition rules&lt;/li&gt;
&lt;li&gt;No syntax violations found&lt;/li&gt;
&lt;li&gt;Metrics are parseable and scrape-ready&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prior milestone context (OpenMetrics)
&lt;/h2&gt;

&lt;p&gt;Before this, I shipped OpenMetrics-aligned output for the same core Linux host metrics with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;# HELP&lt;/code&gt;, &lt;code&gt;# TYPE&lt;/code&gt;, &lt;code&gt;# UNIT&lt;/code&gt; metadata&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;# EOF&lt;/code&gt; termination&lt;/li&gt;
&lt;li&gt;Valid CPU, memory, and disk payload output&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next improvements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Evaluate ratio-based CPU values (0–1) vs percentage&lt;/li&gt;
&lt;li&gt;Extend optional unit handling for stronger tooling interoperability&lt;/li&gt;
&lt;li&gt;Continue OpenMetrics alignment where needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/ronin1770/heka-insights-agent" rel="noopener noreferrer"&gt;https://github.com/ronin1770/heka-insights-agent&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>telemetry</category>
      <category>openmetrics</category>
      <category>prometheus</category>
    </item>
    <item>
      <title>Milestone 2: Standardizing Telemetry Output with JSON, Prometheus, and OpenMetrics</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Thu, 16 Apr 2026 15:25:23 +0000</pubDate>
      <link>https://dev.to/munirfarhan/milestone-2-standardizing-telemetry-output-with-json-prometheus-and-openmetrics-22ec</link>
      <guid>https://dev.to/munirfarhan/milestone-2-standardizing-telemetry-output-with-json-prometheus-and-openmetrics-22ec</guid>
      <description>&lt;h1&gt;
  
  
  Milestone 2: Standardizing Telemetry Output with JSON, Prometheus, and OpenMetrics
&lt;/h1&gt;

&lt;p&gt;In this milestone, we are focusing on one thing only: &lt;strong&gt;data format standardization&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The Heka Insights Agent already collects CPU, memory, and disk telemetry.&lt;br&gt;&lt;br&gt;
Now the goal is to emit the same logical metrics in three standard output formats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JSON&lt;/li&gt;
&lt;li&gt;Prometheus text exposition&lt;/li&gt;
&lt;li&gt;OpenMetrics text format&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why This Milestone Matters
&lt;/h2&gt;

&lt;p&gt;If an agent has no clear format strategy, every downstream integration becomes custom work.&lt;br&gt;&lt;br&gt;
That slows down adoption and increases maintenance cost.&lt;/p&gt;

&lt;p&gt;By standardizing format early, we get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stable contracts for integrations&lt;/li&gt;
&lt;li&gt;easier validation and testing&lt;/li&gt;
&lt;li&gt;portability across observability stacks&lt;/li&gt;
&lt;li&gt;clearer boundaries between collection and export&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Milestone Scope (Only Data Format)
&lt;/h2&gt;

&lt;p&gt;This milestone does not include transports, retry logic, or backend adapters.&lt;br&gt;&lt;br&gt;
It only covers how telemetry is represented and serialized.&lt;/p&gt;

&lt;p&gt;Included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;canonical internal metric model&lt;/li&gt;
&lt;li&gt;naming/type/unit rules&lt;/li&gt;
&lt;li&gt;serializers for &lt;code&gt;json&lt;/code&gt;, &lt;code&gt;prometheus&lt;/code&gt;, &lt;code&gt;openmetrics&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;deterministic output behavior&lt;/li&gt;
&lt;li&gt;contract tests with golden files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Out of scope:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Datadog/New Relic senders&lt;/li&gt;
&lt;li&gt;batching/compression/persistence&lt;/li&gt;
&lt;li&gt;new collector domains&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Canonical Metric Contract
&lt;/h2&gt;

&lt;p&gt;Every metric will be representable through one shared contract:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name&lt;/code&gt; (string)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;description&lt;/code&gt; (string)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;type&lt;/code&gt; (&lt;code&gt;gauge&lt;/code&gt; or &lt;code&gt;counter&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;unit&lt;/code&gt; (e.g. &lt;code&gt;bytes&lt;/code&gt;, &lt;code&gt;seconds&lt;/code&gt;, &lt;code&gt;percent&lt;/code&gt;, &lt;code&gt;count&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;value&lt;/code&gt; (number)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;labels&lt;/code&gt; (map of string to string; empty allowed)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;timestamp_unix_ms&lt;/code&gt; (optional integer)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This contract is the core design decision in Milestone 2.&lt;br&gt;&lt;br&gt;
Serializers consume this model and render format-specific output without changing metric meaning.&lt;/p&gt;
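&lt;p&gt;One way to sketch that contract in Python (field names taken from the list above; the dataclass itself is illustrative, not the repo's actual model class):&lt;/p&gt;

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch of the canonical metric contract described above;
# the agent's real model may differ in naming and validation details.
@dataclass
class Metric:
    name: str                     # e.g. "heka_memory_virtual_used_bytes"
    description: str
    type: str                     # "gauge" or "counter"
    unit: str                     # "bytes", "seconds", "percent", "count"
    value: float
    labels: dict = field(default_factory=dict)  # empty map allowed
    timestamp_unix_ms: Optional[int] = None     # optional

m = Metric(
    name="heka_cpu_usage_percent",
    description="Overall CPU usage.",
    type="gauge",
    unit="percent",
    value=12.5,
)
```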

&lt;h2&gt;
  
  
  Naming and Semantics Rules
&lt;/h2&gt;

&lt;p&gt;To keep the output stable and machine-friendly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;metric names are lowercase snake_case&lt;/li&gt;
&lt;li&gt;all names are prefixed with &lt;code&gt;heka_&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;counters end in &lt;code&gt;_total&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;unit suffixes are explicit (&lt;code&gt;_bytes&lt;/code&gt;, &lt;code&gt;_seconds&lt;/code&gt;, &lt;code&gt;_percent&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;label keys are lowercase snake_case&lt;/li&gt;
&lt;li&gt;metric identity must stay consistent across formats&lt;/li&gt;
&lt;/ul&gt;
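&lt;p&gt;These rules are mechanical enough to enforce in a contract test; a minimal sketch (the &lt;code&gt;check_name&lt;/code&gt; helper is hypothetical, not part of the agent's codebase):&lt;/p&gt;

```python
import re

# Illustrative validator for the naming rules above (hypothetical helper).
NAME_RE = re.compile(r"^heka_[a-z0-9_]+$")  # heka_ prefix, lowercase snake_case

def check_name(name, metric_type):
    if not NAME_RE.match(name):
        return False
    # Counters must end in _total per the rules above.
    if metric_type == "counter" and not name.endswith("_total"):
        return False
    return True

assert check_name("heka_disk_read_bytes_total", "counter")
assert not check_name("heka_cpu_usage_percent", "counter")
```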

&lt;h2&gt;
  
  
  Current Metric Mapping
&lt;/h2&gt;

&lt;p&gt;Initial canonical mapping includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;heka_cpu_usage_percent&lt;/code&gt; (gauge)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_cpu_time_percent&lt;/code&gt; (gauge with &lt;code&gt;mode=&amp;lt;field&amp;gt;&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_memory_virtual_used_bytes&lt;/code&gt; (gauge)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_memory_virtual_available_bytes&lt;/code&gt; (gauge)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_memory_virtual_total_bytes&lt;/code&gt; (gauge)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_memory_swap_used_bytes&lt;/code&gt; (gauge)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_memory_swap_total_bytes&lt;/code&gt; (gauge)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_disk_read_bytes_total&lt;/code&gt; (counter)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_disk_write_bytes_total&lt;/code&gt; (counter)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_disk_reads_total&lt;/code&gt; (counter)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;heka_disk_writes_total&lt;/code&gt; (counter)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Format-Specific Requirements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  JSON
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;UTF-8 JSON object&lt;/li&gt;
&lt;li&gt;includes &lt;code&gt;schema_version&lt;/code&gt; (starting at &lt;code&gt;v1&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;includes &lt;code&gt;generated_at&lt;/code&gt; (RFC3339 UTC)&lt;/li&gt;
&lt;li&gt;includes top-level &lt;code&gt;metrics&lt;/code&gt; array&lt;/li&gt;
&lt;/ul&gt;
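&lt;p&gt;An illustrative shape for that envelope (field layout may differ in the actual serializer):&lt;/p&gt;

```python
import json
from datetime import datetime, timezone

# Illustrative JSON envelope matching the requirements above.
payload = {
    "schema_version": "v1",
    "generated_at": datetime.now(timezone.utc).isoformat(),  # RFC3339 UTC
    "metrics": [
        {
            "name": "heka_memory_virtual_used_bytes",
            "type": "gauge",
            "unit": "bytes",
            "value": 1073741824,
            "labels": {},
        }
    ],
}
print(json.dumps(payload, indent=2, sort_keys=True))  # deterministic key order
```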

&lt;h3&gt;
  
  
  Prometheus
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus text exposition format (&lt;code&gt;0.0.4&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;include &lt;code&gt;# HELP&lt;/code&gt; and &lt;code&gt;# TYPE&lt;/code&gt; lines&lt;/li&gt;
&lt;li&gt;deterministic label ordering&lt;/li&gt;
&lt;li&gt;no OpenMetrics-only directives&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  OpenMetrics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;OpenMetrics text format&lt;/li&gt;
&lt;li&gt;include &lt;code&gt;# HELP&lt;/code&gt;, &lt;code&gt;# TYPE&lt;/code&gt;, and &lt;code&gt;# UNIT&lt;/code&gt; when known&lt;/li&gt;
&lt;li&gt;terminate payload with &lt;code&gt;# EOF&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;metric names and labels remain aligned with Prometheus mode&lt;/li&gt;
&lt;/ul&gt;
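&lt;p&gt;The OpenMetrics additions relative to the Prometheus mode are small: a &lt;code&gt;# UNIT&lt;/code&gt; line when the unit is known, and a mandatory &lt;code&gt;# EOF&lt;/code&gt; trailer. A sketch (hypothetical helper, not the agent's serializer):&lt;/p&gt;

```python
# Illustrative OpenMetrics rendering for a single sample.

def render_openmetrics(name, help_text, metric_type, unit, value):
    lines = [
        f"# HELP {name} {help_text}",
        f"# TYPE {name} {metric_type}",
    ]
    if unit:
        # Emit "# UNIT" only when the unit is known.
        lines.append(f"# UNIT {name} {unit}")
    lines.append(f"{name} {value}")
    return "\n".join(lines)

body = render_openmetrics(
    "heka_memory_swap_used_bytes", "Swap in use.", "gauge", "bytes", 0.0
)
payload = body + "\n# EOF\n"  # OpenMetrics payloads must terminate with # EOF
```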

&lt;h2&gt;
  
  
  Configuration Contract
&lt;/h2&gt;

&lt;p&gt;One selector controls serialization:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;OUTPUT_FORMAT=json|prometheus|openmetrics&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;default: &lt;code&gt;json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;invalid values: fail fast with a clear startup error&lt;/li&gt;
&lt;/ul&gt;
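&lt;p&gt;Fail-fast handling could look like this (illustrative code; the agent's actual startup validation may differ):&lt;/p&gt;

```python
import os

# Illustrative fail-fast resolution of OUTPUT_FORMAT.
VALID_FORMATS = ("json", "prometheus", "openmetrics")

def resolve_output_format():
    fmt = os.environ.get("OUTPUT_FORMAT", "json")  # default: json
    if fmt not in VALID_FORMATS:
        # Refuse to start rather than silently emitting the wrong format.
        raise SystemExit(
            f"Invalid OUTPUT_FORMAT={fmt!r}; expected one of {VALID_FORMATS}"
        )
    return fmt
```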

&lt;h2&gt;
  
  
  Acceptance Criteria
&lt;/h2&gt;

&lt;p&gt;Milestone 2 is done when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;same logical metric set is emitted in all three formats&lt;/li&gt;
&lt;li&gt;names/types/units are consistent&lt;/li&gt;
&lt;li&gt;Prometheus and OpenMetrics outputs validate&lt;/li&gt;
&lt;li&gt;JSON includes schema metadata and metrics array&lt;/li&gt;
&lt;li&gt;output order is deterministic&lt;/li&gt;
&lt;li&gt;golden-file tests exist for each format&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  GitHub Milestone Breakdown
&lt;/h2&gt;

&lt;p&gt;Work is tracked through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;M2-1 canonical metric model&lt;/li&gt;
&lt;li&gt;M2-2 collector-to-canonical mapping&lt;/li&gt;
&lt;li&gt;M2-3 JSON serializer&lt;/li&gt;
&lt;li&gt;M2-4 Prometheus serializer&lt;/li&gt;
&lt;li&gt;M2-5 OpenMetrics serializer&lt;/li&gt;
&lt;li&gt;M2-6 output format config + validation&lt;/li&gt;
&lt;li&gt;M2-7 fixture/contract tests&lt;/li&gt;
&lt;li&gt;M2-8 docs update&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Repo
&lt;/h2&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/ronin1770/heka-insights-agent" rel="noopener noreferrer"&gt;https://github.com/ronin1770/heka-insights-agent&lt;/a&gt;&lt;/p&gt;

</description>
      <category>telemetry</category>
      <category>opensource</category>
      <category>devops</category>
      <category>python</category>
    </item>
    <item>
      <title>Heka Insights Agent Update: Architecture + Configuration Docs Now Reflect Runtime Behavior</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Thu, 16 Apr 2026 05:37:17 +0000</pubDate>
      <link>https://dev.to/munirfarhan/heka-insights-agent-update-architecture-configuration-docs-now-reflect-runtime-behavior-2e8k</link>
      <guid>https://dev.to/munirfarhan/heka-insights-agent-update-architecture-configuration-docs-now-reflect-runtime-behavior-2e8k</guid>
      <description>&lt;h1&gt;
  
  
  Build Update (April 16, 2026)
&lt;/h1&gt;

&lt;p&gt;This week I focused on documentation quality and operational clarity for &lt;code&gt;heka-insights-agent&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The goal was simple: make docs match the code exactly, so contributors and operators can reason about behavior without reading every module first.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I updated
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Rewrote &lt;code&gt;docs/architecture.md&lt;/code&gt; from scratch&lt;/li&gt;
&lt;li&gt;Rewrote &lt;code&gt;docs/configuration.md&lt;/code&gt; from scratch&lt;/li&gt;
&lt;li&gt;Expanded &lt;code&gt;README.md&lt;/code&gt; with project context, setup, and environment guidance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture documentation improvements
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;docs/architecture.md&lt;/code&gt; now documents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the actual runtime topology and control loop in &lt;code&gt;src/main.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;collector boundaries and behavior (&lt;code&gt;CPUCollector&lt;/code&gt;, &lt;code&gt;MemoryCollector&lt;/code&gt;, &lt;code&gt;DiskCollector&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;logging subsystem behavior in &lt;code&gt;src/logger/config.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;current payload shapes emitted by collectors&lt;/li&gt;
&lt;li&gt;known gaps (no sender layer yet, no tests yet, no schema versioning yet)&lt;/li&gt;
&lt;li&gt;practical extension points for next phases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives a real “as-implemented” architecture baseline instead of aspirational text.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration documentation improvements
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;docs/configuration.md&lt;/code&gt; now includes exact behavior for the two active runtime settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;LOG_LOCATION&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CPU_POLL_INTERVAL_SECONDS&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also documents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;source/precedence rules&lt;/li&gt;
&lt;li&gt;defaults and validation behavior&lt;/li&gt;
&lt;li&gt;failure modes&lt;/li&gt;
&lt;li&gt;local setup and production recommendations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Important behavior clarified
&lt;/h2&gt;

&lt;p&gt;Current config loading is split across two env files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;root &lt;code&gt;.env&lt;/code&gt; is used for &lt;code&gt;LOG_LOCATION&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/.env&lt;/code&gt; is used for &lt;code&gt;CPU_POLL_INTERVAL_SECONDS&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That split is now explicitly documented to reduce startup/debug confusion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;For an agent project, docs are part of reliability.&lt;br&gt;&lt;br&gt;
Operators need to know what can fail at startup, where config is read from, and what telemetry shape to expect downstream.&lt;/p&gt;

&lt;p&gt;This update makes onboarding and future refactors safer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;add transport/sender layer (backend adapters)&lt;/li&gt;
&lt;li&gt;add collector-focused tests&lt;/li&gt;
&lt;li&gt;consolidate config loading into a single source&lt;/li&gt;
&lt;li&gt;define schema/versioning strategy for emitted payloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/ronin1770/heka-insights-agent" rel="noopener noreferrer"&gt;https://github.com/ronin1770/heka-insights-agent&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>linux</category>
      <category>devops</category>
      <category>observability</category>
    </item>
    <item>
      <title>Reel Quick - Added Docker Support</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Wed, 15 Apr 2026 14:17:25 +0000</pubDate>
      <link>https://dev.to/munirfarhan/reel-quick-added-docker-support-np7</link>
      <guid>https://dev.to/munirfarhan/reel-quick-added-docker-support-np7</guid>
      <description>&lt;p&gt;Date: 2026-04-15&lt;br&gt;&lt;br&gt;
Project: Reel Quick (FastAPI + Next.js + ARQ + Mongo + Redis + optional GPU workers)&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;We containerized the stack and tried to run it in production mode with Docker Compose.&lt;br&gt;&lt;br&gt;
Initial startup failed for both frontend build and backend runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Issues Found
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Docker Compose path mismatches:
&lt;ul&gt;
&lt;li&gt;Wrong &lt;code&gt;env_file&lt;/code&gt; paths (&lt;code&gt;docker/env/*.env&lt;/code&gt; expected but files were in &lt;code&gt;docker/&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Wrong nginx config mount path (&lt;code&gt;./docker/nginx/nginx.conf&lt;/code&gt; while the actual file was &lt;code&gt;docker/nginx.conf&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Incorrect build context for a compose file located inside &lt;code&gt;docker/&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Frontend TypeScript build failure:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;location&lt;/code&gt; field type mismatch in &lt;code&gt;frontend/app/create_video/page.tsx&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Value inferred as &lt;code&gt;string | undefined&lt;/code&gt; but state expects &lt;code&gt;string | null&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Backend container crash on startup:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ModuleNotFoundError: No module named 'db'&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;backend/main.py&lt;/code&gt; used non-package imports like &lt;code&gt;from db import ...&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Root Causes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Relative paths in compose were not aligned with actual file layout.&lt;/li&gt;
&lt;li&gt;Optional API response property (&lt;code&gt;file_location?&lt;/code&gt;) was used directly inside state update.&lt;/li&gt;
&lt;li&gt;Backend entrypoint (&lt;code&gt;uvicorn backend.main:app&lt;/code&gt;) requires package-safe imports (&lt;code&gt;backend.*&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Fixes Applied
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Docker and Compose
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Updated &lt;code&gt;docker/docker-compose.yml&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;build.context&lt;/code&gt; changed from &lt;code&gt;.&lt;/code&gt; to &lt;code&gt;..&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;env_file&lt;/code&gt; paths corrected to &lt;code&gt;backend.env&lt;/code&gt; and &lt;code&gt;mongo.env&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;nginx bind mount fixed to &lt;code&gt;./nginx.conf&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Updated &lt;code&gt;docker/backend.env&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;Added &lt;code&gt;UPLOAD_FILES_LOCATION=/app/video_files&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Added &lt;code&gt;INPUT_FILES_LOCATION=/app/video_files&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Added repo-root &lt;code&gt;.dockerignore&lt;/code&gt; (Docker uses ignore file from build context root).&lt;/li&gt;

&lt;li&gt;Synced &lt;code&gt;docker/dockerignore&lt;/code&gt; entries.&lt;/li&gt;

&lt;li&gt;Updated &lt;code&gt;docker/README-docker-prod.md&lt;/code&gt; run commands.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Frontend
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fixed type narrowing in &lt;code&gt;frontend/app/create_video/page.tsx&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;Captured &lt;code&gt;file_location&lt;/code&gt; into &lt;code&gt;uploadedLocation&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Guarded before &lt;code&gt;setFiles(...)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Used guaranteed string value in state update.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backend
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Converted backend imports to package imports in &lt;code&gt;backend/main.py&lt;/code&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;from db import ...&lt;/code&gt; -&amp;gt; &lt;code&gt;from backend.db import ...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Similar conversion for &lt;code&gt;logger&lt;/code&gt;, &lt;code&gt;models&lt;/code&gt;, &lt;code&gt;objects&lt;/code&gt;, &lt;code&gt;workers&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Updated &lt;code&gt;backend/objects/sound_prompt_preset.py&lt;/code&gt; import to &lt;code&gt;backend.objects...&lt;/code&gt;.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Commands Used for Deploy
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stop all running containers (host-wide)&lt;/span&gt;
docker ps &lt;span class="nt"&gt;-q&lt;/span&gt; | xargs &lt;span class="nt"&gt;-r&lt;/span&gt; docker stop

&lt;span class="c"&gt;# Start Reel Quick with GPU workers&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; /home/farhan/reel-quick/docker
docker compose &lt;span class="nt"&gt;--profile&lt;/span&gt; gpu up &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--build&lt;/span&gt;

&lt;span class="c"&gt;# Verify&lt;/span&gt;
docker compose ps
docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt; api &lt;span class="nt"&gt;--tail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;200
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Validation Checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker compose ps&lt;/code&gt; shows &lt;code&gt;api&lt;/code&gt;, &lt;code&gt;frontend&lt;/code&gt;, &lt;code&gt;nginx&lt;/code&gt;, &lt;code&gt;mongo&lt;/code&gt;, &lt;code&gt;redis&lt;/code&gt;, workers as running.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;api&lt;/code&gt; logs no longer show &lt;code&gt;ModuleNotFoundError: No module named 'db'&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Frontend image builds successfully (&lt;code&gt;npm run build&lt;/code&gt; passes in container build stage).&lt;/li&gt;
&lt;li&gt;Upload endpoint works (&lt;code&gt;POST /uploads&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Workers/control panel endpoints return expected data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Keep compose file paths consistent with its directory and build context.&lt;/li&gt;
&lt;li&gt;Use package-qualified imports for Python app modules in containerized runtimes.&lt;/li&gt;
&lt;li&gt;Narrow optional API fields before state updates in strict TypeScript projects.&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;.dockerignore&lt;/code&gt; at the actual build context root to avoid bloated builds.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>python</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Build Log: Implementing Full Text Overlay Feature in Reel Quick (with Accurate Live Preview)</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Fri, 10 Apr 2026 05:46:36 +0000</pubDate>
      <link>https://dev.to/munirfarhan/build-log-implementing-full-text-overlay-feature-in-reel-quick-with-accurate-live-preview-4fcd</link>
      <guid>https://dev.to/munirfarhan/build-log-implementing-full-text-overlay-feature-in-reel-quick-with-accurate-live-preview-4fcd</guid>
      <description>&lt;h1&gt;
  
  
  Build Log: Implementing Full Text Overlay Feature in Reel Quick (with Accurate Live Preview)
&lt;/h1&gt;

&lt;p&gt;In this build, I implemented the complete text overlay workflow in &lt;strong&gt;Reel Quick&lt;/strong&gt;: from UI controls to background processing, plus a preview system that better matches final output.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/ronin1770/reel-quick" rel="noopener noreferrer"&gt;https://github.com/ronin1770/reel-quick&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this feature mattered
&lt;/h2&gt;

&lt;p&gt;The earlier flow allowed adding overlay text, but styling control was limited and preview confidence was low.&lt;br&gt;&lt;br&gt;
Users needed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;control text appearance (size, color)&lt;/li&gt;
&lt;li&gt;control placement (top/center/bottom)&lt;/li&gt;
&lt;li&gt;preview changes instantly before processing&lt;/li&gt;
&lt;li&gt;avoid trial-and-error renders&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal was to make text overlays practical for real reel production, not just a placeholder UI.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the feature now includes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Overlay content + timing
&lt;/h3&gt;

&lt;p&gt;Users can create overlays with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;text&lt;/li&gt;
&lt;li&gt;start time&lt;/li&gt;
&lt;li&gt;end time&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Style controls
&lt;/h3&gt;

&lt;p&gt;Added styling inputs in the dialog:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Font size&lt;/strong&gt; range: &lt;code&gt;40–200&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text color&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;HTML5 color picker (&lt;code&gt;input type="color"&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;HEX input (synced with picker)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Position&lt;/strong&gt; selector:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;top&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;center&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bottom&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Live preview (client-side only)
&lt;/h3&gt;

&lt;p&gt;The overlay updates instantly while editing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;no backend call&lt;/li&gt;
&lt;li&gt;no queue call&lt;/li&gt;
&lt;li&gt;no video reprocessing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This lets users iterate quickly before clicking process.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Dialog UX improvements
&lt;/h3&gt;

&lt;p&gt;The modal was redesigned to be usable at production scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;controls on the left&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;preview on the right&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;scrollable modal for smaller viewports&lt;/li&gt;
&lt;li&gt;action buttons always reachable&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The key technical challenge: preview/output mismatch
&lt;/h2&gt;

&lt;p&gt;A big issue was that selected preview size didn’t visually match rendered output.&lt;/p&gt;

&lt;h3&gt;
  
  
  Root cause
&lt;/h3&gt;

&lt;p&gt;The frontend preview rendered text at a raw CSS pixel size inside a display-sized container, while the backend renders text at the actual output video resolution.&lt;br&gt;&lt;br&gt;
Same number, different render context.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fix strategy
&lt;/h3&gt;

&lt;p&gt;I changed preview scaling logic to account for real video dimensions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read intrinsic source dimensions from video metadata (&lt;code&gt;videoWidth&lt;/code&gt;, &lt;code&gt;videoHeight&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Measure actual preview frame dimensions in the modal&lt;/li&gt;
&lt;li&gt;Compute scale factor:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;scale = min(previewWidth/sourceWidth, previewHeight/sourceHeight)&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Render preview text as:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;previewFontSize = selectedFontSize * scale&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Preserve source aspect ratio in preview container&lt;/li&gt;
&lt;li&gt;Mirror vertical placement behavior (top/center/bottom with scaled edge padding)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Result: preview size and placement now feel much closer to final rendered video.&lt;/p&gt;
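&lt;p&gt;The scaling math itself is tiny. In Python-style form (the actual implementation lives in the React component; the function and variable names here are illustrative):&lt;/p&gt;

```python
# Illustrative version of the preview scaling math described above.

def preview_font_size(selected_font_size, source_w, source_h,
                      preview_w, preview_h):
    # Uniform scale that fits the source frame inside the preview box
    # while preserving aspect ratio.
    scale = min(preview_w / source_w, preview_h / source_h)
    return selected_font_size * scale

# A 1080x1920 reel shown in a 270x480 preview panel scales by 0.25,
# so a selected 80px font renders as 20px in the preview.
size = preview_font_size(80, 1080, 1920, 270, 480)
```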




&lt;h2&gt;
  
  
  API + backend integration
&lt;/h2&gt;

&lt;p&gt;Good news: backend already supported style fields, so no backend API redesign was needed.&lt;/p&gt;

&lt;p&gt;The frontend sends per-overlay payload with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;style.font_size&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;style.text_color&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;position.preset&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then it follows the existing pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Save overlays
&lt;code&gt;POST /videos/{video_id}/text-overlays&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Enqueue processing
&lt;code&gt;POST /enqueue/text-overlay&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;ARQ worker picks job and runs MoviePy text overlay composition&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Data flow (end to end)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;User opens text overlay dialog&lt;/li&gt;
&lt;li&gt;Configures text + timing + style + position&lt;/li&gt;
&lt;li&gt;Verifies in live preview&lt;/li&gt;
&lt;li&gt;Saves overlay config&lt;/li&gt;
&lt;li&gt;Enqueues job&lt;/li&gt;
&lt;li&gt;Worker validates and renders final video&lt;/li&gt;
&lt;li&gt;Processed text-overlay video becomes available for download&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Validation and guardrails
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Font size is clamped to configured range&lt;/li&gt;
&lt;li&gt;HEX color is normalized/validated&lt;/li&gt;
&lt;li&gt;Overlay timing is validated before processing&lt;/li&gt;
&lt;li&gt;Position options are constrained to supported presets&lt;/li&gt;
&lt;li&gt;Preview updates remain instant and local&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What changed from earlier behavior
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Old frontend default style used a very small fixed size
&lt;/li&gt;
&lt;li&gt;New feature provides interactive style control + accurate preview scaling
&lt;/li&gt;
&lt;li&gt;Modal UX is now horizontal and production-friendly&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What’s next
&lt;/h2&gt;

&lt;p&gt;Potential follow-ups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;font family selection&lt;/li&gt;
&lt;li&gt;stroke/shadow controls in UI&lt;/li&gt;
&lt;li&gt;drag-and-drop custom placement&lt;/li&gt;
&lt;li&gt;multiple overlay tracks/timeline editing&lt;/li&gt;
&lt;/ul&gt;





&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvyp7cxlj3rpl08ea7ap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvyp7cxlj3rpl08ea7ap.png" alt=" " width="800" height="704"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4619l5msmkovj9ahxgxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4619l5msmkovj9ahxgxf.png" alt=" " width="800" height="723"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>nextjs</category>
      <category>fastapi</category>
      <category>moviepy</category>
    </item>
    <item>
      <title>Build Log: Shipping a Lean Python Telemetry Agent (CPU, Memory, Disk)</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Wed, 08 Apr 2026 09:29:51 +0000</pubDate>
      <link>https://dev.to/munirfarhan/build-log-shipping-a-lean-python-telemetry-agent-cpu-memory-disk-30j1</link>
      <guid>https://dev.to/munirfarhan/build-log-shipping-a-lean-python-telemetry-agent-cpu-memory-disk-30j1</guid>
      <description>&lt;h1&gt;
  
  
  Build Log (April 8, 2026)
&lt;/h1&gt;

&lt;p&gt;Today I implemented the first production-ready telemetry collectors for &lt;code&gt;heka-insights-agent&lt;/code&gt; and wired them into the main polling loop.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Added an optimized &lt;code&gt;CPUCollector&lt;/code&gt; in &lt;code&gt;src/collectors/cpu.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Added a &lt;code&gt;MemoryCollector&lt;/code&gt; in &lt;code&gt;src/collectors/memory.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Added a &lt;code&gt;DiskCollector&lt;/code&gt; in &lt;code&gt;src/collectors/disk.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Wired all collectors into &lt;code&gt;src/main.py&lt;/code&gt; with a shared loop&lt;/li&gt;
&lt;li&gt;Added environment-based poll interval support via &lt;code&gt;CPU_POLL_INTERVAL_SECONDS&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Added &lt;code&gt;python-dotenv&lt;/code&gt; in &lt;code&gt;requirements.txt&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  CPU collector design
&lt;/h2&gt;

&lt;p&gt;I built CPU collection around &lt;code&gt;psutil.cpu_times(...)&lt;/code&gt; snapshots and delta math, so a single data source drives every metric instead of calling both &lt;code&gt;cpu_percent&lt;/code&gt; and &lt;code&gt;cpu_times_percent&lt;/code&gt; each cycle.&lt;/p&gt;

&lt;p&gt;Key design points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No thread offloading (&lt;code&gt;to_thread&lt;/code&gt;) for this workload&lt;/li&gt;
&lt;li&gt;First cycle is warm-up by design&lt;/li&gt;
&lt;li&gt;Supports &lt;code&gt;basic&lt;/code&gt; and &lt;code&gt;detailed&lt;/code&gt; output modes&lt;/li&gt;
&lt;li&gt;Optional per-core output&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;MonotonicTicker&lt;/code&gt; to keep fixed cadence without drift&lt;/li&gt;
&lt;/ul&gt;
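
&lt;p&gt;The delta math can be sketched roughly like this (a minimal sketch: mode names follow &lt;code&gt;psutil.cpu_times()&lt;/code&gt;, and treating &lt;code&gt;iowait&lt;/code&gt; as idle is my assumption, not necessarily the project's exact rule):&lt;/p&gt;

```python
from typing import Mapping

def cpu_busy_percent(prev: Mapping[str, float], curr: Mapping[str, float]) -> float:
    """Percent of CPU time spent non-idle between two cpu-times snapshots.

    Each snapshot maps a mode name (user, system, idle, ...) to cumulative
    seconds, as returned by psutil.cpu_times(). The first poll cycle has no
    previous snapshot, which is why it is treated as warm-up.
    """
    deltas = {mode: curr[mode] - prev.get(mode, 0.0) for mode in curr}
    total = sum(deltas.values())
    if not total > 0:  # no time elapsed, or counters reset
        return 0.0
    # Count iowait as idle time, mirroring psutil's usual busy/idle split.
    idle = deltas.get("idle", 0.0) + deltas.get("iowait", 0.0)
    return round(100.0 * (total - idle) / total, 2)
```

&lt;p&gt;Computing everything from one pair of snapshots is what avoids paying for two separate percent helpers on every cycle.&lt;/p&gt;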

&lt;h2&gt;
  
  
  Memory collector design
&lt;/h2&gt;

&lt;p&gt;Memory collection is intentionally lightweight:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One call each to &lt;code&gt;psutil.virtual_memory()&lt;/code&gt; and &lt;code&gt;psutil.swap_memory()&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;basic&lt;/code&gt; mode returns compact key fields&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;detailed&lt;/code&gt; mode returns full psutil fields&lt;/li&gt;
&lt;li&gt;Raw byte values are preserved (server-side compute handles transformations)&lt;/li&gt;
&lt;/ul&gt;
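
&lt;p&gt;The basic/detailed split amounts to shaping one dict (sketch only; field names follow &lt;code&gt;psutil.virtual_memory()&lt;/code&gt;, and the exact basic-mode key set is my assumption):&lt;/p&gt;

```python
def shape_memory_payload(vm: dict, mode: str = "basic") -> dict:
    """Shape a virtual-memory reading for shipping.

    "detailed" passes every psutil field through untouched; "basic" keeps
    a compact subset. Raw byte values are preserved in both modes, since
    any unit conversion happens server-side.
    """
    if mode == "detailed":
        return dict(vm)
    basic_keys = ("total", "available", "used", "percent")
    return {key: vm[key] for key in basic_keys if key in vm}
```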

&lt;h2&gt;
  
  
  Disk collector design
&lt;/h2&gt;

&lt;p&gt;For disk, I chose cumulative I/O counters (not rates), since rate computation is handled centrally on the server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses &lt;code&gt;psutil.disk_io_counters(perdisk=True)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Returns aggregate and per-disk counters&lt;/li&gt;
&lt;li&gt;Filters to physical devices only&lt;/li&gt;
&lt;li&gt;Excludes partitions from per-disk payload&lt;/li&gt;
&lt;li&gt;Added device-name cache with periodic refresh to reduce repeated filtering overhead&lt;/li&gt;
&lt;/ul&gt;
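
&lt;p&gt;The device-name cache can be sketched like this (illustrative only: the refresh window and the physical-device predicate are assumptions, and the real collector feeds it the keys of &lt;code&gt;psutil.disk_io_counters(perdisk=True)&lt;/code&gt;):&lt;/p&gt;

```python
import time

class DeviceNameCache:
    """Caches which device names pass the physical-device filter so each
    poll cycle does a set lookup instead of re-running the filter."""

    def __init__(self, is_physical, refresh_seconds=300.0):
        self._is_physical = is_physical      # predicate: name -> bool
        self._refresh_seconds = refresh_seconds
        self._names = None                   # cached physical-device set
        self._stamp = 0.0                    # time of last refresh

    def filter(self, names, now=None):
        now = time.monotonic() if now is None else now
        stale = now - self._stamp > self._refresh_seconds
        if self._names is None or stale:
            self._names = {n for n in names if self._is_physical(n)}
            self._stamp = now
        return self._names
```

&lt;p&gt;Between refreshes the cached set is reused as-is, which is the point: a hot-plugged device only appears after the next refresh, a trade-off that is fine at this polling cadence.&lt;/p&gt;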

&lt;h2&gt;
  
  
  Main loop wiring
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;src/main.py&lt;/code&gt; now runs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU collector&lt;/li&gt;
&lt;li&gt;Memory collector&lt;/li&gt;
&lt;li&gt;Disk collector&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All on the same interval, with separate log lines per collector.&lt;/p&gt;

&lt;p&gt;Poll interval is loaded from &lt;code&gt;.env&lt;/code&gt; via:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CPU_POLL_INTERVAL_SECONDS&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Invalid values fall back safely to the &lt;code&gt;5.0s&lt;/code&gt; default.&lt;/p&gt;
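
&lt;p&gt;The fallback logic is roughly this (a sketch; the real loader also pulls &lt;code&gt;.env&lt;/code&gt; in via &lt;code&gt;python-dotenv&lt;/code&gt; before reading the variable):&lt;/p&gt;

```python
import os

DEFAULT_POLL_INTERVAL = 5.0

def poll_interval_from_env(env=os.environ):
    """Read CPU_POLL_INTERVAL_SECONDS, falling back to the 5.0s default
    on missing, non-numeric, or non-positive values."""
    raw = env.get("CPU_POLL_INTERVAL_SECONDS")
    try:
        value = float(raw)
    except (TypeError, ValueError):
        return DEFAULT_POLL_INTERVAL
    return value if value > 0 else DEFAULT_POLL_INTERVAL
```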

&lt;h2&gt;
  
  
  Profiling notes
&lt;/h2&gt;

&lt;p&gt;I profiled a 120-second run and reviewed both process stats and cProfile output.&lt;/p&gt;

&lt;p&gt;Key findings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agent CPU cost is very low (near-idle for this polling interval)&lt;/li&gt;
&lt;li&gt;Max RSS is about 15 MB&lt;/li&gt;
&lt;li&gt;Runtime is dominated by intentional sleep (expected)&lt;/li&gt;
&lt;li&gt;Collector costs are small; disk collection is the heaviest of the three&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What changed after profiling
&lt;/h2&gt;

&lt;p&gt;Based on profile output, I optimized disk collection further:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Added cached physical-device list to avoid filtering every cycle&lt;/li&gt;
&lt;li&gt;Kept output shape unchanged (&lt;code&gt;disk_io&lt;/code&gt; + &lt;code&gt;disk_io_perdisk&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Current status
&lt;/h2&gt;

&lt;p&gt;The agent now has a clean baseline telemetry pipeline with low overhead and clear extension points for transport/shipping.&lt;/p&gt;

&lt;p&gt;Next planned work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add payload shipping to backend endpoint&lt;/li&gt;
&lt;li&gt;Add bounded retry/backoff&lt;/li&gt;
&lt;li&gt;Add collector-focused tests&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Repo URL
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/ronin1770" rel="noopener noreferrer"&gt;
        ronin1770
      &lt;/a&gt; / &lt;a href="https://github.com/ronin1770/heka-insights-agent" rel="noopener noreferrer"&gt;
        heka-insights-agent
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A lightweight agent for collecting essential Linux system telemetry and shipping it to a configurable backend.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;heka-insights-agent&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;A lightweight agent for collecting essential Linux system telemetry and shipping it to a configurable backend.&lt;/p&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/ronin1770/heka-insights-agent" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


</description>
      <category>python</category>
      <category>monitoring</category>
      <category>linux</category>
      <category>devops</category>
    </item>
    <item>
      <title>Build Log: End-to-End Text Overlay Workflow for Video Processing</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Thu, 02 Apr 2026 16:04:23 +0000</pubDate>
      <link>https://dev.to/munirfarhan/build-log-end-to-end-text-overlay-workflow-for-video-processing-14ie</link>
      <guid>https://dev.to/munirfarhan/build-log-end-to-end-text-overlay-workflow-for-video-processing-14ie</guid>
      <description>&lt;h1&gt;
  
  
  Build Log: End-to-End Text Overlay Workflow for Video Processing
&lt;/h1&gt;

&lt;p&gt;Today I completed the core text overlay workflow across the frontend, backend, and worker pipeline. The main goal was to make text overlays behave like a real production feature instead of a partial UI action. That meant saving overlays, enqueueing processing jobs, running worker execution, tracking status, and making sure users can download the final processed file when rendering is complete.&lt;/p&gt;

&lt;h2&gt;
  
  
  What changed
&lt;/h2&gt;

&lt;p&gt;The biggest update was implementing the full text overlay processing lifecycle from the UI down to the background worker.&lt;/p&gt;

&lt;p&gt;A user can now create overlays for a video, save them through the API, enqueue a rendering job, and return to the videos page while processing continues in the background. Once the job is finished, the videos listing reflects that state and download actions point to the processed overlay output instead of the original file.&lt;/p&gt;

&lt;p&gt;I also finalized one important behavior change in overlay saving. Instead of merging new overlays with existing ones, each save call now replaces the stored overlays for that video. This prevents stale overlay entries from lingering and causing recurring overlap conflicts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backend work completed
&lt;/h2&gt;

&lt;p&gt;On the backend, I added the API and queue flow required to support the feature end to end.&lt;/p&gt;

&lt;p&gt;The following routes are now in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;POST /videos/{video_id}/text-overlays&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST /enqueue/text-overlay&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GET /text-overlay-jobs&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GET /videos/{video_id}/text-overlays/download&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The save endpoint now stores the submitted overlays for a given video. The enqueue endpoint pushes a text overlay processing job into the queue and returns quickly, so the frontend does not need to wait on worker readiness before moving forward.&lt;/p&gt;

&lt;p&gt;To support job visibility, I introduced a &lt;code&gt;text_overlay_jobs&lt;/code&gt; collection for tracking processing records. This collection is now used for queued, pending, finished, and failure state tracking. I also added the required database initialization and an index on &lt;code&gt;text_overlay_jobs.video_id&lt;/code&gt; so lookup remains efficient.&lt;/p&gt;
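
&lt;p&gt;A record in that collection can be as small as this (illustrative field names; the post doesn't show the real schema):&lt;/p&gt;

```python
import time

JOB_STATES = ("queued", "pending", "finished", "failed")

def new_overlay_job(video_id):
    """Minimal text_overlay_jobs record; video_id is the indexed field
    that keeps per-video lookup efficient."""
    return {
        "video_id": video_id,
        "status": "queued",
        "created_at": time.time(),
    }

def advance(job, status):
    """Move a job through its lifecycle, rejecting unknown states."""
    if status not in JOB_STATES:
        raise ValueError(f"unknown status: {status}")
    job["status"] = status
    return job
```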

&lt;p&gt;On the worker side, I added &lt;code&gt;process_text_overlay_job&lt;/code&gt; and wired a dedicated &lt;code&gt;text_overlay_worker&lt;/code&gt; into the queue system using &lt;code&gt;TEXT_OVERLAY_QUEUE_NAME&lt;/code&gt;. That worker is also registered in the control panel worker setup so it is managed consistently with the rest of the background processing stack.&lt;/p&gt;

&lt;p&gt;Another important backend fix was changing overlay persistence behavior from merge to replace. Earlier behavior allowed stale overlays to remain in the record, which could trigger overlap validation conflicts even after the user thought they had corrected the layout. Replacing overlays on save solved that problem cleanly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frontend updates
&lt;/h2&gt;

&lt;p&gt;On the frontend, the &lt;code&gt;CreateTextOverlayPage&lt;/code&gt; is now connected to the real processing flow.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Done / Process Video&lt;/strong&gt; action now performs the expected sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Save overlays&lt;/li&gt;
&lt;li&gt;Enqueue the text overlay job&lt;/li&gt;
&lt;li&gt;Redirect the user back to &lt;code&gt;/videos&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sfzla57dmdsoghbijnl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sfzla57dmdsoghbijnl.png" alt=" " width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also added a loading and disabled state to prevent duplicate submissions while the request is in progress. This gives the action more predictable UX and reduces accidental repeat clicks.&lt;/p&gt;

&lt;p&gt;Error handling was improved as well. Instead of showing vague failures, the frontend now parses backend validation messages more clearly, including field paths where possible. That makes it much easier to debug malformed overlay payloads during testing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowt8qtc79ur3fvsaopyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fowt8qtc79ur3fvsaopyw.png" alt=" " width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because some UI controls are still not finalized, I added temporary default values for fields that are not yet exposed in the interface. Current defaults are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;position: top/center&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;style: font size 16, weight normal&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These defaults let the feature move forward without blocking on the final design of overlay controls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Videos page behavior
&lt;/h2&gt;

&lt;p&gt;I also updated the &lt;code&gt;/videos&lt;/code&gt; listing page so the new workflow feels integrated instead of isolated.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Video ID&lt;/strong&gt; column was removed to simplify the table and give more room to user-facing actions.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Text Overlay&lt;/strong&gt; column now reflects job state more meaningfully. If the overlay job for a video is finished, the action shows &lt;strong&gt;Done&lt;/strong&gt;. Otherwise, it shows &lt;strong&gt;Create&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Download behavior was also updated. When an overlay job is finished successfully, the output and error download actions now point to the processed overlay video instead of the original file. If overlay processing has not completed, downloads continue to use the regular video file.&lt;/p&gt;

&lt;p&gt;That small change is important because it makes the final output feel connected to the workflow the user just triggered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the replace-on-save decision mattered
&lt;/h2&gt;

&lt;p&gt;One of the more important logic decisions today was changing overlay saving from merge semantics to replace semantics.&lt;/p&gt;

&lt;p&gt;Merging sounds convenient at first, but in practice it created stale state problems. Old overlays could remain attached to the video even after the user thought they were working from a clean set. That caused overlap validation issues to reappear and made the experience feel inconsistent.&lt;/p&gt;

&lt;p&gt;Replacing overlays on each save call makes the system much more predictable. The saved overlays now reflect exactly what the user submitted in the latest request, which is the safer behavior for this kind of editor flow.&lt;/p&gt;
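
&lt;p&gt;The replace semantics are easy to state in code (dict-backed sketch; with MongoDB the equivalent is a single &lt;code&gt;update_one&lt;/code&gt; with &lt;code&gt;$set&lt;/code&gt; rather than an array push):&lt;/p&gt;

```python
def save_overlays(store, video_id, overlays):
    """Replace-on-save: the stored overlays become exactly what the
    client submitted, so stale entries can't resurface in overlap
    validation on a later save."""
    store[video_id] = list(overlays)
    return store[video_id]
```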

&lt;h2&gt;
  
  
  Current workflow
&lt;/h2&gt;

&lt;p&gt;At this point, the text overlay feature supports the following flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User opens the text overlay page for a video&lt;/li&gt;
&lt;li&gt;User adds or edits overlays&lt;/li&gt;
&lt;li&gt;Frontend saves the overlays&lt;/li&gt;
&lt;li&gt;Frontend enqueues a processing job&lt;/li&gt;
&lt;li&gt;Worker renders the text overlay output in the background&lt;/li&gt;
&lt;li&gt;Job status is tracked in &lt;code&gt;text_overlay_jobs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Videos page reflects completion state&lt;/li&gt;
&lt;li&gt;Download action serves the overlay-rendered file when available&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means the feature is now functional across storage, processing, status tracking, and output retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is still temporary
&lt;/h2&gt;

&lt;p&gt;A few parts are still intentionally temporary or transitional.&lt;/p&gt;

&lt;p&gt;The overlay editor is still using fallback defaults for position and style where dedicated UI controls have not been built yet. The current implementation is enough to keep the workflow functional, but the page still needs the final interactive form controls for layout and style management.&lt;/p&gt;

&lt;p&gt;The videos page is also currently showing a simplified status signal. This works for now, but it could later become more expressive with states like queued, processing, failed, and finished.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final result
&lt;/h2&gt;

&lt;p&gt;This was a solid progress day because the feature moved from partial UI scaffolding into a real end-to-end system. The frontend now triggers meaningful backend work, the worker actually processes jobs, job state is persisted, and the final output can be downloaded through the app.&lt;/p&gt;

&lt;p&gt;Most importantly, the workflow now behaves in a predictable way. Saving overlays replaces prior state, queueing does not block on hard worker health checks, and completed processing is reflected directly in the videos list.&lt;/p&gt;

&lt;p&gt;The next step is to improve the overlay editor UI itself, but the underlying pipeline is now in place.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>python</category>
      <category>buildinpublic</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Build Log: Added the Text Overlay Page UI for Reel Quick</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Wed, 01 Apr 2026 10:06:27 +0000</pubDate>
      <link>https://dev.to/munirfarhan/build-log-added-the-text-overlay-page-ui-for-reel-quick-hdg</link>
      <guid>https://dev.to/munirfarhan/build-log-added-the-text-overlay-page-ui-for-reel-quick-hdg</guid>
      <description>&lt;h1&gt;
  
  
  Build Log: Added the Text Overlay Page UI for Reel Quick
&lt;/h1&gt;

&lt;p&gt;Today I worked on the frontend for &lt;strong&gt;Reel Quick&lt;/strong&gt; and added the first version of the &lt;strong&gt;Text Overlay page&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The goal was to create the page structure and user flow for adding text overlays to a video, without wiring up the full backend save/process flow yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I added
&lt;/h2&gt;

&lt;p&gt;I introduced a new page for creating text overlays:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Route: &lt;code&gt;/create-text-overlay/&amp;lt;video-id&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Added a per-video link from the &lt;code&gt;/videos&lt;/code&gt; page so each video now has a &lt;strong&gt;Create&lt;/strong&gt; action for text overlays&lt;/li&gt;
&lt;li&gt;Added a scaffolded frontend page that we can expand later into the full editor&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  UI decisions locked before implementation
&lt;/h2&gt;

&lt;p&gt;Before building, I confirmed a few product decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This phase is &lt;strong&gt;design only&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Video data should be fetched from the backend API&lt;/li&gt;
&lt;li&gt;“Done / Process Video” is &lt;strong&gt;UI-only for now&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Time display should use &lt;code&gt;mm:ss&lt;/code&gt; format like &lt;code&gt;00:00&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Users can &lt;strong&gt;add and delete&lt;/strong&gt; overlays for now&lt;/li&gt;
&lt;li&gt;No editing flow yet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj01ncekr1ft5t9yb5f0d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj01ncekr1ft5t9yb5f0d.png" alt=" " width="800" height="704"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That helped keep the implementation focused on layout and interaction, not backend behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the new page includes
&lt;/h2&gt;

&lt;p&gt;The new Text Overlay page now has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A header and basic video metadata area&lt;/li&gt;
&lt;li&gt;A left panel for video preview&lt;/li&gt;
&lt;li&gt;A right panel listing the overlays added so far&lt;/li&gt;
&lt;li&gt;A button to open an &lt;strong&gt;Add Text Overlay&lt;/strong&gt; dialog&lt;/li&gt;
&lt;li&gt;Delete action for each overlay item&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The video preview is designed to load using the backend video API, and completed videos can use the download endpoint for preview.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add Text Overlay dialog
&lt;/h2&gt;

&lt;p&gt;When the user clicks &lt;strong&gt;Add Text Overlay&lt;/strong&gt;, a dialog opens with three inputs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Text&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Start Time&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;End Time&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The timing controls are implemented as sliders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start Time slider: from &lt;code&gt;0&lt;/code&gt; to &lt;code&gt;video length - 1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;End Time slider: from &lt;code&gt;start + 1&lt;/code&gt; to &lt;code&gt;video length&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This ensures one important rule stays true:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End time must always be greater than start time.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I also added state logic so the UI enforces that constraint automatically instead of depending only on validation later.&lt;/p&gt;
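
&lt;p&gt;The constraint the sliders enforce boils down to this (a language-neutral sketch; the real logic lives in React state in &lt;code&gt;CreateTextOverlayPage.tsx&lt;/code&gt;):&lt;/p&gt;

```python
def clamp_overlay_times(start, end, video_length):
    """Keep the invariant end > start, mirroring the slider ranges:
    start in [0, length - 1], end in [start + 1, length]."""
    start = max(0, min(start, video_length - 1))
    end = max(start + 1, min(end, video_length))
    return start, end
```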

&lt;h2&gt;
  
  
  Time formatting
&lt;/h2&gt;

&lt;p&gt;Overlay timings are shown in &lt;code&gt;mm:ss&lt;/code&gt; format.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;00:00&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;00:12&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;01:45&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That keeps the interface easy to scan and closer to how users expect media timing to look.&lt;/p&gt;
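
&lt;p&gt;The formatting itself is a two-liner (sketch of the equivalent logic; the page implements it in TypeScript):&lt;/p&gt;

```python
def format_mmss(seconds):
    """Render a timestamp as mm:ss, e.g. 105 -> "01:45"."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes:02d}:{secs:02d}"
```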

&lt;h2&gt;
  
  
  Current scope
&lt;/h2&gt;

&lt;p&gt;This is intentionally &lt;strong&gt;not connected to the full processing flow yet&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Right now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the page is focused on structure and interaction&lt;/li&gt;
&lt;li&gt;overlays can be added and deleted in the UI&lt;/li&gt;
&lt;li&gt;the “Process Video” action is only a placeholder for now&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That gives a usable frontend foundation before moving into API integration and processing workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Files touched
&lt;/h2&gt;

&lt;p&gt;Main changes were made in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;CreateTextOverlayPage.tsx&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;dynamic route page for &lt;code&gt;/create-text-overlay/&amp;lt;video-id&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;VideoList.tsx&lt;/code&gt; to add the link from the videos table&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Environment note
&lt;/h2&gt;

&lt;p&gt;I was not able to run frontend lint/build checks in this environment because &lt;code&gt;npm&lt;/code&gt; was not available:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm: command not found&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So this step was completed as a UI implementation pass, but not fully validated through local frontend tooling in the current container.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this matters
&lt;/h2&gt;

&lt;p&gt;This is a small but useful step toward turning Reel Quick into a more complete video editing workflow.&lt;/p&gt;

&lt;p&gt;Instead of jumping straight into processing logic, I wanted to first lock down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;page flow&lt;/li&gt;
&lt;li&gt;interaction model&lt;/li&gt;
&lt;li&gt;timing constraints&lt;/li&gt;
&lt;li&gt;overlay creation experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That makes the next step clearer: connect the UI to backend save and processing endpoints.&lt;/p&gt;

&lt;h2&gt;
  
  
  Repo
&lt;/h2&gt;

&lt;p&gt;Project repo: &lt;code&gt;https://github.com/ronin1770/reel-quick&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next step
&lt;/h2&gt;

&lt;p&gt;Next I’ll move from UI scaffolding into actual overlay persistence and processing integration so the page can do more than just model the workflow.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>nextjs</category>
      <category>frontend</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>I Added a Queued Video Text Overlay Workflow to Reel Quick</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Tue, 31 Mar 2026 07:32:18 +0000</pubDate>
      <link>https://dev.to/munirfarhan/i-added-a-queued-video-text-overlay-workflow-to-reel-quick-3d6h</link>
      <guid>https://dev.to/munirfarhan/i-added-a-queued-video-text-overlay-workflow-to-reel-quick-3d6h</guid>
      <description>&lt;p&gt;Project repo: &lt;a href="https://github.com/ronin1770/reel-quick" rel="noopener noreferrer"&gt;ronin1770/reel-quick&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recently worked on a backend improvement in &lt;a href="https://github.com/ronin1770/reel-quick" rel="noopener noreferrer"&gt;Reel Quick&lt;/a&gt; that solved a practical workflow problem around text overlays on videos.&lt;/p&gt;

&lt;p&gt;Instead of coupling overlay editing directly with processing, I moved the feature to a &lt;strong&gt;two-step queued workflow&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I changed
&lt;/h2&gt;

&lt;p&gt;I introduced two separate backend actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;POST /videos/{video_id}/text-overlays&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Saves or merges text overlay items for a video.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;POST /enqueue/text-overlay&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Queues processing only when the user is actually ready.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why I changed it
&lt;/h2&gt;

&lt;p&gt;The old approach risked a few common problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;processing could start before all overlays were finalized&lt;/li&gt;
&lt;li&gt;repeated edits could create duplicate overlay records&lt;/li&gt;
&lt;li&gt;there was no clean way to track queued, running, completed, or failed jobs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By separating &lt;strong&gt;save&lt;/strong&gt; from &lt;strong&gt;process&lt;/strong&gt;, the flow became more reliable for users and much easier to reason about on the backend.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key implementation decisions
&lt;/h2&gt;

&lt;p&gt;A few details mattered here:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;multiple overlays can be saved for a single video&lt;/li&gt;
&lt;li&gt;duplicate overlay saves are deduped by &lt;code&gt;overlay_id&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;enqueue accepts only &lt;code&gt;video_id&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;the backend resolves the input path from the stored video record&lt;/li&gt;
&lt;li&gt;processing requires the source video to already be &lt;code&gt;completed&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;a separate &lt;code&gt;text_overlay_jobs&lt;/code&gt; collection tracks async job state&lt;/li&gt;
&lt;/ul&gt;
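
&lt;p&gt;The dedupe step can be sketched as follows (minimal sketch; the real handler persists to the stored video record rather than returning a list):&lt;/p&gt;

```python
def merge_overlays(existing, incoming):
    """Merge-on-save with dedupe: a repeated save of the same overlay_id
    updates the stored entry instead of creating a duplicate record."""
    by_id = {o["overlay_id"]: o for o in existing}
    for overlay in incoming:
        by_id[overlay["overlay_id"]] = overlay
    return list(by_id.values())
```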

&lt;h2&gt;
  
  
  Why this pattern works
&lt;/h2&gt;

&lt;p&gt;I like this design because it reflects how users actually work.&lt;/p&gt;

&lt;p&gt;They want to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;add and tweak overlays&lt;/li&gt;
&lt;li&gt;save changes while editing&lt;/li&gt;
&lt;li&gt;click process only when everything is ready&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That sounds simple, but it makes a big difference in reliability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Repo
&lt;/h2&gt;

&lt;p&gt;Project repo: &lt;a href="https://github.com/ronin1770/reel-quick" rel="noopener noreferrer"&gt;ronin1770/reel-quick&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;This was a useful reminder that not every feature should be handled in one request-response cycle.&lt;/p&gt;

&lt;p&gt;Sometimes the better design is to let users build state first, then explicitly enqueue background processing when they’re done.&lt;/p&gt;

</description>
      <category>python</category>
      <category>backend</category>
      <category>buildinpublic</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Built an Open Source Tool to Automate Instagram Reels — Looking for Contributors</title>
      <dc:creator>Farhan Munir</dc:creator>
      <pubDate>Sun, 29 Mar 2026 06:15:37 +0000</pubDate>
      <link>https://dev.to/munirfarhan/i-built-an-open-source-tool-to-automate-instagram-reels-looking-for-contributors-5ei5</link>
      <guid>https://dev.to/munirfarhan/i-built-an-open-source-tool-to-automate-instagram-reels-looking-for-contributors-5ei5</guid>
      <description>&lt;p&gt;🚀 Why I Started This Project&lt;/p&gt;

&lt;p&gt;Creating short-form content (especially Instagram Reels) is still surprisingly manual.&lt;/p&gt;

&lt;p&gt;Even with AI tools available, the workflow usually looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generate content ideas&lt;/li&gt;
&lt;li&gt;Write scripts manually&lt;/li&gt;
&lt;li&gt;Edit videos using multiple tools&lt;/li&gt;
&lt;li&gt;Stitch everything together&lt;/li&gt;
&lt;li&gt;Export and upload&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It’s fragmented, time-consuming, and error-prone.&lt;/p&gt;

&lt;p&gt;I wanted to simplify this.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 What I’m Building: Reel Quick
&lt;/h2&gt;

&lt;p&gt;I started building Reel Quick, an open-source project aimed at automating parts of the Reels creation workflow.&lt;/p&gt;

&lt;p&gt;👉 Repo: &lt;a href="https://github.com/ronin1770/reel-quick" rel="noopener noreferrer"&gt;https://github.com/ronin1770/reel-quick&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The goal is simple:&lt;/p&gt;

&lt;p&gt;Build a system that helps developers and creators generate and process short-form video content programmatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ What It Does (Current Direction)
&lt;/h2&gt;

&lt;p&gt;The project focuses on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automating video generation workflows&lt;/li&gt;
&lt;li&gt;Integrating AI for content generation&lt;/li&gt;
&lt;li&gt;Simplifying pipelines for short-form media&lt;/li&gt;
&lt;li&gt;Making the system extensible for developers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not just a tool — it’s meant to be a foundation that others can build on.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧱 Tech Direction
&lt;/h2&gt;

&lt;p&gt;Current and planned areas include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python backend&lt;/li&gt;
&lt;li&gt;Video processing (FFmpeg / pipelines)&lt;/li&gt;
&lt;li&gt;API-based architecture&lt;/li&gt;
&lt;li&gt;AI/LLM integrations for content generation&lt;/li&gt;
&lt;li&gt;Automation workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧪 Why Open Source?
&lt;/h2&gt;

&lt;p&gt;Because this problem is bigger than a single implementation.&lt;/p&gt;

&lt;p&gt;There are many directions this can go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI-generated reels&lt;/li&gt;
&lt;li&gt;Automated editing pipelines&lt;/li&gt;
&lt;li&gt;Content scheduling systems&lt;/li&gt;
&lt;li&gt;Creator tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open source allows experimentation and faster iteration.&lt;/p&gt;

&lt;h2&gt;
  
  
  👥 Looking for Contributors
&lt;/h2&gt;

&lt;p&gt;I’m looking for developers interested in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python / backend systems&lt;/li&gt;
&lt;li&gt;AI / LLM integrations&lt;/li&gt;
&lt;li&gt;Video processing&lt;/li&gt;
&lt;li&gt;API design&lt;/li&gt;
&lt;li&gt;System architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can contribute by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixing bugs&lt;/li&gt;
&lt;li&gt;Suggesting improvements&lt;/li&gt;
&lt;li&gt;Building features&lt;/li&gt;
&lt;li&gt;Improving documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📌 Where to Start
&lt;/h2&gt;

&lt;p&gt;👉 GitHub: &lt;a href="https://github.com/ronin1770/reel-quick" rel="noopener noreferrer"&gt;https://github.com/ronin1770/reel-quick&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Issues&lt;/li&gt;
&lt;li&gt;Roadmap (coming next)&lt;/li&gt;
&lt;li&gt;Contribution guidelines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔥 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;This is an early-stage project, and that’s exactly why it’s a good time to get involved.&lt;/p&gt;

&lt;p&gt;If you enjoy building systems, experimenting with AI, or working on automation pipelines — this could be a fun project to collaborate on.&lt;/p&gt;

&lt;p&gt;Let’s build something useful together.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>opensource</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
  </channel>
</rss>
