<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamad Ashraful Islam</title>
    <description>The latest articles on DEV Community by Mohamad Ashraful Islam (@ashraful).</description>
    <link>https://dev.to/ashraful</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F30490%2F5e69ba54-26af-49e9-80ff-900ff26e4603.jpg</url>
      <title>DEV Community: Mohamad Ashraful Islam</title>
      <link>https://dev.to/ashraful</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ashraful"/>
    <language>en</language>
    <item>
      <title>AsyncIO Task Runner (Coro Runner): Simplifying Concurrent Python Task Management</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Fri, 05 Dec 2025 05:36:19 +0000</pubDate>
      <link>https://dev.to/ashraful/asyncio-task-runner-coro-runner-simplifying-concurrent-python-task-management-1ho3</link>
      <guid>https://dev.to/ashraful/asyncio-task-runner-coro-runner-simplifying-concurrent-python-task-management-1ho3</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Managing asynchronous tasks in Python can be complex, especially when you need fine-grained control over concurrency limits and task scheduling. Enter &lt;strong&gt;Coro Runner&lt;/strong&gt; (async-coro-runner), a lightweight Python utility that makes concurrent asynchronous task management straightforward and efficient.&lt;/p&gt;

&lt;p&gt;In this post, we'll explore what makes Coro Runner unique, how to use it, and why it might be the perfect solution for your async workload management needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Coro Runner?
&lt;/h2&gt;

&lt;p&gt;Coro Runner is a Python library built on top of Python's native &lt;code&gt;asyncio&lt;/code&gt; module that provides a simple yet powerful interface for managing concurrent asynchronous tasks. It's designed to simplify the execution of multiple async tasks with configurable concurrency limits, all within a single-threaded environment.&lt;/p&gt;

&lt;p&gt;The library is particularly useful when you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Execute multiple async tasks concurrently with controlled parallelism&lt;/li&gt;
&lt;li&gt;Manage task queues with different priorities&lt;/li&gt;
&lt;li&gt;Avoid overwhelming resources with too many concurrent operations&lt;/li&gt;
&lt;li&gt;Schedule tasks dynamically from anywhere in your application&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Configurable Concurrency&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Define exactly how many tasks should run simultaneously, preventing resource exhaustion while maximizing throughput.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Simple API&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Add tasks from anywhere in your codebase with a straightforward interface that doesn't require deep asyncio knowledge.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Worker Queue System&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Multiple queues can be configured with their own priority levels, allowing sophisticated task management.&lt;/p&gt;
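&lt;p&gt;Conceptually, priority-based queues drain higher-priority work first. The idea can be sketched with the standard library's &lt;code&gt;asyncio.PriorityQueue&lt;/code&gt; — note this is an illustration of the concept, not Coro Runner's actual implementation; the task names and priority values are made up:&lt;/p&gt;

```python
import asyncio

async def drain_in_priority_order():
    # asyncio.PriorityQueue always pops the smallest entry first,
    # so a lower number means a higher priority.
    queue = asyncio.PriorityQueue()
    await queue.put((5, "cleanup-old-logs"))  # low priority
    await queue.put((1, "charge-payment"))    # high priority
    await queue.put((1, "send-webhook"))      # high priority

    order = []
    while not queue.empty():
        _priority, name = await queue.get()
        order.append(name)
    return order

print(asyncio.run(drain_in_priority_order()))
```

&lt;p&gt;Both priority-1 tasks come out before the priority-5 task, regardless of insertion order.&lt;/p&gt;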

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Built on asyncio&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Leverages Python's battle-tested asyncio module (introduced in Python 3.4), ensuring stability and compatibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;Getting started is as simple as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;coro-runner
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Quick Start Guide
&lt;/h2&gt;

&lt;p&gt;Here's a basic example to get you started:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;coro_runner&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CoroRunner&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize the runner with a concurrency limit of 10
&lt;/span&gt;&lt;span class="n"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CoroRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define an async task
&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Your async processing logic here
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;some_async_operation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Processed &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;item_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# Add tasks from anywhere in your code
&lt;/span&gt;&lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;process_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;priority&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;process_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;priority&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;low&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;process_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;priority&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Your task function must be an async function (defined with &lt;code&gt;async def&lt;/code&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Use Case: FastAPI Integration
&lt;/h2&gt;

&lt;p&gt;One of the most powerful applications of Coro Runner is in web applications. Here's an example using FastAPI to handle background tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;coro_runner&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CoroRunner&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize runner once at startup
&lt;/span&gt;&lt;span class="n"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CoroRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;background_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Simulate some async work
&lt;/span&gt;    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Task &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; completed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/fire-task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;trigger_tasks&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Schedule multiple tasks
&lt;/span&gt;    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;background_task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Scheduled &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;count&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; tasks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;uvicorn&lt;/span&gt;
    &lt;span class="n"&gt;uvicorn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why Choose Coro Runner?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Compared to Raw asyncio
&lt;/h3&gt;

&lt;p&gt;While Python's asyncio is powerful, managing task concurrency often requires boilerplate code. Coro Runner abstracts this complexity:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without Coro Runner:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Managing concurrency manually with asyncio
&lt;/span&gt;&lt;span class="n"&gt;semaphore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Semaphore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;limited_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_func&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;semaphore&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;task_func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Need to manage the semaphore for every task...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
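&lt;p&gt;To make the comparison concrete, here is the full boilerplate that manual pattern requires in plain &lt;code&gt;asyncio&lt;/code&gt;: every coroutine has to be wrapped, collected, and gathered by hand. The task body below is a stand-in for real work:&lt;/p&gt;

```python
import asyncio

async def worker(semaphore, task_id):
    # Every task must acquire the shared semaphore itself
    async with semaphore:
        await asyncio.sleep(0)  # stand-in for real async work
        return task_id * 2

async def main():
    semaphore = asyncio.Semaphore(10)  # at most 10 tasks run at once
    # ...and every coroutine must be collected and gathered by hand
    return await asyncio.gather(*(worker(semaphore, i) for i in range(25)))

results = asyncio.run(main())
```

&lt;p&gt;The semaphore, the wrapper, and the gather call are exactly the pieces the runner abstracts away.&lt;/p&gt;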



&lt;p&gt;&lt;strong&gt;With Coro Runner:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Simple and clean
&lt;/span&gt;&lt;span class="n"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CoroRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;your_task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Declare Once, Use Everywhere
&lt;/h3&gt;

&lt;p&gt;A significant advantage is that you can initialize the runner once and call it from anywhere in your application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;coro_runner&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CoroRunner&lt;/span&gt;
&lt;span class="n"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CoroRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# module_a.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;
&lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="c1"&gt;# module_b.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;runner&lt;/span&gt;
&lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_b&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Advanced Features (Coming Soon)
&lt;/h2&gt;

&lt;p&gt;The Coro Runner roadmap includes exciting enhancements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring Tool Integration&lt;/strong&gt;: Real-time task monitoring and analytics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-Level API&lt;/strong&gt;: Advanced features like callbacks, acknowledgments, and error handling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robust Logging&lt;/strong&gt;: Detailed execution tracking for debugging and optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;Coro Runner shines in scenarios such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Web Scraping&lt;/strong&gt;: Fetch multiple URLs concurrently while respecting rate limits&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Processing&lt;/strong&gt;: Process large datasets with controlled parallelism&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Integration&lt;/strong&gt;: Call multiple external APIs without overwhelming them&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background Jobs&lt;/strong&gt;: Queue and execute background tasks in web applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch Operations&lt;/strong&gt;: Perform bulk operations with controlled concurrency&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Example: Controlled Web Scraping
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;aiohttp&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;coro_runner&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CoroRunner&lt;/span&gt;

&lt;span class="n"&gt;runner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CoroRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;concurrency&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Only 5 concurrent requests
&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_url&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;aiohttp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ClientSession&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Schedule 100 URLs but only 5 will run at once
&lt;/span&gt;&lt;span class="n"&gt;urls&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.example.com/data/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;urls&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fetch_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Development and Contribution
&lt;/h2&gt;

&lt;p&gt;Coro Runner is actively developed and welcomes contributions. The project uses &lt;code&gt;uv&lt;/code&gt; for dependency management and includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comprehensive test suite with pytest&lt;/li&gt;
&lt;li&gt;GitHub Actions for CI/CD&lt;/li&gt;
&lt;li&gt;Docker support for isolated testing&lt;/li&gt;
&lt;li&gt;Example applications demonstrating real-world usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To contribute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the repository&lt;/span&gt;
git clone https://github.com/iashraful/async-coro-runner.git
&lt;span class="nb"&gt;cd &lt;/span&gt;async-coro-runner

&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
uv &lt;span class="nb"&gt;sync&lt;/span&gt;

&lt;span class="c"&gt;# Run tests&lt;/span&gt;
uv run pytest &lt;span class="nt"&gt;-s&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Project Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Repository&lt;/strong&gt;: &lt;a href="https://github.com/iashraful/async-coro-runner" rel="noopener noreferrer"&gt;iashraful/async-coro-runner&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI Package&lt;/strong&gt;: &lt;a href="https://pypi.org/project/coro-runner/" rel="noopener noreferrer"&gt;coro-runner&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://github.com/iashraful/async-coro-runner/tree/main/coro_runner/docs/docs.md" rel="noopener noreferrer"&gt;Full Documentation&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Author&lt;/strong&gt;: &lt;a href="https://ashraful.dev" rel="noopener noreferrer"&gt;Ashraful Islam&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.12 or later&lt;/li&gt;
&lt;li&gt;uv (for development)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Coro Runner provides a clean, simple abstraction over asyncio's task management, making it easier to build high-performance concurrent applications in Python. Whether you're building web scrapers, API integrations, or background job processors, Coro Runner helps you manage concurrency without the complexity.&lt;/p&gt;

&lt;p&gt;The library's philosophy is clear: provide powerful concurrency control with the simplest possible API. With its growing feature set and active development, Coro Runner is positioned to become an essential tool for Python developers working with asynchronous code.&lt;/p&gt;

&lt;p&gt;Give it a try in your next project, and experience the simplicity of managed async task execution!&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Help
&lt;/h2&gt;

&lt;p&gt;Have questions or running into issues? Here's how to get help:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Issues&lt;/strong&gt;: &lt;a href="https://github.com/iashraful/async-coro-runner/issues" rel="noopener noreferrer"&gt;GitHub Issues&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discussions&lt;/strong&gt;: Engage with the community on GitHub&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: Check the &lt;a href="https://github.com/iashraful/async-coro-runner/tree/main/coro_runner/docs" rel="noopener noreferrer"&gt;docs folder&lt;/a&gt; for detailed guides&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Stay Updated
&lt;/h2&gt;

&lt;p&gt;⭐ Star the repository on GitHub to stay updated with the latest features and improvements!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you used Coro Runner in your projects? Share your experience in the comments below!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>python</category>
      <category>asyncio</category>
      <category>programming</category>
    </item>
    <item>
      <title>Building a Rock-Solid Home Server Backup Strategy - My 3-2-1 Approach</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Tue, 05 Aug 2025 17:24:55 +0000</pubDate>
      <link>https://dev.to/ashraful/building-a-rock-solid-home-server-backup-strategy-my-3-2-1-approach-bd7</link>
      <guid>https://dev.to/ashraful/building-a-rock-solid-home-server-backup-strategy-my-3-2-1-approach-bd7</guid>
      <description>&lt;p&gt;When you’re running a home server that stores your personal files, photos, media, and even services like Pi-hole or Jellyfin, &lt;strong&gt;a proper backup strategy isn’t optional—it’s essential&lt;/strong&gt;. Data loss can happen due to hardware failure, accidental deletion, or even corruption. That’s why I follow a proven and practical method known as the &lt;strong&gt;3-2-1 backup strategy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this blog post, I’ll walk you through how I’ve implemented this strategy using a combination of &lt;strong&gt;external SSD storage&lt;/strong&gt; and &lt;strong&gt;network-based backups over SSH&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Originally Posted on &lt;a href="https://blog.ashraful.dev/series/home-server/3-321-backup-strategy-of-home-server.html" rel="noopener noreferrer"&gt;Ashraful's Blog&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What is the 3-2-1 Backup Strategy?
&lt;/h2&gt;

&lt;p&gt;The 3-2-1 rule is a simple yet powerful backup principle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;3&lt;/strong&gt; copies of your data (1 primary + 2 backups)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2&lt;/strong&gt; different types of storage media (e.g., SSD, HDD, or cloud storage)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1&lt;/strong&gt; copy stored offsite&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This strategy reduces the risk of total data loss by spreading your backups across multiple locations and types of storage.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Primary Storage: My Home Server
&lt;/h2&gt;

&lt;p&gt;My home server is the heart of my digital setup. It hosts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A media server (Jellyfin)&lt;/li&gt;
&lt;li&gt;Personal files and documents&lt;/li&gt;
&lt;li&gt;Photos and videos (via Immich)&lt;/li&gt;
&lt;li&gt;Various self-hosted services (like Pi-hole, Portainer, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where the &lt;strong&gt;primary copy&lt;/strong&gt; of all my data lives.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Local Backup: External SSD with Rsync Script
&lt;/h2&gt;

&lt;p&gt;To maintain a local backup, I’ve connected an &lt;strong&gt;external SSD&lt;/strong&gt; directly to my home server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The SSD is mounted and automatically recognized on boot.&lt;/li&gt;
&lt;li&gt;I back up &lt;strong&gt;everything important&lt;/strong&gt; to this drive using a Bash script powered by &lt;code&gt;rsync&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A cron job or systemd timer can automate the execution of this script regularly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s a sample of the SSD backup script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nv"&gt;SOURCE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/data"&lt;/span&gt;
&lt;span class="nv"&gt;DEST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/mnt/backup_ssd"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Starting backup to external SSD..."&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
rsync &lt;span class="nt"&gt;-azP&lt;/span&gt; &lt;span class="nt"&gt;--delete&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SOURCE&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEST&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Backup to SSD completed."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Here is my actual script: &lt;a href="https://github.com/iashraful/backup-on-disk" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This makes sure the SSD always has the latest version of my data and also removes deleted files for consistency.&lt;/p&gt;
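&lt;p&gt;The cron automation mentioned above can be a single crontab entry. This is only a sketch; the script path, schedule, and log location are assumptions, not from my actual setup:&lt;/p&gt;

```shell
# Edit the root crontab with: sudo crontab -e
# Run the SSD backup nightly at 02:00 and append output to a log:
0 2 * * * /opt/scripts/backup_ssd.sh >> /var/log/backup_ssd.log 2>&1
```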




&lt;h2&gt;
  
  
  3. Network Backup: Remote Sync Over SSH with Variables
&lt;/h2&gt;

&lt;p&gt;To work toward the &lt;strong&gt;offsite&lt;/strong&gt; part of the 3-2-1 strategy, I’ve also implemented a &lt;strong&gt;remote backup to another machine over my local network&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here’s how it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I use a custom Bash script that uses &lt;code&gt;rsync&lt;/code&gt; over SSH.&lt;/li&gt;
&lt;li&gt;Source, destination, remote user, and host are defined as variables at the top of the script, making it easy to reuse or tweak.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s a sample of the SSH-based backup script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nv"&gt;SOURCE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/data"&lt;/span&gt;
&lt;span class="nv"&gt;REMOTE_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"user"&lt;/span&gt;
&lt;span class="nv"&gt;REMOTE_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"192.168.1.10"&lt;/span&gt;
&lt;span class="nv"&gt;REMOTE_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/mnt/backup/data"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Starting remote network backup..."&lt;/span&gt;
rsync &lt;span class="nt"&gt;-azP&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; ssh &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SOURCE&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REMOTE_USER&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$REMOTE_HOST&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$REMOTE_PATH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Remote backup completed."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This keeps a secondary copy on a separate machine. It isn’t truly offsite yet, but even a same-network copy protects me against a single-machine failure.&lt;/p&gt;
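&lt;p&gt;For the remote sync to run unattended (from cron or a systemd timer), &lt;code&gt;rsync&lt;/code&gt; over SSH needs key-based authentication. A minimal sketch; the key path, user, and host below are example values, not my actual configuration:&lt;/p&gt;

```shell
# Create a dedicated, passphrase-less key for the backup job
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -f "$HOME/.ssh/backup_ed25519" -N "" -q

# One-time, interactive step: install the public key on the backup target
#   ssh-copy-id -i ~/.ssh/backup_ed25519.pub user@192.168.1.10
# Then point rsync at the key inside the backup script:
#   rsync -azP -e "ssh -i $HOME/.ssh/backup_ed25519" "$SOURCE/" "$REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH"
```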




&lt;h2&gt;
  
  
  Why This Setup Works for Me
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Redundancy&lt;/strong&gt;: With multiple local and remote backups, I have peace of mind even if one disk fails.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt;: The SSD makes for quick restores, while SSH syncs avoid re-copying everything unnecessarily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privacy&lt;/strong&gt;: Everything stays in my own network—no cloud involved.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Implementing a backup strategy might seem tedious at first, but trust me—it’s worth every bit of effort. Data loss can be painful and costly, especially when it comes to irreplaceable personal content.&lt;/p&gt;

&lt;p&gt;By following the &lt;strong&gt;3-2-1 strategy&lt;/strong&gt; and using a mix of &lt;strong&gt;external SSDs&lt;/strong&gt; and &lt;strong&gt;network-based backups&lt;/strong&gt;, I’ve built a system that’s simple, reliable, and tailored to my setup. If you’re running a home server, I highly recommend investing time in your own backup plan—your future self will thank you.&lt;/p&gt;

</description>
      <category>backup</category>
      <category>server</category>
      <category>ubuntu</category>
      <category>devops</category>
    </item>
    <item>
      <title>Setting Up Pi-hole in Docker with Proper DNS Configuration</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Tue, 05 Aug 2025 05:55:28 +0000</pubDate>
      <link>https://dev.to/ashraful/setting-up-pi-hole-in-docker-with-proper-dns-configuration-2hhb</link>
      <guid>https://dev.to/ashraful/setting-up-pi-hole-in-docker-with-proper-dns-configuration-2hhb</guid>
      <description>&lt;p&gt;&lt;strong&gt;Originally Posted on &lt;a href="https://blog.ashraful.dev/series/home-server/1-install-pihol-your-network-guardian.html" rel="noopener noreferrer"&gt;Ashraful's Blog&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  This guide walks through
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Installing Pi-hole with Docker Compose
&lt;/li&gt;
&lt;li&gt;Ensuring port 53 is available
&lt;/li&gt;
&lt;li&gt;Disabling the system's DNS resolver if necessary
&lt;/li&gt;
&lt;li&gt;Configuring your router to use Pi-hole
&lt;/li&gt;
&lt;li&gt;Setting up local DNS entries (if needed)&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  📦 Step 1: Install Pi-hole Using Docker Compose
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;docker-compose.yml&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3"&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pihole&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pihole&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pihole/pihole:latest&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;53:53/tcp"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;53:53/udp"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8080:80"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8443:443"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;TZ&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Asia/Dhaka"&lt;/span&gt;
      &lt;span class="na"&gt;WEBPASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;changeme"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./etc-pihole/:/etc/pihole/"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./etc-dnsmasq.d/:/etc/dnsmasq.d/"&lt;/span&gt;
    &lt;span class="na"&gt;dns&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;1.1.1.1&lt;/span&gt;
    &lt;span class="na"&gt;cap_add&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NET_ADMIN&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🚪 Step 2: Make Sure Port 53 Is Available
&lt;/h2&gt;

&lt;p&gt;Port 53 is critical for DNS. Run this to check if it's already in use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;lsof &lt;span class="nt"&gt;-i&lt;/span&gt; :53
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see something like &lt;code&gt;systemd-resolved&lt;/code&gt; or &lt;code&gt;named&lt;/code&gt;, you need to free that port.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛑 Step 3: Disable the System DNS Resolver (If Needed)
&lt;/h2&gt;

&lt;p&gt;On most Linux distros, &lt;code&gt;systemd-resolved&lt;/code&gt; binds to port 53 by default.&lt;/p&gt;

&lt;p&gt;To disable it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl disable &lt;span class="nt"&gt;--now&lt;/span&gt; systemd-resolved
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also replace the symlink for &lt;code&gt;/etc/resolv.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/resolv.conf
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"nameserver 127.0.0.1"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(You can also add a fallback like &lt;code&gt;8.8.8.8&lt;/code&gt; if needed.)&lt;/p&gt;




&lt;h2&gt;
  
  
  🌐 Step 4: Configure Your Router to Use Pi-hole
&lt;/h2&gt;

&lt;p&gt;To apply ad blocking to your whole network:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log into your router admin page
&lt;/li&gt;
&lt;li&gt;Find the &lt;strong&gt;DHCP/DNS settings&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Set &lt;strong&gt;Primary DNS&lt;/strong&gt; to the IP of your Pi-hole server (e.g., &lt;code&gt;192.168.1.10&lt;/code&gt;)
&lt;/li&gt;
&lt;li&gt;Remove or override any secondary DNS that bypasses Pi-hole (like &lt;code&gt;8.8.8.8&lt;/code&gt;)
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now all devices will query Pi-hole for DNS.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧭 Step 5: Configure Local DNS (And Why)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ✅ Why Configure Local DNS?
&lt;/h3&gt;

&lt;p&gt;Setting up local DNS allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access devices by name (&lt;code&gt;nas.local&lt;/code&gt;, &lt;code&gt;printer.lan&lt;/code&gt;, etc.)&lt;/li&gt;
&lt;li&gt;Avoid typing IP addresses manually&lt;/li&gt;
&lt;li&gt;Make services on your home server feel more like the cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔧 How to Do It
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;strong&gt;Pi-hole Admin Panel → Local DNS → DNS Records&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add entries like:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;movie.ashraful.dev → 192.168.10.10
nas.lan → 192.168.10.20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Test from any device:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ping movie.ashraful.dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If it resolves, you're all set!&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pi-hole in Docker is clean and powerful — just make sure port 53 is free.&lt;/li&gt;
&lt;li&gt;Disable &lt;code&gt;systemd-resolved&lt;/code&gt; if it blocks port 53.&lt;/li&gt;
&lt;li&gt;Set your router’s DNS to point to Pi-hole to enable network-wide filtering.&lt;/li&gt;
&lt;li&gt;Use local DNS to make your network smarter and easier to use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enjoy ad-free browsing across your entire network! 🧠🛡️&lt;/p&gt;

</description>
      <category>pihole</category>
      <category>dns</category>
      <category>ubuntu</category>
      <category>linux</category>
    </item>
    <item>
      <title>Fixing DNS Resolution After Disabling systemd-resolved for Pi-hole</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Tue, 05 Aug 2025 04:49:15 +0000</pubDate>
      <link>https://dev.to/ashraful/fixing-dns-resolution-after-disabling-systemd-resolved-for-pi-hole-4df3</link>
      <guid>https://dev.to/ashraful/fixing-dns-resolution-after-disabling-systemd-resolved-for-pi-hole-4df3</guid>
      <description>&lt;p&gt;However, after doing this, I ran into a strange issue: some Python programs (specifically those using &lt;code&gt;dnspython&lt;/code&gt;) could no longer resolve domain names. Here's how I diagnosed and fixed the problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Originally Posted on &lt;a href="https://blog.ashraful.dev/posts/home-server/dns-resolving-at-home-server.html" rel="noopener noreferrer"&gt;Ashraful's Blog&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🔍 The Problem
&lt;/h2&gt;

&lt;p&gt;To route DNS through Pi-hole, I disabled &lt;code&gt;systemd-resolved&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl disable &lt;span class="nt"&gt;--now&lt;/span&gt; systemd-resolved
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My home server then used Pi-hole's IP (e.g., &lt;code&gt;192.168.10.10&lt;/code&gt;) as its DNS server. Most things worked fine — &lt;code&gt;ping&lt;/code&gt;, &lt;code&gt;curl&lt;/code&gt;, and &lt;code&gt;dig&lt;/code&gt; had no issues.&lt;/p&gt;

&lt;p&gt;But some Python code using &lt;code&gt;dnspython&lt;/code&gt; threw an error like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dns.resolver.NoNameservers: All nameservers failed to answer the query
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or even:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FileNotFoundError: [Errno 2] No such file or directory: '/etc/resolv.conf'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧠 What’s Going On?
&lt;/h2&gt;

&lt;p&gt;Many programs — including &lt;code&gt;dnspython&lt;/code&gt; — read DNS server information directly from &lt;code&gt;/etc/resolv.conf&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When &lt;code&gt;systemd-resolved&lt;/code&gt; is active, &lt;code&gt;/etc/resolv.conf&lt;/code&gt; is often a &lt;strong&gt;symlink&lt;/strong&gt; to a dynamically generated file like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/etc/resolv.conf -&amp;gt; ../run/systemd/resolve/stub-resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But once &lt;code&gt;systemd-resolved&lt;/code&gt; is disabled, this symlink points to a non-existent file. Programs depending on it will fail to resolve any domains.&lt;/p&gt;
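&lt;p&gt;Before changing anything, it’s worth confirming that the symlink really is dangling. A small sketch; the &lt;code&gt;check_resolv&lt;/code&gt; helper is mine, not part of the original setup:&lt;/p&gt;

```shell
# Report whether a resolv.conf path is usable or a dangling symlink.
check_resolv() {
    # [ -e ] follows symlinks, so a link to a missing file reports BROKEN
    if [ -e "$1" ]; then
        echo "OK"
    else
        echo "BROKEN"
    fi
}

ls -l /etc/resolv.conf 2>/dev/null || true   # show where the link points
check_resolv /etc/resolv.conf
```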




&lt;h2&gt;
  
  
  ✅ The Fix
&lt;/h2&gt;

&lt;p&gt;We need to &lt;strong&gt;replace the broken symlink&lt;/strong&gt; with a static &lt;code&gt;resolv.conf&lt;/code&gt; file that directly specifies our Pi-hole DNS.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Remove the broken symlink
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo rm&lt;/span&gt; /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Create a new static file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nano /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste your Pi-hole DNS IP (replace with your actual IP):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nameserver 192.168.10.10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Optionally, add a fallback:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nameserver 192.168.10.10
nameserver 8.8.8.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save and close the file.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. (Optional) Prevent overwrites
&lt;/h3&gt;

&lt;p&gt;To ensure no other service modifies it (e.g., NetworkManager):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;chattr +i /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To remove the protection later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;chattr &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧪 Test DNS Resolution
&lt;/h2&gt;

&lt;p&gt;Using &lt;code&gt;nslookup&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nslookup google.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;dns.resolver&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;dns&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;resolver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;google.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything is working, you'll get a valid IP response.&lt;/p&gt;
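&lt;p&gt;As an extra check from the shell, &lt;code&gt;getent&lt;/code&gt; goes through the standard libc/NSS lookup path, so it fails in the same way that programs reading &lt;code&gt;/etc/resolv.conf&lt;/code&gt; do:&lt;/p&gt;

```shell
# Resolve via the libc lookup path; prints address and hostname on success
getent hosts google.com || echo "lookup failed"
```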




&lt;h2&gt;
  
  
  🧵 Summary
&lt;/h2&gt;

&lt;p&gt;Because I'm using &lt;strong&gt;Pi-hole as my network guardian&lt;/strong&gt;, I wanted to disable any system-level DNS services that could bypass it. Disabling &lt;code&gt;systemd-resolved&lt;/code&gt; was necessary — but it broke &lt;code&gt;/etc/resolv.conf&lt;/code&gt;, which some programs still depend on.&lt;/p&gt;

&lt;p&gt;By creating a &lt;strong&gt;manual &lt;code&gt;resolv.conf&lt;/code&gt;&lt;/strong&gt; that points to Pi-hole, I ensured full DNS functionality while keeping all traffic filtered and protected.&lt;/p&gt;




&lt;p&gt;Happy tinkering! 🛠️🧠🔒&lt;/p&gt;

</description>
      <category>pihole</category>
      <category>dns</category>
      <category>ubuntu</category>
      <category>linux</category>
    </item>
    <item>
      <title>The Untold Story of My Home Server - A Journey Through Self-Hosted Services</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Sun, 03 Aug 2025 05:17:15 +0000</pubDate>
      <link>https://dev.to/ashraful/the-untold-story-of-my-home-server-a-journey-through-self-hosted-services-2hj2</link>
      <guid>https://dev.to/ashraful/the-untold-story-of-my-home-server-a-journey-through-self-hosted-services-2hj2</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdz7uowsnvrzuwakgxv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdz7uowsnvrzuwakgxv4.png" alt="Self Hosted Services" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Originally Posted on &lt;a href="https://blog.ashraful.dev/posts/history-of-a-home-server.html" rel="noopener noreferrer"&gt;Ashraful's Blog&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a Home Server?
&lt;/h2&gt;

&lt;p&gt;Before diving into the services, let’s talk about why I set up a home server. Cloud services like Netflix, Google Drive, or Evernote are convenient, but they come with recurring costs, privacy concerns, and dependency on someone else’s infrastructure. A home server lets me host my own media, back up my photos, manage my network, and even run custom apps—all on my terms. My setup runs on an Ubuntu machine (a mini PC bought from Amazon) with Docker, making it easy to manage and scale. Here’s a look at the services that make it tick.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Immich: My Photo and Video Sanctuary
&lt;/h2&gt;

&lt;p&gt;Immich is my go-to for backing up and managing photos and videos. Think Google Photos, but self-hosted, private, and free.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Chose It&lt;/strong&gt;: Immich offers automatic backups, face recognition, and a clean mobile app, all without sending my personal memories to the cloud. It’s perfect for keeping my family’s photos secure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: I installed Immich via Docker, and it syncs photos and videos from my phone automatically over Wi-Fi. It organizes them by date, location, or people (using AI-based tagging), and I can access them from any device via a web interface or app.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: Immich has saved me from losing precious memories—like my kid’s first birthday video—while keeping them off Big Tech’s servers. The setup took some tweaking to get the AI features running smoothly, but it’s been worth it.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  2. Jellyfin: My Personal Netflix
&lt;/h2&gt;

&lt;p&gt;Jellyfin is the crown jewel of my home server, turning it into a media streaming powerhouse. This open-source media server lets me stream my collection of movies, TV shows, music, and even audiobooks to any device—my TV, phone, or laptop—without a subscription.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Chose It&lt;/strong&gt;: Unlike Plex, which has a freemium model with paid features, Jellyfin is 100% free and open-source. It gives me full control over my media library, with no cloud dependency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: I store my media files (MP4s, MP3s, etc.) on the server’s hard drive. Jellyfin organizes them with metadata, album art, and subtitles, creating a sleek interface that rivals commercial streaming platforms. I access it via a web browser or Jellyfin’s apps on my devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: Setting up Jellyfin was a breeze with Docker. I love binge-watching my favorite shows on my Roku, knowing it’s all hosted at home. Plus, I can share my library with family without worrying about data leaks.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  3. Cloudreve: My Personal Cloud Storage
&lt;/h2&gt;

&lt;p&gt;Cloudreve is my Google Drive alternative, offering a self-hosted cloud storage solution for files, documents, and more.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Chose It&lt;/strong&gt;: Cloudreve is lightweight, supports multiple storage backends (like local drives or S3), and has a user-friendly interface. It’s perfect for accessing files from anywhere without relying on third-party cloud providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: I set up Cloudreve to store files on my server’s hard drive. It provides a web interface and WebDAV support, so I can upload, download, or share files securely. I also use it to back up important documents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: Cloudreve has been a game-changer for sharing large files with friends or accessing work documents remotely. The setup was straightforward, though I had to configure SSL for secure access.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  4. Pi-Hole: The Ad-Blocking Network Guardian
&lt;/h2&gt;

&lt;p&gt;Pi-Hole is my network-wide ad and tracker blocker, making my internet experience faster and cleaner.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Chose It&lt;/strong&gt;: Pi-Hole blocks ads at the DNS level, meaning no pop-ups or video ads on any device connected to my network. It’s also a great way to monitor network activity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: Running in a Docker container, Pi-Hole acts as my network’s DNS server. It filters out requests to known ad and tracker domains, replacing them with nothing. I pointed my router’s DNS settings to Pi-Hole, and it works seamlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: The difference is night and day—web pages load faster, and YouTube is ad-free on my smart TV. The dashboard showing blocked domains is oddly satisfying to check.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  5. Portainer: Keeping My Containers in Check
&lt;/h2&gt;

&lt;p&gt;Portainer is my go-to tool for managing Docker containers, making it easy to monitor and control all the services on my server.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Chose It&lt;/strong&gt;: Portainer’s web-based interface simplifies Docker management, especially for someone like me who prefers a GUI over command-line tinkering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: Portainer connects to my Docker environment and lets me start, stop, or update containers, check logs, and monitor resource usage. It’s like a control panel for my server’s services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: Portainer has saved me countless hours of debugging. I can see at a glance if Jellyfin or Immich is acting up and restart containers with a click.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  6. Nginx Proxy Manager: Simplifying Reverse Proxies
&lt;/h2&gt;

&lt;p&gt;Nginx Proxy Manager makes it easy to manage reverse proxies, allowing secure access to my services with custom domains and SSL certificates.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Chose It&lt;/strong&gt;: Setting up reverse proxies manually with Nginx configs is a hassle. Nginx Proxy Manager’s graphical interface lets me configure proxies and SSL (via Let’s Encrypt) in minutes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: I use it to route traffic to my services (e.g., &lt;code&gt;jellyfin.mydomain.com&lt;/code&gt; to Jellyfin’s port). It handles SSL termination, ensuring secure connections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: This was a lifesaver for accessing my services remotely without exposing ports to the internet. The setup was intuitive, and I love the clean dashboard.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  7. Cockpit: Remote Management Made Easy
&lt;/h2&gt;

&lt;p&gt;Cockpit is my tool for remotely managing my Ubuntu server, giving me a web-based interface to monitor and tweak system settings.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Chose It&lt;/strong&gt;: Cockpit is lightweight and integrates well with Ubuntu, offering a one-stop shop for system stats, updates, and user management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: Running in a Docker container, Cockpit provides a dashboard to check CPU, memory, disk usage, and logs. I can also manage services or reboot the server remotely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: Cockpit is my safety net when I’m away from home. I once fixed a stuck service while on vacation, all from my phone’s browser.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  8. My Custom Note and Expense Tracking System: A Personal Touch
&lt;/h2&gt;

&lt;p&gt;The final piece of my home server puzzle is a note and expense tracking system I built myself, tailored to my needs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Why I Built It&lt;/strong&gt;: I wanted a simple, private way to track daily notes and expenses without relying on apps like Notion or Mint, which store data in the cloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How It Works&lt;/strong&gt;: Written in [insert language, e.g., Python with Flask or Node.js], it’s a web app running in a Docker container. It has a minimalist interface for adding notes (with tags and search) and logging expenses (with categories and monthly summaries). Data is stored in a local SQLite database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;My Experience&lt;/strong&gt;: Building this was a labor of love. It’s not perfect, but it’s mine. I use it daily to jot down ideas or track spending, and it’s satisfying to know it’s fully under my control.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Bigger Picture: Why This Matters
&lt;/h2&gt;

&lt;p&gt;Running these services on my home server has been transformative. Jellyfin and Immich keep my media and memories safe and accessible. Cloudreve and my custom app handle my files and personal data without third-party snooping. Pi-Hole, Portainer, Nginx Proxy Manager, and Cockpit keep everything running smoothly and securely. Together, they’ve saved me money, boosted my privacy, and taught me valuable skills.&lt;/p&gt;

&lt;p&gt;But beyond the tech, this setup is about empowerment. It’s about saying “no” to walled gardens and “yes” to owning your digital life. My server isn’t just a machine—it’s a statement that I can build, manage, and control my own corner of the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Your Own Home Server
&lt;/h2&gt;

&lt;p&gt;Inspired to start your own home server? Here’s a quick roadmap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hardware&lt;/strong&gt;: Start small with a Raspberry Pi, an old PC, or a dedicated NAS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS&lt;/strong&gt;: Use Ubuntu Server or a NAS-focused OS like TrueNAS for ease of use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt;: Learn Docker to simplify service deployment (Portainer makes this easier).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Services&lt;/strong&gt;: Pick one or two services (like Jellyfin or Pi-Hole) and expand from there.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community&lt;/strong&gt;: Join communities like r/homelab or r/selfhosted on Reddit, or follow home server discussions on X for tips and inspiration.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Conclusion: The Heart of My Digital Home
&lt;/h2&gt;

&lt;p&gt;My home server is more than a collection of services; it’s a reflection of my desire for independence, creativity, and control. From streaming movies with Jellyfin to tracking expenses with my custom app, every service tells a story of problem-solving and discovery. If you’re curious about home servers, I encourage you to dive in. Start small, experiment, and join the growing community of self-hosters. What’s your home server story? Let me know in the comments or on X!&lt;/p&gt;

</description>
      <category>homeserver</category>
      <category>selfhosted</category>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Did you know docker-compose only takes environment variables from `.env`?</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Sun, 11 Jun 2023 05:25:45 +0000</pubDate>
      <link>https://dev.to/ashraful/did-you-know-docker-compose-only-takes-environment-variables-fromenv-only-4nkf</link>
      <guid>https://dev.to/ashraful/did-you-know-docker-compose-only-takes-environment-variables-fromenv-only-4nkf</guid>
      <description>&lt;p&gt;Recently I encountered an issue with &lt;code&gt;docker compose&lt;/code&gt;. I had separated the environment files and declared them in &lt;code&gt;docker-compose.yaml&lt;/code&gt; like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.dev_env&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I ran &lt;code&gt;docker compose up&lt;/code&gt;, the variables reached the container, but Compose itself did not see them, because Compose only reads variables for substitution from a file named &lt;code&gt;.env&lt;/code&gt;. Then I found the following solution: pass the files explicitly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker compose --env-file .dev_env --env-file .prod_env up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
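&lt;p&gt;The underlying distinction: &lt;code&gt;env_file:&lt;/code&gt; only injects variables into the &lt;em&gt;container&lt;/em&gt;, while variable substitution in the compose file itself is read from &lt;code&gt;.env&lt;/code&gt; (or from files passed with &lt;code&gt;--env-file&lt;/code&gt;). A small sketch; the image tag and variable name are illustrative:&lt;/p&gt;

```yaml
# docker-compose.yaml (sketch)
services:
  app:
    # Substituted by Compose itself -- resolved from .env or --env-file,
    # NOT from env_file below.
    image: "myapp:${TAG}"
    env_file:
      - .dev_env   # only sets variables inside the running container
```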



</description>
      <category>discuss</category>
      <category>docker</category>
      <category>compose</category>
      <category>env</category>
    </item>
    <item>
      <title>Windows is not that bad for software development</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Fri, 01 Jul 2022 17:34:06 +0000</pubDate>
      <link>https://dev.to/ashraful/windows-is-not-that-bad-for-software-development-2joi</link>
      <guid>https://dev.to/ashraful/windows-is-not-that-bad-for-software-development-2joi</guid>
      <description>&lt;p&gt;&lt;strong&gt;Today it's not about VirtualBox or VMware. It's about a Windows 10 feature called WSL (Windows Subsystem for Linux).&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Originally Posted on &lt;a href="https://blog.ashraful.dev/posts/windows-is-not-that-bad.html" rel="noopener noreferrer"&gt;HERE&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is WSL?
&lt;/h2&gt;

&lt;p&gt;As the name says: the &lt;strong&gt;Windows Subsystem for Linux&lt;/strong&gt;. Windows lets you run a Linux operating system inside the Windows environment without extra virtualization software (VirtualBox, VMware, ...). So, how did they do that? That's the burning question, but I won't go down that path today. One thing I can tell you is that they slightly modified the Linux kernel to fit with Windows. Isn't it awesome? YES! I am really excited to share it with you.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are we going to cover today?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites&lt;/li&gt;
&lt;li&gt;Enable WSL.&lt;/li&gt;
&lt;li&gt;Install Ubuntu (22.04).&lt;/li&gt;
&lt;li&gt;Set WSL Resource.&lt;/li&gt;
&lt;li&gt;Install Docker on Ubuntu.&lt;/li&gt;
&lt;li&gt;VSCode WSL development.&lt;/li&gt;
&lt;li&gt;Intellij IDE WSL development.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;You must be running Windows 10 version 2004 or higher (Build 19041 or higher) or Windows 11.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To check your Windows version and build number, press &lt;strong&gt;Windows logo key + R&lt;/strong&gt;, type &lt;strong&gt;winver&lt;/strong&gt;, and select OK.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Enabling WSL
&lt;/h2&gt;

&lt;p&gt;You can enable WSL from the GUI as well as from PowerShell. I always prefer the hard way (the command line), so run the following command in PowerShell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enabling the Virtualization Technology
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Now you need to restart the machine.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Install WSL Update
&lt;/h3&gt;

&lt;p&gt;Download the update installer from &lt;a href="https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi" rel="noopener noreferrer"&gt;here&lt;/a&gt; and run it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set WSL2 as default WSL version.
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl --set-default-version 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update existing distribution (if any)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl --set-version &amp;lt;distribution name&amp;gt; 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install Ubuntu (22.04)
&lt;/h2&gt;

&lt;p&gt;There are several ways to install an OS in WSL. You can download it from the &lt;a href="https://apps.microsoft.com/store/detail/ubuntu/9PDXGNCFSCZV" rel="noopener noreferrer"&gt;Windows Store&lt;/a&gt;, which is the hassle-free route. Once installed, you will be able to log in using the password you set. &lt;strong&gt;Don't be afraid: there is no UI, only a command line, just like Ubuntu Server :)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now open Powershell again,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl -l -v 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will get something like the following:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjgz0fcyr49rez00mqj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjgz0fcyr49rez00mqj2.png" alt="WSL Version" width="597" height="183"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Set WSL Resource
&lt;/h2&gt;

&lt;p&gt;Yes, you still have the chance to set your own resource allocation; otherwise Windows will decide for you. The process is pretty simple: you create a file at a specific location.&lt;br&gt;&lt;br&gt;
Open the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notepad "$env:USERPROFILE\.wslconfig"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add these lines to the editor and save it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[wsl2]
memory=4GB  
processors=4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;You can read &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/wsl-config" rel="noopener noreferrer"&gt;Microsoft's documentation&lt;/a&gt; for more configuration options&lt;/em&gt;.&lt;br&gt;&lt;br&gt;
Now log in to Ubuntu again, from PowerShell/CMD or whatever shell you have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wsl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxtjelvo58hsacrwe5hv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxtjelvo58hsacrwe5hv.png" alt="WSL Login" width="594" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Docker on WSL (Ubuntu)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;It's not mandatory, so don't push yourself into Docker if you are not a Docker person.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Installing Docker here is the same as the regular Ubuntu guide. You can follow the official &lt;a href="https://docs.docker.com/engine/install/ubuntu/" rel="noopener noreferrer"&gt;Docker docs&lt;/a&gt; or simply follow along with me. Run the following commands in the Ubuntu terminal, &lt;strong&gt;not PowerShell&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Update the apt package index and install packages to allow apt to use a repository over HTTPS:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo apt update
$ sudo apt install \
   ca-certificates \
   curl \
   gnupg \
   lsb-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Add Docker’s official GPG key:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings
&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Use the following command to set up the repository
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Install Docker Engine
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce docker-ce-cli containerd.io docker-compose-plugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Start Docker and check its status
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo service docker start
$ sudo service docker status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="6"&gt;
&lt;li&gt;Test your docker installation
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8lzhugmaaqs740uwv7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8lzhugmaaqs740uwv7e.png" alt="WSL Docker Hello" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So, Docker is working.&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  In case of any network issue from WSL
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ echo -e "[network]\ngenerateResolvConf = false" | sudo tee -a /etc/wsl.conf
$ sudo unlink /etc/resolv.conf
$ echo nameserver 1.1.1.1 | sudo tee /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  IPTables related issues
&lt;/h3&gt;

&lt;p&gt;I found some people have iptables-related errors, so I recommend running the following command and &lt;strong&gt;choosing legacy&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;update-alternatives &lt;span class="nt"&gt;--config&lt;/span&gt; iptables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Setting up VSCode
&lt;/h2&gt;

&lt;p&gt;Actually, there is almost nothing to set up. Trust me and do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You just need to install an extension. You can get it &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-wsl" rel="noopener noreferrer"&gt;from here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a project on WSL(ubuntu)&lt;/li&gt;
&lt;li&gt;In the bottom-left corner you will see the remote development button. &lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;New WSL Window&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Open the project from WSL(Ubuntu)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnfxwlvkwjngq2hxgkaa.png" alt="VSCode" width="800" height="501"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Now it's all yours.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up an IntelliJ IDE (PyCharm in my case)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;With an IntelliJ IDE (PyCharm), the code stays on the Windows machine.&lt;/li&gt;
&lt;li&gt;The interpreter (environment) is simply shared from WSL.
The step-by-step guide is &lt;a href="https://www.jetbrains.com/help/pycharm/using-wsl-as-a-remote-interpreter.html#configure-wsl" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I've talked a lot today, but the topic made me do it. So, why am I writing this post?&lt;br&gt;&lt;br&gt;
&lt;strong&gt;In short&lt;/strong&gt;, after 7-8 years away I have been trying Windows 10 again. I am so pleased with WSL and how it works that I thought other people might like it too. Thank you.&lt;/p&gt;

</description>
      <category>windows</category>
      <category>development</category>
      <category>wsl</category>
      <category>linux</category>
    </item>
    <item>
      <title>FastAPI Streaming Response</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Thu, 30 Jun 2022 05:24:31 +0000</pubDate>
      <link>https://dev.to/ashraful/fastapi-streaming-response-39c5</link>
      <guid>https://dev.to/ashraful/fastapi-streaming-response-39c5</guid>
      <description>&lt;h2&gt;
  
  
  What is a streaming response?
&lt;/h2&gt;

&lt;p&gt;A &lt;strong&gt;streaming response&lt;/strong&gt; basically streams the data. So, how does this happen? Let's say you have a good amount of data, for example 10 MB of text. How will you send it through an API? You might hit timeouts or other network issues while downloading such data from the server. Streaming responses exist to resolve exactly this issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it works?
&lt;/h2&gt;

&lt;p&gt;It's really simple. Think of how your download manager works: chunk by chunk. A streaming response works the same way, so your 10 MB will be delivered chunk by chunk. In HTTP terms this is chunked transfer encoding.&lt;/p&gt;
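&lt;p&gt;The chunk-by-chunk idea can be sketched in plain Python with a generator; the 4-byte chunk size is just for illustration:&lt;/p&gt;

```python
from typing import Generator


def chunked(data: bytes, chunk_size: int) -> Generator[bytes, None, None]:
    """Yield the data one fixed-size chunk at a time, like a streaming response."""
    for start in range(0, len(data), chunk_size):
        yield data[start:start + chunk_size]


payload = b"0123456789"  # stand-in for your 10 MB of data
chunks = list(chunked(payload, 4))
# The client receives b"0123", b"4567", b"89" and reassembles them.
assert b"".join(chunks) == payload
```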

&lt;h2&gt;
  
  
  Streaming Response in FastAPI
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Generator&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;starlette.responses&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StreamingResponse&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;HTTPException&lt;/span&gt;

&lt;span class="c1"&gt;# A simple method to open the file and get the data
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_data_from_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Generator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;file_like&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;file_like&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Now response the API
&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_image_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;file_contents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_image_from_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StreamingResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;file_contents&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HTTP_200_OK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;media_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text/html&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;FileNotFoundError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;HTTPException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;detail&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;File not found.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HTTP_404_NOT_FOUND&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just call the function &lt;code&gt;get_image_file&lt;/code&gt; and you'll get your desired streaming response.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I am writing about it?
&lt;/h2&gt;

&lt;p&gt;Because there is a twist. Starlette (the foundation of FastAPI) had a bug around async generators, and the workaround created another issue: when you use a sync generator for serving a file or streaming the response, it becomes really slow.&lt;br&gt;
&lt;a href="https://github.com/encode/starlette/issues/793" rel="noopener noreferrer"&gt;Bug link here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Moral of the story:&lt;/strong&gt; we should use an async generator for serving/streaming the API. The implementation is here:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Generator&lt;/span&gt;

&lt;span class="c1"&gt;# Just use the async function you already have. :)
&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_data_from_file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Generator&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;file_like&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="n"&gt;file_like&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
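&lt;p&gt;To avoid loading the whole file into memory at once, the async generator can also read in fixed-size chunks. A self-contained sketch (the tiny 4-byte chunk size is only for demonstration; for fully non-blocking file I/O you could additionally reach for a library like &lt;code&gt;aiofiles&lt;/code&gt;):&lt;/p&gt;

```python
import asyncio
import os
import tempfile
from typing import AsyncGenerator


async def get_data_from_file(file_path: str, chunk_size: int = 4) -> AsyncGenerator[bytes, None]:
    # Read and yield the file chunk by chunk instead of one big read().
    with open(file=file_path, mode="rb") as file_like:
        while True:
            chunk = file_like.read(chunk_size)
            if not chunk:
                break
            yield chunk


async def collect(path: str) -> list:
    # StreamingResponse would consume the async generator like this for you.
    return [chunk async for chunk in get_data_from_file(path)]


with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"0123456789")
    path = tmp.name

chunks = asyncio.run(collect(path))
os.unlink(path)
assert chunks == [b"0123", b"4567", b"89"]
```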



&lt;p&gt;That's it. Now I am good to go. &lt;/p&gt;

</description>
      <category>python</category>
      <category>fastapi</category>
      <category>webdev</category>
      <category>async</category>
    </item>
    <item>
      <title>Introduction to Scylla DB</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Wed, 22 Dec 2021 07:24:17 +0000</pubDate>
      <link>https://dev.to/ashraful/introduction-to-scylla-db-342b</link>
      <guid>https://dev.to/ashraful/introduction-to-scylla-db-342b</guid>
      <description>&lt;h3&gt;
  
  
  Originally Posted on &lt;a href="https://blog.ashraful.dev/posts/introduction-to-scylladb.html" rel="noopener noreferrer"&gt;Ashraful's Blog&lt;/a&gt;
&lt;/h3&gt;

&lt;h2&gt;
  
  
  What is Scylla DB?
&lt;/h2&gt;

&lt;p&gt;Scylla is a real-time NoSQL database written in C++ that keeps Apache Cassandra's low-level APIs unchanged, so it's a drop-in replacement for Apache Cassandra.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;NoSQL Database&lt;/li&gt;
&lt;li&gt;Open Source&lt;/li&gt;
&lt;li&gt;Good Performance&lt;/li&gt;
&lt;li&gt;Consistency&lt;/li&gt;
&lt;li&gt;Availability&lt;/li&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;li&gt;Drop-in replacement for Apache Cassandra&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;Scylla is a database that scales out and up. Scylla adopted much of its distributed scale-out design from the Apache Cassandra project (which adopted distribution concepts from Amazon Dynamo and data modeling concepts from Google BigTable).&lt;/p&gt;

&lt;h4&gt;
  
  
  Node
&lt;/h4&gt;

&lt;p&gt;A node is the basic unit of organization for a Scylla database. It is comprised of the Scylla database server software running on a computer server.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cluster
&lt;/h4&gt;

&lt;p&gt;A Scylla cluster consists of multiple nodes (Scylla instances). Since Scylla is a distributed database by design, it visualizes the nodes as a hash ring, with each node owning a portion of the ring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbfe90d4pxbgcz1ukxi3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbfe90d4pxbgcz1ukxi3.jpg" alt="Image of ScyllaDB Hash Ring" width="503" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Keyspace
&lt;/h4&gt;

&lt;p&gt;A Scylla keyspace is a collection of tables with attributes that define how data is replicated across nodes. In general, a keyspace is analogous to a database in an RDBMS.&lt;/p&gt;
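&lt;p&gt;For example, creating a keyspace looks a lot like creating a database, except that you also declare how data is replicated (the keyspace name and RF value here are illustrative):&lt;/p&gt;

```sql
-- CQL sketch: a keyspace whose tables are replicated to 3 nodes
CREATE KEYSPACE my_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
```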

&lt;h2&gt;
  
  
  Performance
&lt;/h2&gt;

&lt;p&gt;According to Scylla's documentation, Scylla performs much better than Cassandra. There is a full post on performance here: &lt;a href="https://www.scylladb.com/2021/08/24/apache-cassandra-4-0-vs-scylla-4-4-comparing-performance/" rel="noopener noreferrer"&gt;Scylla vs Cassandra Performance Benchmark&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftudlfwtqb5fzoqimfy27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftudlfwtqb5fzoqimfy27.png" alt="Scylla vs Cassandra" width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Replication Factor
&lt;/h2&gt;

&lt;p&gt;Scylla replicates data according to the replication factor (RF): the number of nodes that store a copy of each piece of data. The RF can be at most the number of nodes in the cluster, and we configure it according to our needs. For example, with RF = 3 and 5 nodes, Scylla will write each piece of data to 3 of the 5 nodes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aer8h3yjk4t069uax2j.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aer8h3yjk4t069uax2j.jpg" alt="RF=3 on nodes 5" width="689" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Consistency Level
&lt;/h2&gt;

&lt;p&gt;The Consistency Level (CL) determines how many replicas in a cluster must acknowledge a read or write operation before it is considered successful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some of the most common Consistency Levels used are:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ANY&lt;/strong&gt; – A write must be written to at least one replica in the cluster. A read waits for a response from at least one replica. It provides the highest availability with the lowest consistency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;QUORUM&lt;/strong&gt; – When a majority of the replicas respond, the request is honored. If RF=3, then 2 replicas respond. QUORUM can be calculated using the formula (n/2 +1) where n is the Replication Factor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ONE&lt;/strong&gt; – If one replica responds, the request is honored.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LOCAL_ONE&lt;/strong&gt; – At least one replica in the local data center responds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LOCAL_QUORUM&lt;/strong&gt; – A quorum of replicas in the local datacenter responds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EACH_QUORUM&lt;/strong&gt; – (unsupported for reads) – A quorum of replicas in ALL datacenters must be written to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ALL&lt;/strong&gt; – A write must be written to all replicas in the cluster, a read waits for a response from all replicas. Provides the lowest availability with the highest consistency.&lt;/li&gt;
&lt;/ul&gt;
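&lt;p&gt;The quorum formula is easy to check by hand; a quick Python sketch (integer division gives the floor of n/2):&lt;/p&gt;

```python
def quorum(replication_factor: int) -> int:
    """Replicas that must respond at QUORUM: floor(RF / 2) + 1."""
    return replication_factor // 2 + 1


# With RF=3 a quorum is 2 replicas; with RF=5 it is 3.
assert quorum(3) == 2
assert quorum(5) == 3
```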

&lt;h2&gt;
  
  
  RF vs CL
&lt;/h2&gt;

&lt;p&gt;Replication Factor (RF) and Consistency Level (CL) are closely tied to performance and high availability. If CL is ALL, the cluster is highly consistent but may lose availability: if even one replica is down, the acknowledgement will not be received and the request will fail, because we configured CL = ALL.&lt;/p&gt;

&lt;p&gt;The following image has RF=3 and CL=1. It means that during a read/write the servers replicate to 3 nodes, but only one acknowledgement is required for the request to succeed. So it's highly available: even if more than one of the replicas is down, requests will not fail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frf9op2cnpvt0qh7saccr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frf9op2cnpvt0qh7saccr.jpg" alt="RF vs CL" width="689" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To summarize, these are the main points we covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scylla has a ring-type architecture&lt;/li&gt;
&lt;li&gt;It’s a distributed, highly available, high performance, low maintenance, highly scalable NoSQL database&lt;/li&gt;
&lt;li&gt;In Scylla all nodes are created equal, there are no master/slave nodes&lt;/li&gt;
&lt;li&gt;Data is automatically distributed and replicated on the cluster according to the replication strategy&lt;/li&gt;
&lt;li&gt;Scylla supports multiple data centers&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>database</category>
      <category>scylla</category>
      <category>cassandra</category>
      <category>nosql</category>
    </item>
    <item>
      <title>Python Unit Testing</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Sun, 13 Jun 2021 16:47:49 +0000</pubDate>
      <link>https://dev.to/ashraful/python-unit-testing-2n25</link>
      <guid>https://dev.to/ashraful/python-unit-testing-2n25</guid>
      <description>&lt;p&gt;&lt;strong&gt;Originally Posted on &lt;a href="https://blog.ashraful.dev/posts/python-unittesting.html" rel="noopener noreferrer"&gt;Ashraful's Blog&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Testing❓
&lt;/h2&gt;

&lt;p&gt;Testing is basically checking whether the features work correctly and finding bugs in the system. There are many types of testing we do with software; today we will talk about the most famous one, unit testing. Let's keep going.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is unit testing❓
&lt;/h2&gt;

&lt;p&gt;A unit test is a way of testing a unit - the smallest piece of code that can be logically isolated in a system. In most programming languages, that is a function, a subroutine, a method, or a property.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python's builtin unittest 💥
&lt;/h2&gt;

&lt;p&gt;Let's write some functions and their unit tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# test_add.py
# A very basic function for adding two numbers
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;

&lt;span class="c1"&gt;# Writing Unit Test
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;unittest&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TryingTheAwesomeUnitTest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;unittest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TestCase&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assertEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;unittest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the file as &lt;code&gt;test_add.py&lt;/code&gt;, run it with &lt;code&gt;python3 test_add.py&lt;/code&gt;, and you should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Introduction to PyTest 🚀
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://docs.pytest.org/" rel="noopener noreferrer"&gt;pytest&lt;/a&gt;: helps you write better programs&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Pytest is another popular testing framework for Python. Let's dig into it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation💡
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install pytest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# test_2_add.py
&lt;/span&gt;
&lt;span class="c1"&gt;# The same old function
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_add&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;9&lt;/span&gt; &lt;span class="c1"&gt;# I want see the fail response
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run the tests🐛
&lt;/h3&gt;

&lt;p&gt;Just run &lt;code&gt;pytest&lt;/code&gt; in the directory where you saved the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pytest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Result🙈
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;============================================ test session starts =============================================
platform darwin -- Python 3.8.1, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /Users/ashraful/Public/scripts
collected 1 item

test_2_add.py F                                                                                        [100%]

================================================== FAILURES ==================================================
__________________________________________________ test_add __________________________________________________

    def test_add():
&amp;gt;       assert add(3, 5) == 9 # I want see the fail response
E       assert 8 == 9
E        +  where 8 = add(3, 5)

test_2_add.py:8: AssertionError
========================================== short test summary info ===========================================
FAILED test_2_add.py::test_add - assert 8 == 9
============================================= 1 failed in 0.04s ==============================================
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tip:&lt;/strong&gt; Don't forget to prefix the filename with &lt;code&gt;test_&lt;/code&gt;; otherwise pytest can't discover the file. The &lt;code&gt;test_&lt;/code&gt; prefix is also a mandatory convention for the test function names themselves.&lt;/p&gt;
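Building on the plain `assert` style above, pytest can also run one test function against many input cases with its `pytest.mark.parametrize` decorator. A minimal sketch, reusing the same `add` function (the case list here is illustrative):

```python
# test_3_add.py -- a parametrized pytest test for the same add function.
import pytest


def add(a: int, b: int) -> int:
    return a + b


# Each tuple is (a, b, expected); pytest runs test_add once per case.
@pytest.mark.parametrize("a, b, expected", [
    (3, 5, 8),
    (-1, 1, 0),
    (0, 0, 0),
])
def test_add(a, b, expected):
    assert add(a, b) == expected
```

Running `pytest -v` reports each case as its own passing or failing test, which makes it easy to cover edge cases without duplicating test functions.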

</description>
      <category>python</category>
      <category>unittest</category>
      <category>pytest</category>
      <category>testing</category>
    </item>
    <item>
      <title>Introduction to API Gateway</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Sun, 07 Feb 2021 10:09:25 +0000</pubDate>
      <link>https://dev.to/ashraful/introduction-to-api-gateway-22km</link>
      <guid>https://dev.to/ashraful/introduction-to-api-gateway-22km</guid>
      <description>&lt;p&gt;&lt;strong&gt;Originally posted on &lt;a href="https://ashraful.dev/posts/introduction-to-api-gateway.html" rel="noopener noreferrer"&gt;Ashraful's Blog&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 What is an API Gateway❓
&lt;/h2&gt;

&lt;p&gt;An API gateway is an API management tool that sits between a client and a collection of backend services.&lt;/p&gt;

&lt;p&gt;An API gateway acts as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result.&lt;br&gt;
&lt;strong&gt;In short:&lt;/strong&gt; An API Gateway is the abstraction layer between client and microservices.&lt;/p&gt;
&lt;h2&gt;
  
  
  🔥 When to use an API Gateway❓
&lt;/h2&gt;

&lt;p&gt;Let's imagine you are building native mobile applications for Android and iOS, so you have two teams developing the apps. On the backend side, you have separate services such as Auth, Product, Cart, and Order, all deployed and running. Let's assume the following IPs are assigned to the services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Auth     192.168.1.10
Product  192.168.1.11
Cart     192.168.1.12
Order    192.168.1.13
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you tell your app teams to call the APIs using the services' IPs/domains directly. Okay, good. Business is good.&lt;br&gt;&lt;br&gt;
After some time, you decide to split the Order and Payment functionality into separate services. Your backend team finishes the job, and now the painful part begins: releasing new app builds (Android, iOS) and making sure every user gets the update, just because an API URL changed. Problems like this can arise at any time. A simple solution is an "API Gateway".&lt;/p&gt;
&lt;h2&gt;
  
  
  🎉 How an API Gateway is solving your problem❓
&lt;/h2&gt;

&lt;p&gt;We've talked about the problem; now let's talk about the solution. The gateway just needs to check the request URI and forward the request to the corresponding service. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Something like this.
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request_uri&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/api/login/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
   &lt;span class="nf"&gt;pass_the_request&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;auth_service&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many ready-made API gateways built for different purposes. For a small project, I like to use Nginx as a reverse proxy. We'll talk about it later.&lt;/p&gt;
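To make the Nginx idea concrete, here is a minimal reverse-proxy sketch that routes by URI prefix to the hypothetical service IPs from above (the `/api/...` paths are illustrative, not from a real deployment):

```nginx
# Hypothetical gateway: route requests by URI prefix to backend services.
server {
    listen 80;

    location /api/login/ {
        proxy_pass http://192.168.1.10;  # Auth service
    }

    location /api/products/ {
        proxy_pass http://192.168.1.11;  # Product service
    }

    location /api/cart/ {
        proxy_pass http://192.168.1.12;  # Cart service
    }

    location /api/orders/ {
        proxy_pass http://192.168.1.13;  # Order service
    }
}
```

With this in place, the apps only ever talk to the gateway's address; moving or splitting a backend service means editing one config file, not shipping new app builds.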

</description>
      <category>microservices</category>
      <category>apigateway</category>
      <category>api</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Horizontal vs Vertical Scaling</title>
      <dc:creator>Mohamad Ashraful Islam</dc:creator>
      <pubDate>Tue, 22 Sep 2020 14:45:31 +0000</pubDate>
      <link>https://dev.to/ashraful/horizontal-vs-vertical-scaling-13h4</link>
      <guid>https://dev.to/ashraful/horizontal-vs-vertical-scaling-13h4</guid>
      <description>&lt;h2&gt;
  
  
  💡 Idea Behind the Scaling
&lt;/h2&gt;

&lt;p&gt;Suppose I have a business application and I want it to be accessible from anywhere over the internet, so I can make money. How am I supposed to do that? I know you are brilliant and already have the answer. Yes!! I will rent a machine from a cloud provider and host the app there.&lt;br&gt;
After some days my app becomes very popular and many people are using it. Now the real problem begins: people are facing downtime and I am losing customers 🤦. So I asked one of my friends, and he told me to scale up the machine to solve the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  💥 What is Scalability?
&lt;/h2&gt;

&lt;p&gt;Scalability is a system's ability to handle more requests, either with a bigger machine or by adding more machines. There are two types of scaling:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Vertical Scaling
&lt;/li&gt;
&lt;li&gt;Horizontal Scaling
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Vertical Scaling&lt;/strong&gt;&lt;br&gt;
In one sentence: buying a bigger machine is called vertical scaling. A single machine serves all the requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Horizontal Scaling&lt;/strong&gt;&lt;br&gt;
Adding more machines is called horizontal scaling. You already have one machine, and you add more machines alongside it to serve the requests. &lt;/p&gt;

&lt;h2&gt;
  
  
  🍻 Pros and Cons
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Horizontal&lt;/th&gt;
&lt;th&gt;Vertical&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1. Load balancer required&lt;/td&gt;
&lt;td&gt;1. No load balancer needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2. Calls go over the network (a little slower)&lt;/td&gt;
&lt;td&gt;2. Interprocess communication (faster)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3. Keeps working until all the servers crash&lt;/td&gt;
&lt;td&gt;3. Single point of failure&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4. Possible data inconsistency across machines&lt;/td&gt;
&lt;td&gt;4. Data stays consistent on one machine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5. Scales out nicely according to need&lt;/td&gt;
&lt;td&gt;5. Hits hardware limits at some point&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
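The first row of the table says horizontal scaling needs a load balancer to spread incoming requests across the machines. As a minimal sketch of the simplest strategy, round-robin, here is the idea in Python (the server IPs are hypothetical):

```python
from itertools import cycle

# Hypothetical pool of identical app servers behind the load balancer.
SERVERS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]

# cycle() yields the servers in order and wraps around forever,
# which is exactly the round-robin policy.
_pool = cycle(SERVERS)

def pick_server() -> str:
    """Return the next server that should receive a request."""
    return next(_pool)

# Five incoming requests get spread across the three machines.
for _ in range(5):
    print(pick_server())
```

Real load balancers (Nginx, HAProxy, cloud LBs) add health checks, weighting, and session stickiness on top of this basic idea, but round-robin is the core of it.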

&lt;h2&gt;
  
  
  🚀 Conclusion
&lt;/h2&gt;

&lt;p&gt;So, now the big question is &lt;strong&gt;which one should we use❓&lt;/strong&gt; My answer is &lt;strong&gt;both&lt;/strong&gt;. It really depends on the system you are designing; both scaling approaches have their strengths. You just need to think about how big a system you are going to build, how many requests per second you need to serve, and whether the consistency model fits. That's it. You'll have your answer. &lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>server</category>
      <category>loadbalancing</category>
      <category>scaling</category>
    </item>
  </channel>
</rss>
