<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tejas Patil</title>
    <description>The latest articles on DEV Community by Tejas Patil (@tejaspatilblogs).</description>
    <link>https://dev.to/tejaspatilblogs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3519505%2Fa730ff2a-7094-4e7a-b45d-e51ad9e10179.jpeg</url>
      <title>DEV Community: Tejas Patil</title>
      <link>https://dev.to/tejaspatilblogs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tejaspatilblogs"/>
    <language>en</language>
    <item>
      <title>⚡ Lightning-Fast Face Recognition with InsightFace + SORT + Qdrant</title>
      <dc:creator>Tejas Patil</dc:creator>
      <pubDate>Mon, 10 Nov 2025 02:30:00 +0000</pubDate>
      <link>https://dev.to/tejaspatilblogs/lightning-fast-face-recognition-with-insightface-sort-qdrant-5a85</link>
      <guid>https://dev.to/tejaspatilblogs/lightning-fast-face-recognition-with-insightface-sort-qdrant-5a85</guid>
      <description>&lt;p&gt;Most face-recognition systems fail in real life.&lt;br&gt;
Why? Because they try to recognize faces every single frame — slow, unstable, and guaranteed to misidentify people the moment they move their head.&lt;/p&gt;

&lt;p&gt;The solution is not “more detection.”&lt;br&gt;
The solution is SORT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 Why SORT Changes Everything&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;SORT (Simple Online &amp;amp; Real-Time Tracking) gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistent IDs across frames&lt;/li&gt;
&lt;li&gt;No flickering names when a person turns their head&lt;/li&gt;
&lt;li&gt;High FPS (because you don’t detect every frame)&lt;/li&gt;
&lt;li&gt;Stable embeddings (recognize once → reuse for multiple frames)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of SORT as the glue that holds your face-recognition pipeline together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔥 The Recognition Pipeline (Super Simple)&lt;/strong&gt;&lt;br&gt;
1️⃣ Detect faces only every N frames&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if frame_id % 3 == 0:
    faces = face_app.get(frame)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Use SORT to track faces between detections&lt;br&gt;
&lt;code&gt;tracked = tracker.update(detections)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;3️⃣ Assign embeddings via IoU matching&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for det_bbox, det_emb in last_face_map.items():
    if compute_iou(track_bbox, det_bbox) &amp;gt; 0.45:
        track_embeddings[track_id] = det_emb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
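&lt;p&gt;The matching step above leans on a &lt;code&gt;compute_iou&lt;/code&gt; helper that isn't shown (the enclosing loop over tracks supplies &lt;code&gt;track_bbox&lt;/code&gt; and &lt;code&gt;track_id&lt;/code&gt;); a minimal sketch for &lt;code&gt;(x1, y1, x2, y2)&lt;/code&gt; boxes could be:&lt;/p&gt;

```python
def compute_iou(box_a, box_b):
    # Intersection-over-Union for two (x1, y1, x2, y2) boxes.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

&lt;p&gt;A 0.45 threshold, as used above, is forgiving enough to survive the small box drift SORT introduces between detections.&lt;/p&gt;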



&lt;p&gt;4️⃣ Search identity from Qdrant&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hits = client.search(collection_name="faces", query_vector=embedding.tolist(), limit=1)
if hits and hits[0].score &amp;gt; 0.75:
    name = hits[0].payload["name"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5️⃣ Register new users instantly&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if key=='r' and unknown_face:
    client.upsert("faces", [PointStruct(id=uuid4(), vector=emb.tolist(), payload={"name": name})])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;🎯 Why This Works So Damn Well&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Traditional Method&lt;/th&gt;&lt;th&gt;Our SORT Method&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Detect every frame → low FPS&lt;/td&gt;&lt;td&gt;Detect periodically → high FPS&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Identity flickers&lt;/td&gt;&lt;td&gt;Consistent identity&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;CPU overload&lt;/td&gt;&lt;td&gt;Efficient&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Recompute embeddings repeatedly&lt;/td&gt;&lt;td&gt;Compute once per track&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;SORT turns your pipeline from “weak and jittery” to smooth, stable, and lightning fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧠 InsightFace for Embeddings&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;InsightFace gives crisp 512-dimensional embeddings:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;emb = normalize(face.embedding)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;These vectors go straight into Qdrant to enable fast similarity search.&lt;/p&gt;
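&lt;p&gt;The &lt;code&gt;normalize&lt;/code&gt; call above is presumably plain L2 normalization; a minimal stand-alone version (my sketch, not an InsightFace API) would be:&lt;/p&gt;

```python
import numpy as np

def normalize(vec):
    # Scale the embedding to unit length so cosine similarity
    # reduces to a plain dot product.
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec
```

&lt;p&gt;With unit-length vectors, the dot-product and cosine views of similarity agree, so scores stay comparable across detections.&lt;/p&gt;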

&lt;p&gt;&lt;strong&gt;🗂️ Qdrant as the Face Database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Qdrant stores embeddings like a search engine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client.create_collection(
    collection_name="faces",
    vectors_config=VectorParams(size=512, distance=Distance.COSINE)
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Querying is instant — even with tens of thousands of faces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔄 Putting It All Together&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your real-time loop becomes:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Detect → Track → Attach Embedding → Qdrant Search → Show Name&lt;/code&gt;  &lt;/p&gt;

&lt;p&gt;Instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Detect → Recognize → Detect → Recognize → Detect → Recognize → (lag forever)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;🏁 Final Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The winning formula is simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;InsightFace → reliable embeddings&lt;br&gt;
SORT → stable tracking&lt;br&gt;
Qdrant → lightning-fast comparison&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Together, they create a recognition system that actually works in the real world —&lt;br&gt;
fast, smooth, accurate, and scalable.&lt;/p&gt;

</description>
      <category>facialrecognition</category>
      <category>python</category>
      <category>ai</category>
      <category>programming</category>
    </item>
    <item>
      <title>Image blur detection using scipy</title>
      <dc:creator>Tejas Patil</dc:creator>
      <pubDate>Mon, 13 Oct 2025 04:37:36 +0000</pubDate>
      <link>https://dev.to/tejaspatilblogs/image-blur-detection-using-scipy-1ilf</link>
      <guid>https://dev.to/tejaspatilblogs/image-blur-detection-using-scipy-1ilf</guid>
      <description>&lt;p&gt;When you work with images — especially in real-time systems — one tiny issue can ruin your entire pipeline: blur.&lt;/p&gt;

&lt;p&gt;A blurry image means unreliable results.&lt;br&gt;
But how do you detect blur accurately without slowing everything down?&lt;/p&gt;

&lt;p&gt;That’s exactly what I set out to solve — and after testing multiple approaches, I found that sometimes, you don’t need fancy methods.&lt;br&gt;
Even simple ones can work surprisingly well if used right.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Goal: Detect Blur Efficiently&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my project, I needed a blur detection method that could:&lt;/p&gt;

&lt;p&gt;⚡ Work fast for real-time image capture&lt;/p&gt;

&lt;p&gt;💻 Run on limited hardware (like Raspberry Pi)&lt;/p&gt;

&lt;p&gt;🧩 Be lightweight and easy to integrate&lt;/p&gt;

&lt;p&gt;Simple requirements — but meeting all three turned out to be a journey. 😅&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚙️ Attempt 1: Tenengrad (Using OpenCV)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I started with the Tenengrad method, a classic sharpness measure using Sobel gradients.&lt;/p&gt;

&lt;p&gt;It’s accurate and mathematically solid — but there was a big catch:&lt;/p&gt;

&lt;p&gt;💾 OpenCV’s footprint was heavy.&lt;/p&gt;

&lt;p&gt;On resource-limited devices, the disk usage and installation size made it a deal-breaker.&lt;br&gt;
So, I moved on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔍 Attempt 2: SciPy’s convolve2d&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To keep things lightweight, I tried using SciPy’s convolve2d to apply Sobel filters manually.&lt;/p&gt;

&lt;p&gt;It worked decently — small footprint, fast execution, and minimal dependencies.&lt;br&gt;
But…&lt;/p&gt;

&lt;p&gt;⚠️ Accuracy dropped for low-texture or unevenly lit images.&lt;br&gt;
A few blurry images were being classified as “clear.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚡ Attempt 3: FFT-Based Blur Detection&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Then came the FFT (Fast Fourier Transform) approach.&lt;/p&gt;

&lt;p&gt;The idea is elegant — sharp images have more high-frequency content, while blurry ones don’t.&lt;/p&gt;

&lt;p&gt;But in practice:&lt;/p&gt;

&lt;p&gt;❌ Too slow&lt;br&gt;
❌ Too complex for real-time use&lt;/p&gt;

&lt;p&gt;FFT-based methods are great for research or offline analysis, but not for a live camera feed.&lt;/p&gt;
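&lt;p&gt;For reference, the idea can be sketched in a few lines of NumPy: measure what fraction of spectral energy sits outside a small low-frequency window (the cutoff size here is a tunable assumption):&lt;/p&gt;

```python
import numpy as np

def fft_sharpness(image, cutoff=8):
    # Fraction of spectral energy outside a small low-frequency
    # window around the centre of the shifted spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    energy = np.abs(spectrum) ** 2
    cy, cx = energy.shape[0] // 2, energy.shape[1] // 2
    low = energy[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    total = energy.sum()
    return (total - low) / total if total else 0.0
```

&lt;p&gt;The full-frame FFT per image is exactly the cost that ruled this out for a live camera feed.&lt;/p&gt;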

&lt;p&gt;&lt;strong&gt;🤖 Attempt 4: PIQ (PyTorch Image Quality)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, I explored PIQ, a PyTorch-based library that measures perceptual quality — including blur.&lt;/p&gt;

&lt;p&gt;It was extremely accurate, no doubt about that.&lt;/p&gt;

&lt;p&gt;But…&lt;/p&gt;

&lt;p&gt;⏳ Too slow on CPU&lt;br&gt;
⚙️ Required CUDA for speed&lt;br&gt;
🚫 Overkill for lightweight systems&lt;/p&gt;

&lt;p&gt;So I had to drop it too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧩 The Winner: SciPy’s ndimage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After several trials, I circled back to something simple — scipy.ndimage.&lt;/p&gt;

&lt;p&gt;It lets you compute Sobel gradients efficiently and measure how much variation exists in the gradient magnitudes — a direct indicator of image sharpness.&lt;/p&gt;

&lt;p&gt;Here’s the magic in just a few lines 👇&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#magnitude adjustment can be done based on your requirements
from scipy import ndimage
import numpy as np

def estimate_blur(image):
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    magnitude = np.hypot(gx, gy)
    return magnitude.var()  # Higher variance = sharper image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it.&lt;br&gt;
No heavy libraries, no CUDA setup — and it just works.&lt;/p&gt;
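&lt;p&gt;A quick sanity check (repeating the function so the snippet is self-contained, and using a synthetic image since absolute scores only mean something relative to your own camera):&lt;/p&gt;

```python
import numpy as np
from scipy import ndimage

def estimate_blur(image):
    gx = ndimage.sobel(image, axis=0)
    gy = ndimage.sobel(image, axis=1)
    return np.hypot(gx, gy).var()

# Synthetic check: a noisy (sharp) image vs. a Gaussian-blurred copy
rng = np.random.default_rng(42)
sharp = rng.random((128, 128))
blurry = ndimage.gaussian_filter(sharp, sigma=3)

print(estimate_blur(sharp) > estimate_blur(blurry))  # True
```

&lt;p&gt;The variance drops sharply as blur increases, which is what makes a simple threshold workable in practice.&lt;/p&gt;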

&lt;p&gt;&lt;strong&gt;✅ Why It Worked Best&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;🪶 Lightweight — no OpenCV dependency&lt;/p&gt;

&lt;p&gt;⚡ Fast — great for real-time use&lt;/p&gt;

&lt;p&gt;🎯 Accurate enough for production&lt;/p&gt;

&lt;p&gt;🔧 Easy to integrate anywhere&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💬 Why Not Always Go Fancy?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes, we chase fancy methods because they sound more “AI-powered” or “modern.”&lt;br&gt;
But in real-world systems, especially those running on edge devices, simplicity wins.&lt;/p&gt;

&lt;p&gt;You don’t always need deep learning or complex transforms — sometimes a clean, optimized classical method can outperform heavy models when implemented smartly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💡 Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Finding the “best” blur detection method isn’t about using the most advanced algorithm — it’s about what fits your use case and environment.&lt;/p&gt;

&lt;p&gt;For me, scipy.ndimage struck the perfect balance between speed, accuracy, and simplicity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 What’s Next&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I plan to enhance this further by adding:&lt;/p&gt;

&lt;p&gt;Edge density analysis&lt;/p&gt;

&lt;p&gt;Variance normalization&lt;/p&gt;

&lt;p&gt;Adaptive thresholding&lt;/p&gt;

&lt;p&gt;to make it more robust under challenging lighting or texture variations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧾 Summary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;👉 Tenengrad (OpenCV) — dropped due to heavy footprint&lt;br&gt;
👉 SciPy’s convolve2d — fast but less accurate&lt;br&gt;
👉 FFT — too slow for real-time&lt;br&gt;
👉 PIQ — accurate but heavy&lt;br&gt;
✅ SciPy’s ndimage — perfect balance&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💭 Final Takeaway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sometimes, the simplest approach — done right — beats the most complex one.&lt;br&gt;
In engineering, elegance lies in simplicity. ✨&lt;/p&gt;

</description>
      <category>opencv</category>
      <category>python</category>
      <category>raspberrypi</category>
    </item>
    <item>
      <title>ESRGAN to boost image resolution</title>
      <dc:creator>Tejas Patil</dc:creator>
      <pubDate>Mon, 29 Sep 2025 04:33:55 +0000</pubDate>
      <link>https://dev.to/tejaspatilblogs/esrgan-to-boost-image-resolution-3jk1</link>
      <guid>https://dev.to/tejaspatilblogs/esrgan-to-boost-image-resolution-3jk1</guid>
      <description>&lt;p&gt;&lt;strong&gt;How I Used ESRGAN to Boost Image Resolution — From Blurry to Crystal Clear&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Have you ever zoomed into an old photo only to be disappointed by a blocky, pixelated mess? That happened to me recently when I tried to restore some low-quality images. Traditional upscaling methods like bicubic interpolation just made the pictures bigger — not better. That’s when I turned to ESRGAN (Enhanced Super-Resolution Generative Adversarial Network), and the results were nothing short of magical.&lt;/p&gt;

&lt;p&gt;In this blog, I’ll share how I used ESRGAN to improve image resolution, what worked well, and how it compares with other alternatives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is ESRGAN in Simple Terms?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most image upscaling methods stretch pixels to make an image bigger, which often results in blurred edges. ESRGAN, however, takes a different approach.&lt;/p&gt;

&lt;p&gt;It uses deep learning — specifically a Generative Adversarial Network (GAN) — to recreate missing details. Instead of just enlarging pixels, ESRGAN imagines finer textures and sharper edges, making the final image look much more realistic.&lt;/p&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;p&gt;Old method: Bigger, blurrier images.&lt;/p&gt;

&lt;p&gt;ESRGAN: Bigger, sharper, and more natural-looking images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I Implemented ESRGAN&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I kept my workflow simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up the environment
Installed the Python dependencies (PyTorch) and cloned the ESRGAN repo, which ships pretrained weights.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; git clone https://github.com/xinntao/ESRGAN
 cd ESRGAN
 pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Loaded the model&lt;br&gt;
I used a pretrained ESRGAN checkpoint (RRDB_ESRGAN_x4) since training from scratch requires a massive dataset and serious GPU resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Upscaled the images&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import cv2
import torch
from RRDBNet_arch import RRDBNet

# Load pretrained model
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)
model.load_state_dict(torch.load('RRDB_ESRGAN_x4.pth'), strict=True)
model.eval()

# Load image
img = cv2.imread('low_res.png')[:, :, ::-1]  # BGR to RGB
img = torch.from_numpy(img).float().permute(2,0,1).unsqueeze(0) / 255.

# Run ESRGAN
with torch.no_grad():
    out = model(img).clamp(0, 1)

# Save result
out_img = (out.squeeze().permute(1,2,0).numpy() * 255).astype('uint8')
cv2.imwrite('upscaled.png', out_img[:, :, ::-1])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Compared the result
The difference was instantly noticeable — textures looked more natural, and the image had much more detail.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Results &amp;amp; Observations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s what I noticed after applying ESRGAN:&lt;br&gt;
✅ Fine details like hair strands, text, and textures were restored.&lt;br&gt;
✅ The upscaled image looked far more realistic than traditional methods.&lt;br&gt;
✅ Performance was fairly fast with GPU acceleration.&lt;/p&gt;

&lt;p&gt;However, ESRGAN isn’t perfect:&lt;br&gt;
⚠️ On extremely noisy or compressed images, it sometimes “hallucinates” details that weren’t there originally.&lt;br&gt;
⚠️ Running on CPU is slower, especially for larger images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alternatives I Considered&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bicubic Interpolation: Quick, but results were blurry and unimpressive.&lt;/p&gt;

&lt;p&gt;SRCNN: Early deep learning model for super-resolution, but oversmoothed results.&lt;/p&gt;

&lt;p&gt;SRGAN: Predecessor of ESRGAN; good but not as sharp.&lt;/p&gt;

&lt;p&gt;Real-ESRGAN: A practical improvement that handles noisy, real-world images better.&lt;/p&gt;

&lt;p&gt;For my case (relatively clean images), ESRGAN worked best since I wanted maximum detail recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Super-resolution isn’t just for making your selfies sharper. It has real-world applications in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Medical imaging (clearer scans for diagnosis).&lt;/li&gt;
&lt;li&gt;Satellite/drone imagery (better detail for mapping).&lt;/li&gt;
&lt;li&gt;Gaming &amp;amp; media (upscaling old textures).&lt;/li&gt;
&lt;li&gt;Historical photo restoration (bringing old memories back to life).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using ESRGAN, I was able to transform low-resolution, pixelated images into sharp, high-resolution outputs that looked almost lifelike. The process was surprisingly straightforward with pretrained models, and the results were far superior to older methods.&lt;/p&gt;

&lt;p&gt;If you’re struggling with blurry images, I highly recommend giving ESRGAN a try. And if you’re dealing with real-world noisy photos, check out Real-ESRGAN for even better results.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Logging with Loki, python and slack integration</title>
      <dc:creator>Tejas Patil</dc:creator>
      <pubDate>Sun, 21 Sep 2025 12:48:30 +0000</pubDate>
      <link>https://dev.to/tejaspatilblogs/logging-with-loki-python-and-slack-integration-hg0</link>
      <guid>https://dev.to/tejaspatilblogs/logging-with-loki-python-and-slack-integration-hg0</guid>
      <description>&lt;p&gt;Modern applications generate a lot of logs — from API response times to critical errors. Instead of tailing logs manually, I wanted a way to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Collect logs in one place&lt;/li&gt;
&lt;li&gt;Query them easily&lt;/li&gt;
&lt;li&gt;Get notified in Slack when something breaks&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For this, I set up Grafana Loki as my log store, integrated it with Python, and added Slack alerts for production errors.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk through the full setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up Loki&lt;/li&gt;
&lt;li&gt;Configuring Python logging with Loki&lt;/li&gt;
&lt;li&gt;Querying Loki using Python&lt;/li&gt;
&lt;li&gt;Sending alerts to Slack if an error occurs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🔹 Step 1: Setting Up Loki&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, install Loki and Promtail (Promtail collects logs and ships them to Loki).&lt;br&gt;
&lt;strong&gt;Using Docker Compose&lt;/strong&gt;&lt;br&gt;
Create a &lt;code&gt;docker-compose.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3"

services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    command: -config.file=/etc/loki/local-config.yaml

  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/log:/var/log
      - ./config.yml:/etc/promtail/config.yml   # Promtail config from Step 2
    command: -config.file=/etc/promtail/config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start Loki:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker-compose up -d&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now Loki should be running at:&lt;br&gt;
👉 &lt;code&gt;http://localhost:3100&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔹 Step 2: Configure Promtail&lt;/strong&gt;&lt;br&gt;
Promtail’s job is to send your logs into Loki.&lt;br&gt;
Example &lt;code&gt;config.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration ships logs from &lt;code&gt;/var/log/*.log&lt;/code&gt; into Loki.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔹 Step 3: Python Logging to Loki&lt;/strong&gt;&lt;br&gt;
Now let’s push Python application logs to Loki.&lt;br&gt;
Install the Loki Python handler:&lt;br&gt;
&lt;code&gt;pip install logging-loki&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Update your Python app (&lt;code&gt;app.py&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import logging
from logging_loki import LokiHandler

# Loki URL (adjust to your setup)
LOKI_URL = "http://localhost:3100/loki/api/v1/push"

# Configure logger
logger = logging.getLogger("python-app")
logger.setLevel(logging.ERROR)

handler = LokiHandler(
    url=LOKI_URL,
    tags={"application": "my-python-app"},
    version="1",
)

logger.addHandler(handler)

# Example: simulate an error
try:
    1 / 0
except Exception as e:
    logger.error("🚨 Production error occurred", exc_info=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, whenever your app throws an error, it gets pushed directly into Loki with structured metadata.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔹 Step 4: Querying Loki with Python&lt;/strong&gt;&lt;br&gt;
Sometimes, you may want to query Loki programmatically (e.g., check API response times or errors).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

LOKI_QUERY_URL = "http://localhost:3100/loki/api/v1/query"
query = '{application="my-python-app"} |= "error"'

response = requests.get(LOKI_QUERY_URL, params={"query": query})
if response.status_code == 200:
    results = response.json()["data"]["result"]
    for log in results:
        print(log)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lets you fetch error logs directly in Python.&lt;/p&gt;
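&lt;p&gt;The raw response nests log lines under each stream's &lt;code&gt;values&lt;/code&gt; list as &lt;code&gt;[nanosecond_timestamp, line]&lt;/code&gt; pairs, so a small helper (my addition, not part of any Loki client) makes the result easier to consume:&lt;/p&gt;

```python
def flatten_loki_result(result):
    # Loki returns one entry per stream; each entry carries its label
    # set under "stream" and its log lines under "values" as
    # [nanosecond_timestamp, line] pairs.
    lines = []
    for stream in result:
        labels = stream.get("stream", {})
        for ts, line in stream.get("values", []):
            lines.append((int(ts), labels, line))
    return sorted(lines, key=lambda entry: entry[0])  # oldest first
```

&lt;p&gt;Feed it &lt;code&gt;response.json()["data"]["result"]&lt;/code&gt; from the query above to get a flat, time-ordered list.&lt;/p&gt;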

&lt;p&gt;&lt;strong&gt;🔹 Step 5: Slack Alerts for Errors&lt;/strong&gt;&lt;br&gt;
Finally, let’s notify Slack when something breaks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a Slack Webhook&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to Slack → Apps → Create App → Incoming Webhooks&lt;/li&gt;
&lt;li&gt;Copy the Webhook URL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Python Code to Send Alerts&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests
import json
import logging
from logging_loki import LokiHandler

# Slack webhook URL
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def send_slack_alert(message: str):
    payload = {"text": message}
    headers = {"Content-Type": "application/json"}
    res = requests.post(SLACK_WEBHOOK_URL, data=json.dumps(payload), headers=headers)
    if res.status_code != 200:
        print(f"Slack alert failed: {res.text}")

# Loki logger
LOKI_URL = "http://localhost:3100/loki/api/v1/push"
logger = logging.getLogger("python-app")
logger.setLevel(logging.ERROR)
logger.addHandler(LokiHandler(url=LOKI_URL, tags={"app": "python-app"}, version="1"))

# Example: simulate production error
try:
    result = 1 / 0
except Exception as e:
    error_message = f"🚨 Error in production: {str(e)}"
    logger.error(error_message, exc_info=True)  # send to Loki
    send_slack_alert(error_message)             # send to Slack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
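&lt;p&gt;As a variation, the manual &lt;code&gt;send_slack_alert&lt;/code&gt; call can be folded into the logging pipeline itself with a custom handler; a standard-library-only sketch:&lt;/p&gt;

```python
import json
import logging
import urllib.request

class SlackAlertHandler(logging.Handler):
    # Forwards ERROR-and-above records to a Slack incoming webhook,
    # so a single logger.error(...) fans out to every attached sink.
    def __init__(self, webhook_url):
        super().__init__(level=logging.ERROR)
        self.webhook_url = webhook_url

    def emit(self, record):
        payload = json.dumps({"text": self.format(record)}).encode("utf-8")
        request = urllib.request.Request(
            self.webhook_url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(request, timeout=5)
        except OSError:
            self.handleError(record)
```

&lt;p&gt;Attach it next to the LokiHandler with &lt;code&gt;logger.addHandler(SlackAlertHandler(SLACK_WEBHOOK_URL))&lt;/code&gt; and the explicit Slack call in the except block becomes unnecessary.&lt;/p&gt;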



&lt;p&gt;&lt;strong&gt;🎯 Final Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python app runs in production&lt;/li&gt;
&lt;li&gt;Any error → logged to Loki&lt;/li&gt;
&lt;li&gt;Same error → pushed to Slack as a notification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This gives me two layers of monitoring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A full log history in Loki (via Grafana dashboards)&lt;/li&gt;
&lt;li&gt;Real-time notifications in Slack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;✅ Conclusion&lt;/strong&gt;&lt;br&gt;
By combining Loki, Python, and Slack, I now have a centralized logging and alerting system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loki collects and indexes all logs&lt;/li&gt;
&lt;li&gt;Python pushes structured errors directly&lt;/li&gt;
&lt;li&gt;Slack instantly alerts the team when something breaks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup has already saved me debugging time and ensures I never miss a production issue.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
