<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: gu1lh3rm3_x</title>
    <description>The latest articles on DEV Community by gu1lh3rm3_x (@gu1lh3rm3_x).</description>
    <link>https://dev.to/gu1lh3rm3_x</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3314045%2F498c2edc-44b2-4e0c-8bbd-ead755d64ef0.jpg</url>
      <title>DEV Community: gu1lh3rm3_x</title>
      <link>https://dev.to/gu1lh3rm3_x</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gu1lh3rm3_x"/>
    <language>en</language>
    <item>
      <title>Cost-Effective Log Management Strategy</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Sat, 20 Sep 2025 03:10:35 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/cost-effective-log-management-strategy-2co2</link>
      <guid>https://dev.to/gu1lh3rm3_x/cost-effective-log-management-strategy-2co2</guid>
      <description>&lt;p&gt;At some point in the lifecycle of any application, log management becomes essential. Ideally, teams should start monitoring logs as early as possible, but in practice, logs often get deprioritized until a real problem arises. While cloud providers like AWS CloudWatch or Google Cloud Logging (Cloud Run) offer built-in solutions, they are not always the most convenient or cost-efficient tools for deeper log analysis.&lt;/p&gt;

&lt;p&gt;Another challenge is sharing access. If you want another team—such as support, security, or compliance—to review logs, are you really going to grant them AWS or GCP access? That introduces unnecessary risk, since cloud environments contain sensitive resources beyond just logs.&lt;/p&gt;

&lt;p&gt;So, the question becomes: how do we handle logs efficiently, securely, and cost-effectively?&lt;/p&gt;

&lt;h3&gt;The Cost Challenge of Logs&lt;/h3&gt;

&lt;p&gt;Most log management platforms (e.g., Datadog, Splunk, New Relic, Elastic Cloud) charge based on data volume ingested and retention period. This means costs scale directly with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log volume per month (e.g., 500GB vs 2TB).&lt;/li&gt;
&lt;li&gt;Retention requirements (e.g., 7 days vs 90 days).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, retaining 1TB of searchable logs for 90 days can quickly rack up thousands of dollars in costs. If you don’t plan ahead, you might be paying for hot storage on data you’ll rarely query.&lt;/p&gt;
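
&lt;p&gt;As a back-of-the-envelope illustration, a few lines of Python show how volume and retention drive the bill (the per-GB prices are placeholder assumptions, not any vendor's actual pricing):&lt;/p&gt;

```python
# Back-of-the-envelope log cost model. All prices are illustrative
# assumptions -- check your providers' current pricing.
HOT_PRICE_PER_GB_MONTH = 2.50    # indexed, searchable storage (SaaS-style)
COLD_PRICE_PER_GB_MONTH = 0.023  # object storage such as S3 Standard

def monthly_cost(volume_gb_per_month, retention_days, price_per_gb_month):
    """Average GB retained at any moment, times the per-GB price."""
    retained_gb = volume_gb_per_month * (retention_days / 30)
    return retained_gb * price_per_gb_month

all_hot = monthly_cost(1024, 90, HOT_PRICE_PER_GB_MONTH)
split = (monthly_cost(1024, 7, HOT_PRICE_PER_GB_MONTH)
         + monthly_cost(1024, 90, COLD_PRICE_PER_GB_MONTH))
print(f"1TB/month, 90 days all hot:   ${all_hot:,.0f}/month")
print(f"1TB/month, 7d hot + 90d cold: ${split:,.0f}/month")
```

&lt;p&gt;Even under generous assumptions, keeping everything hot comes out roughly an order of magnitude more expensive than the split described below.&lt;/p&gt;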

&lt;h3&gt;Step 1: Understand Your Log Requirements&lt;/h3&gt;

&lt;p&gt;Before designing your logging pipeline, answer these key questions:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;What is our log volume?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Example: 2TB/month.&lt;/li&gt;
&lt;li&gt;Collect actual metrics from CloudWatch, Cloud Logging, or your app servers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. &lt;strong&gt;How long do we need searchable logs?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Example: 60 days retention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sometimes compliance requirements mandate 90–180 days (or more).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. &lt;strong&gt;Where are our logs generated and stored today?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Example: logs stored in AWS S3.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;S3 is excellent as a cold storage layer: inexpensive, durable, and compression-friendly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you know volume + retention + source, you can make better architectural decisions.&lt;/p&gt;
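
&lt;p&gt;For the volume question, you can pull real numbers instead of guessing. Here is a hedged sketch using CloudWatch's &lt;code&gt;IncomingBytes&lt;/code&gt; metric (the metric and API are real; the call is commented out because it needs boto3 and AWS credentials):&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# CloudWatch Logs publishes an IncomingBytes metric; summing it over a
# month gives your real ingestion volume.
end = datetime.now(timezone.utc)
params = {
    "Namespace": "AWS/Logs",
    "MetricName": "IncomingBytes",
    "StartTime": end - timedelta(days=30),
    "EndTime": end,
    "Period": 86400,          # one datapoint per day
    "Statistics": ["Sum"],
}
# import boto3
# resp = boto3.client("cloudwatch").get_metric_statistics(**params)
# gb_per_month = sum(d["Sum"] for d in resp["Datapoints"]) / 1024**3
```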

&lt;h3&gt;Step 2: Separate Hot and Cold Storage&lt;/h3&gt;

&lt;p&gt;Not all logs need to be instantly searchable. To save costs, split logs into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Hot storage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Short retention (e.g., 7–14 days).&lt;/li&gt;
&lt;li&gt;Indexed and searchable in tools like OpenSearch, Elasticsearch, or Datadog.&lt;/li&gt;
&lt;li&gt;Used for active debugging, monitoring, and incident response.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Cold storage:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long retention (e.g., 60–180 days).&lt;/li&gt;
&lt;li&gt;Stored cheaply in S3, Glacier, or equivalent.&lt;/li&gt;
&lt;li&gt;Only re-indexed when needed (via scripts or batch jobs).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This layered approach drastically reduces costs while still keeping historical data available.&lt;/p&gt;
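
&lt;p&gt;On AWS, the cold side of this split can be enforced declaratively with an S3 lifecycle policy. A minimal sketch (bucket name, prefix, and day thresholds are assumptions to tune to your retention plan):&lt;/p&gt;

```python
# Sketch of an S3 lifecycle policy that ages logs out of hot object
# storage. Bucket name, prefix, and thresholds are assumptions.
lifecycle = {
    "Rules": [{
        "ID": "tier-and-expire-logs",
        "Status": "Enabled",
        "Filter": {"Prefix": "logs/"},
        "Transitions": [
            {"Days": 30, "StorageClass": "GLACIER"},  # move to cold tier
        ],
        "Expiration": {"Days": 180},  # delete past max retention
    }]
}

# Applying it (needs boto3 and AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-log-archive", LifecycleConfiguration=lifecycle)
```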

&lt;h3&gt;Step 3: Indexing and Visualization&lt;/h3&gt;

&lt;p&gt;S3 alone is not searchable—you need a system that can index logs and provide visualization. Options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Open-source:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.elastic.co/" rel="noopener noreferrer"&gt;Elasticsearch / Kibana&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://opensearch.org/" rel="noopener noreferrer"&gt;OpenSearch / OpenSearch Dashboards&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;Loki / Grafana&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Commercial SaaS&lt;/strong&gt; (faster to set up, but pricier):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Datadog Logs&lt;/li&gt;
&lt;li&gt;New Relic&lt;/li&gt;
&lt;li&gt;Splunk&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Each comes with trade-offs in scalability, query performance, and price. OpenSearch and Loki are great for cost-conscious teams, while Datadog and Splunk offer convenience at a premium.&lt;/p&gt;

&lt;h3&gt;Step 4: Log Shipping (Data Ingestion Layer)&lt;/h3&gt;

&lt;p&gt;To make logs available in your chosen tool, you need to ship logs from the source to the destination. Options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;AWS-native:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Kinesis Data Firehose&lt;/strong&gt; – reliable, scalable, but sometimes overkill.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Lightweight log shippers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fluent Bit&lt;/strong&gt; – very fast, low resource usage, widely adopted.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector (by Datadog)&lt;/strong&gt; – simple config, good performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filebeat&lt;/strong&gt; – part of the Elastic stack.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;These agents can pull logs from CloudWatch, Cloud Logging, or directly from application containers, and push them into OpenSearch, Elasticsearch, or a SaaS platform.&lt;/p&gt;
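
&lt;p&gt;Whatever shipper you pick, the destination side usually speaks the &lt;code&gt;_bulk&lt;/code&gt; API. A minimal sketch of the payload a shipper produces for OpenSearch or Elasticsearch (the index name is an assumption):&lt;/p&gt;

```python
import json

def to_bulk_payload(entries, index="app-logs"):
    """Build an OpenSearch/Elasticsearch _bulk body: one action line
    plus one document line per log entry, newline-delimited."""
    lines = []
    for entry in entries:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(entry))
    return "\n".join(lines) + "\n"  # _bulk bodies must end with a newline

payload = to_bulk_payload([{"level": "error", "msg": "db timeout"}])
# POST this to your-opensearch-host/_bulk with
# Content-Type: application/x-ndjson (e.g. via the requests library).
```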

&lt;h3&gt;Step 5: Rehydrating Logs from Cold Storage&lt;/h3&gt;

&lt;p&gt;When older logs are needed (for audits, investigations, or post-mortems), you don’t want them indexed 24/7. Instead:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pull compressed logs from S3 (or Glacier).&lt;/li&gt;
&lt;li&gt;Run a reindexing script that ingests them back into OpenSearch or Elasticsearch temporarily.&lt;/li&gt;
&lt;li&gt;Once the investigation is done, drop them from hot storage again.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This “rehydration on demand” model ensures you balance cost efficiency with data availability.&lt;/p&gt;
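
&lt;p&gt;A rehydration script can be surprisingly small. This sketch assumes gzipped, newline-delimited JSON logs in S3 (the fetch itself is left as a commented boto3 call):&lt;/p&gt;

```python
import gzip
import json

def rehydrate(gz_bytes, index="rehydrated-logs"):
    """Turn a gzipped, newline-delimited JSON archive (as pulled from
    S3) into a _bulk body for temporary reindexing."""
    text = gzip.decompress(gz_bytes).decode("utf-8")
    out = []
    for line in text.splitlines():
        if line.strip():
            out.append(json.dumps({"index": {"_index": index}}))
            out.append(line)
    return "\n".join(out) + "\n"

# In practice gz_bytes comes from something like:
# boto3.client("s3").get_object(Bucket=..., Key=...)["Body"].read()
archive = gzip.compress(b'{"msg": "old event"}\n')
body = rehydrate(archive)
```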

&lt;h3&gt;Example Architecture&lt;/h3&gt;

&lt;p&gt;Here’s how the pieces fit together:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Applications &amp;amp; Services generate logs.&lt;/li&gt;
&lt;li&gt;Fluent Bit / Vector agents collect and forward logs.&lt;/li&gt;
&lt;li&gt;Logs flow into OpenSearch (hot storage) with a retention of ~7 days.&lt;/li&gt;
&lt;li&gt;In parallel, logs are stored in S3 (cold storage) with a retention of 60–180 days.&lt;/li&gt;
&lt;li&gt;If old logs are needed, a rehydration script reindexes data from S3 into OpenSearch.&lt;/li&gt;
&lt;li&gt;Teams access logs securely via OpenSearch Dashboards / Kibana, without needing AWS or GCP console access.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Key Takeaways&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Don’t pay for hot storage on all logs—split between hot and cold.&lt;/li&gt;
&lt;li&gt;Use lightweight shippers like Fluent Bit or Vector to control ingestion.&lt;/li&gt;
&lt;li&gt;Leverage S3 for retention—cheap, reliable, and compression-friendly.&lt;/li&gt;
&lt;li&gt;Provide access through log platforms, not cloud consoles—safer and easier for collaboration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By structuring your logging pipeline this way, you’ll achieve a balance between cost, performance, and security, while keeping your team efficient when troubleshooting issues.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>devops</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Using Docker + Traefik + WordPress on Hostinger VPS</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Mon, 08 Sep 2025 18:49:15 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/using-docker-traefik-wordpress-on-hostinger-vps-1jfb</link>
      <guid>https://dev.to/gu1lh3rm3_x/using-docker-traefik-wordpress-on-hostinger-vps-1jfb</guid>
      <description>&lt;p&gt;Recently, a friend of mine came to me with an idea: he wanted a WordPress site where he could “upload” old console games (SNES, Game Boy, etc.) so people could play them directly from their browser—even on mobile.&lt;/p&gt;

&lt;p&gt;He already knew what to do on the WordPress side, but first he needed the right infrastructure to host everything.&lt;/p&gt;

&lt;p&gt;I told him:&lt;/p&gt;

&lt;p&gt;“Hey, I actually have a VPS at Hostinger. If you want, we can split the cost and I’ll set up WordPress there for you.”&lt;/p&gt;

&lt;p&gt;He agreed—and that’s where the fun began.&lt;/p&gt;

&lt;h3&gt;Step 1: Running Multiple Projects on One VPS&lt;/h3&gt;

&lt;p&gt;I knew I wanted the flexibility to run multiple projects on this VPS, not just WordPress. That meant I needed a way to isolate apps and keep them easy to manage.&lt;/p&gt;

&lt;p&gt;The answer: Docker.&lt;/p&gt;

&lt;p&gt;The idea was straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run WordPress in a container.&lt;/li&gt;
&lt;li&gt;Use MariaDB as the database.&lt;/li&gt;
&lt;li&gt;Store data in persistent Docker volumes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s my first docker-compose.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.9'

services:
  db:
    image: mariadb:10.11
    container_name: wordpress_db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somepass
      MYSQL_DATABASE: somedatabase
      MYSQL_USER: someuser
      MYSQL_PASSWORD: somepassword
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    image: wordpress:latest
    container_name: wordpress_app
    depends_on:
      - db
    ports:
      - "8080:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: someuser
      WORDPRESS_DB_PASSWORD: somepassword
      WORDPRESS_DB_NAME: somedatabase
    volumes:
      - wordpress_data:/var/www/html

volumes:
  db_data:
  wordpress_data:

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It worked fine, but I didn’t want credentials hardcoded in YAML. So, I moved them into a .env file and updated the configuration.&lt;/p&gt;
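
&lt;p&gt;The companion &lt;code&gt;.env&lt;/code&gt; file looks something like this (all values are placeholders; keep the file out of version control):&lt;/p&gt;

```shell
# .env -- same directory as docker-compose.yml; add it to .gitignore
MYSQL_ROOT_PASSWORD=change-me-root
MYSQL_DATABASE=wordpress
MYSQL_USER=wp_user
MYSQL_PASSWORD=change-me
WORDPRESS_DB_USER=wp_user
WORDPRESS_DB_PASSWORD=change-me
WORDPRESS_DB_NAME=wordpress
```

&lt;p&gt;Note that the WordPress credentials must match the MariaDB ones, or the app container can't connect to the database.&lt;/p&gt;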

&lt;h3&gt;Step 2: Making It Production-Ready with Traefik&lt;/h3&gt;

&lt;p&gt;At this point, we had a functional WordPress setup—but it was only accessible via:&lt;br&gt;
&lt;code&gt;http://VPS_IP:8080&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That’s not practical. We needed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;HTTPS support with certificates.&lt;/li&gt;
&lt;li&gt;A reverse proxy to route requests to different containers (since we planned multiple projects).&lt;/li&gt;
&lt;li&gt;Proper DNS mapping.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Enter Traefik—an HTTP reverse proxy and ingress controller.&lt;/p&gt;

&lt;p&gt;The setup looked like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluol3e7xbxeqb07ogmts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluol3e7xbxeqb07ogmts.png" alt="architecture idea with Traefik" width="525" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSL certificates via Let’s Encrypt&lt;/li&gt;
&lt;li&gt;Traefik as reverse proxy → routes traffic to WordPress&lt;/li&gt;
&lt;li&gt;MariaDB as the database&lt;/li&gt;
&lt;li&gt;Everything connected on the same Docker network&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the updated docker-compose.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.9'

services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: always
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--api.dashboard=true"
      - "--api.insecure=false"
      - "--certificatesresolvers.le.acme.email=youremail@example.com"
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.le.acme.tlschallenge=true"

    ports:
      - "80:80"
      - "443:443"
      - "8080:8080" # Dashboard (optional)
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./letsencrypt:/letsencrypt"
      - "./traefik_logs:/var/log/traefik"
    networks:
      - wpnet

  db:
    image: mariadb:10.11
    container_name: wordpress_db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - db_data:/var/lib/mysql
    networks:
      - wpnet

  wordpress:
    image: wordpress:latest
    container_name: wordpress_app
    depends_on:
      - db
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: ${WORDPRESS_DB_USER}
      WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      WORDPRESS_DB_NAME: ${WORDPRESS_DB_NAME}
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_HOME','https://HOST_DNS.com');
        define('WP_SITEURL','https://HOST_DNS.com');
    volumes:
      - wordpress_data:/var/www/html
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.wordpress.rule=Host(`HOST_DNS.com`) || Host(`www.HOST_DNS.com`)"
      - "traefik.http.routers.wordpress.entrypoints=websecure"
      - "traefik.http.routers.wordpress.tls.certresolver=le"
    networks:
      - wpnet

volumes:
  db_data:
  wordpress_data:

networks:
  wpnet:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the site could be accessed via:&lt;br&gt;
&lt;code&gt;https://HOST_DNS.com&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;Step 3: Backups &amp;amp; Recovery&lt;/h3&gt;

&lt;p&gt;A working site is great—but what about backups?&lt;/p&gt;

&lt;p&gt;I wrote a simple backup script with a cronjob that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Dumps the database&lt;/li&gt;
&lt;li&gt;Archives WordPress files&lt;/li&gt;
&lt;li&gt;Copies Traefik certificates&lt;/li&gt;
&lt;li&gt;Deletes backups older than 7 days&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I also created a restore script, which let me choose which parts to restore (DB, WordPress files, or certificates).&lt;/p&gt;

&lt;p&gt;This way, if anything breaks, I can recover quickly.&lt;/p&gt;

&lt;h3&gt;Step 4: Managing Secrets&lt;/h3&gt;

&lt;p&gt;Right now, secrets live in a .env file, which isn’t ideal for production. The next step is to move them into a proper secrets manager (e.g., Docker secrets, HashiCorp Vault, or a cloud provider’s secret manager).&lt;/p&gt;

&lt;p&gt;That will make the setup more secure.&lt;/p&gt;

&lt;h3&gt;Final Thoughts&lt;/h3&gt;

&lt;p&gt;This little project started as “let’s host WordPress on a VPS” and turned into a practical exercise in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containerization&lt;/li&gt;
&lt;li&gt;Reverse proxying with Traefik&lt;/li&gt;
&lt;li&gt;SSL automation&lt;/li&gt;
&lt;li&gt;Backups &amp;amp; recovery&lt;/li&gt;
&lt;li&gt;Secrets management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It was a great hands-on way to think about infrastructure from scratch.&lt;/p&gt;

&lt;p&gt;If you found this helpful, drop a like, leave a comment, or share it with a friend who might enjoy it. 🚀&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating YouTube Shorts with Python and AI</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Sat, 26 Jul 2025 03:30:21 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/automating-youtube-shorts-with-python-and-ai-4i3</link>
      <guid>https://dev.to/gu1lh3rm3_x/automating-youtube-shorts-with-python-and-ai-4i3</guid>
      <description>&lt;p&gt;Once again, I found myself a bit bored — and when that happens, I usually end up building something random. After chatting with an AI for a while, I decided what my next mini project would be: automating the creation of short videos.&lt;/p&gt;

&lt;p&gt;The initial idea was simple:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Use AI to generate a short, curiosity-driven text

Generate an image related to the topic

Convert the text to speech using tools like gTTS or ElevenLabs

Combine everything into a short video
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🛠️ First Attempt: Static Image + Audio&lt;/p&gt;

&lt;p&gt;Here’s the basic code that generates a short video from an image and an audio file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from moviepy import ImageClip, AudioFileClip, CompositeVideoClip

def create_video(image_path, audio_path):
    audio = AudioFileClip(audio_path)
    image = ImageClip(image_path).with_duration(audio.duration).resized(height=1280)
    image = image.with_position("center").with_audio(audio)

    video = CompositeVideoClip([image])
    video_path = "content/short.mp4"
    video.write_videofile(video_path, fps=24)
    return video_path

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result? It worked — but it was just a static image with background narration.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jgwvngevo2mspx7vpra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jgwvngevo2mspx7vpra.png" alt="result of the video" width="407" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI-generated text:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Cleopatra lived closer in time to the invention of the iPhone than to the building of the Great Pyramid.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Image suggestion:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;iPhone&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not bad for a first try — a basic but functional automated Shorts generator!&lt;/p&gt;

&lt;p&gt;📝 Adding Text to the Video&lt;/p&gt;

&lt;p&gt;Next, I wanted to overlay the generated text on top of the video. I ran into a small font issue, which I fixed by explicitly setting a font path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;font_path = '/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf'
if not os.path.exists(font_path):
    font_path = None  # fallback if not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, the video creation function evolved:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from moviepy import ImageClip, AudioFileClip, CompositeVideoClip, TextClip

def create_video(image_path, audio_path, text):
    audio = AudioFileClip(audio_path)
    image = ImageClip(image_path).with_duration(audio.duration).resized(height=1280)
    image = image.with_position("center").with_audio(audio)

    font_path = '/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf'
    if not os.path.exists(font_path):
        font_path = None

    txt_clip = TextClip(
        text=text,
        font=font_path,
        font_size=48,
        color='white'
    ).with_position('top').with_duration(audio.duration)

    video = CompositeVideoClip([image, txt_clip])
    video_path = "content/short.mp4"
    video.write_videofile(video_path, fps=24)
    return video_path

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we had videos with text overlays!&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j66vvxv2mjzxt6barhm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j66vvxv2mjzxt6barhm.png" alt="example of video with text on it" width="441" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Still not perfect — the text stayed static throughout the video — but progress nonetheless.&lt;/p&gt;

&lt;p&gt;🎬 Making Text Dynamic (Like Subtitles)&lt;/p&gt;

&lt;p&gt;I wanted the text to appear gradually, in sync with the narration. I decided to break the text into sentences and display each one sequentially. Here’s how I handled that:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import re

# Split text into sentences
sentences = re.split(r'(?&amp;lt;=[.!?]) +', text)
n = len(sentences)
duration_per_sentence = audio.duration / n if n &amp;gt; 0 else audio.duration

subtitle_clips = []
for i, sentence in enumerate(sentences):
    start = i * duration_per_sentence
    end = start + duration_per_sentence
    subtitle = TextClip(
        text=sentence,
        font=font_path,
        font_size=20,
        color='black'
    ).with_position('center').with_start(start).with_duration(duration_per_sentence)
    subtitle_clips.append(subtitle)

video = CompositeVideoClip([image] + subtitle_clips)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result? A much more engaging video with properly timed subtitles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn19h5hnd9mw6qnuhtatg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn19h5hnd9mw6qnuhtatg.png" alt="example of video with correct subtitles" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧠 What I’ve Learned So Far&lt;/p&gt;

&lt;p&gt;This mini project isn’t finished — but here’s what I’ve picked up along the way:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🎥 How to create videos in Python using moviepy

🗣️ How to convert text to speech with gTTS and ElevenLabs

🕒 How to sync subtitles with narration

🤖 How to integrate simple AI-generated content

🖼️ How to add multiple images in a slideshow format (WIP)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There’s still plenty of room to improve — syncing voice and subtitles more precisely, adding transitions, animations, or even background music — but this foundation already opens up a lot of possibilities.&lt;/p&gt;

&lt;p&gt;If you’re curious, the project is on GitHub:&lt;br&gt;
👉 &lt;a href="https://github.com/Guischweizer/shortomated" rel="noopener noreferrer"&gt;shortomated GitHub repo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💭 Final Thoughts&lt;br&gt;
Automation is becoming increasingly accessible — especially with the help of AI. While this project isn't fully AI-powered, it demonstrates how combining tools like gTTS, Unsplash API, and moviepy can produce impressive results with relatively little effort.&lt;/p&gt;

&lt;p&gt;Hope you found this article useful or at least a little inspiring. Stay curious — and keep building!&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>automation</category>
      <category>videocreation</category>
    </item>
    <item>
      <title>How TDD Can Save You from Shipping Broken Code</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Thu, 10 Jul 2025 22:30:00 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/how-tdd-can-save-you-from-shipping-broken-code-34i4</link>
      <guid>https://dev.to/gu1lh3rm3_x/how-tdd-can-save-you-from-shipping-broken-code-34i4</guid>
      <description>&lt;p&gt;Whenever we start thinking about testing our applications, the same questions always come up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How many tests are enough?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should I only write end-to-end tests?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Are unit tests enough?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What about the test pyramid — am I even doing it right?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Well, here’s the easy (and annoying) answer: it depends. But one thing is always true — when we have good tests, we have more robust systems.&lt;/p&gt;

&lt;p&gt;That’s a fact.&lt;/p&gt;

&lt;p&gt;You can’t rely solely on QA to catch everything. You can’t even rely on yourself. As developers, we naturally follow the “happy path” and often overlook edge cases. That’s why having automated tests in place is so crucial — they protect your system and give you immediate feedback when something breaks.&lt;/p&gt;

&lt;p&gt;🧱 Start Small, Start Smart&lt;br&gt;
You might be thinking:&lt;br&gt;
"I have a huge project, and we don’t even have the initial test setup correctly!"&lt;/p&gt;

&lt;p&gt;Don’t worry — I’ve been there. And the best way to get started is to start simple.&lt;/p&gt;

&lt;p&gt;Begin by defining which types of tests you want to introduce first. My suggestion? Start with small unit tests.&lt;/p&gt;

&lt;p&gt;Focus on functions that are heavily used across your codebase — maybe a validation function or a core utility. These are the areas where a single change can ripple through your entire application.&lt;/p&gt;

&lt;p&gt;Let’s say someone decides to refactor one of those key functions.&lt;br&gt;
Without tests, how will you know the behavior hasn’t changed? How long will it take before a user notices something’s broken — or worse, before production crashes?&lt;/p&gt;

&lt;p&gt;🚨 Tests Catch Problems Immediately&lt;br&gt;
Once you have a basic test setup and start covering the most critical parts of your system — and assuming you’ve integrated those tests into your CI/CD pipeline — something magical happens:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You get immediate feedback.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If something breaks, you’ll know right away. And when that happens, you can fix the issue before it reaches production, not after a flood of user complaints.&lt;/p&gt;

&lt;p&gt;🧠 TDD: Think Before You Build&lt;br&gt;
Now let’s talk about TDD — Test-Driven Development.&lt;/p&gt;

&lt;p&gt;TDD means writing your tests before you write the actual implementation. Yeah, I know — it sounds backwards. But there’s a reason why so many seasoned devs advocate for it.&lt;/p&gt;

&lt;p&gt;When you write tests first, you force your brain to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Think about edge cases before coding.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fully understand the task before jumping in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consider how users might misuse or break your feature.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Anticipate security and data issues upfront.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, you’re not just writing code — you’re designing behavior.&lt;/p&gt;
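
&lt;p&gt;As a tiny sketch of what this looks like in practice, with the test written before the implementation (the validator and its rules are hypothetical):&lt;/p&gt;

```python
import re

# TDD sketch: the test exists before the implementation does.
# The validator and its rules are hypothetical.
def test_validate_username():
    assert validate_username("ada_lovelace")       # happy path
    assert not validate_username("")               # edge: empty
    assert not validate_username("a" * 40)         # edge: too long
    assert not validate_username("drop table;--")  # edge: bad characters

# Written second, with the edge cases already pinned down:
def validate_username(name):
    return bool(re.fullmatch(r"[a-z0-9_]{3,20}", name))

test_validate_username()  # a runner like pytest would pick this up
```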

&lt;p&gt;🤔 Should You Do TDD?&lt;br&gt;
That’s a tough one. I won’t give a blanket “yes,” because it depends on your project, your team, and your deadlines.&lt;/p&gt;

&lt;p&gt;But you should absolutely consider trying it.&lt;/p&gt;

&lt;p&gt;At first, it might feel unnecessary or slow. But as your system grows, you’ll start realizing something important:&lt;br&gt;
&lt;strong&gt;Every single test you wrote is holding your system together like glue.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even if you don’t start with full-on TDD, writing tests right after completing a task is still a huge win. You’ll build confidence and avoid regressions later.&lt;/p&gt;

&lt;p&gt;💬 Testing is Culture — But Also Survival&lt;br&gt;
In some companies, testing is seen as a cultural thing — something dev teams “believe in.” But it’s more than that.&lt;/p&gt;

&lt;p&gt;It’s a survival strategy for long-term projects. It’s what keeps your app stable as new features get added and refactors happen.&lt;/p&gt;

&lt;p&gt;Testing (and TDD in particular) isn’t just about code quality — it’s about peace of mind.&lt;/p&gt;

&lt;p&gt;🚀 Final Thoughts&lt;br&gt;
If you’ve never tried TDD before, give it a shot. Pick a small feature this week, and try writing the tests first. You might be surprised how much clarity it gives you — and how much time it saves you later.&lt;/p&gt;

&lt;p&gt;Thanks for reading — and happy testing!&lt;/p&gt;

</description>
      <category>testdrivendevelopment</category>
      <category>cleancode</category>
      <category>programming</category>
      <category>softwaretesting</category>
    </item>
    <item>
      <title>How I automated recon with LLMs + nmap?</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Wed, 09 Jul 2025 13:36:41 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/how-i-automated-recon-with-llms-nmap-45g7</link>
      <guid>https://dev.to/gu1lh3rm3_x/how-i-automated-recon-with-llms-nmap-45g7</guid>
      <description>&lt;p&gt;Since diving into CTFs more seriously, I found myself stuck in the same loop:&lt;br&gt;
🔍 Run Nmap&lt;br&gt;
📄 Read the results&lt;br&gt;
🤖 Ask GPT for insights&lt;/p&gt;

&lt;p&gt;One day I thought:&lt;br&gt;
"Why not automate this?"&lt;br&gt;
What if I could create a tool that runs Nmap in the background and feeds the output directly into an AI agent?&lt;/p&gt;

&lt;p&gt;That was the spark. I started chatting with GPT to figure out how to approach it. My goal wasn’t to reinvent Nmap — I wanted to build on top of it, keeping all its power intact.&lt;/p&gt;

&lt;p&gt;I chose Python to build the CLI, keeping things simple. The flow looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run Nmap with some default parameters&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Parse the results and organize them into a clean table&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Send a prompt with context to an LLM (I used Gemini because it's free 😄)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make an API call to Vulners to look up known vulnerabilities&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add a final touch: some ASCII art for fun 🎨&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
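&lt;p&gt;The first two steps — run Nmap, parse, and turn the results into LLM context — can be sketched like this. This is a simplified illustration assuming Nmap's grepable output format (&lt;code&gt;-oG&lt;/code&gt;); the function names are hypothetical, and the real tool is in the repo linked below.&lt;/p&gt;

```python
import re

def parse_grepable(nmap_output: str) -> dict:
    """Parse Nmap grepable (-oG) output into {host: [(port, service), ...]}."""
    results = {}
    for line in nmap_output.splitlines():
        m = re.search(r"Host: (\S+).*Ports: (.*)", line)
        if not m:
            continue
        host, ports_field = m.groups()
        open_ports = []
        for entry in ports_field.split(","):
            # Each entry looks like: 22/open/tcp//ssh///
            fields = entry.strip().split("/")
            if len(fields) >= 5 and fields[1] == "open":
                open_ports.append((int(fields[0]), fields[4]))
        results[host] = open_ports
    return results

def build_prompt(results: dict) -> str:
    """Turn parsed scan results into context for the LLM call."""
    lines = ["You are a pentest assistant. Suggest next steps for these open ports:"]
    for host, ports in results.items():
        for port, service in ports:
            lines.append(f"- {host}: {port}/{service}")
    return "\n".join(lines)
```

&lt;p&gt;From there it's one API call to the LLM with the prompt, and another to Vulners with the service names.&lt;/p&gt;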

&lt;p&gt;And that was it — I ended up with a tool that pulls recon data, enriches it with external sources, and asks an AI to help me interpret it.&lt;/p&gt;

&lt;p&gt;It was a super fun project, and I learned a lot by building it from scratch.&lt;/p&gt;

&lt;p&gt;🚀 Curious? Check it out at the link below!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Guischweizer/net-AINALYZER" rel="noopener noreferrer"&gt;https://github.com/Guischweizer/net-AINALYZER&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Why does observability matter?</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Fri, 04 Jul 2025 11:30:00 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/why-does-observability-matter-199c</link>
      <guid>https://dev.to/gu1lh3rm3_x/why-does-observability-matter-199c</guid>
      <description>&lt;p&gt;When we think about software development, we often focus on features, code quality, and scalability — but we sometimes forget one of the most critical aspects: observability.&lt;/p&gt;

&lt;p&gt;Observability is the ability to measure the internal state of a system based on the data it produces — like logs, metrics, and traces. It’s what enables us to answer vital questions about our systems without shipping new code or SSH-ing into servers.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How much CPU and memory are our services using?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which endpoints are consuming the most resources?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Are we handling traffic efficiently, or is our system under strain?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Could we scale down and save money, or do we need to prepare for 3x the load?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But observability isn't just about cool dashboards and fancy graphs. It's about proactive control. It's about being alerted before something goes wrong — not after users are already complaining.&lt;/p&gt;

&lt;p&gt;By setting up proper alerts (e.g., high CPU, memory usage, request latency), you can catch problems early and react before incidents escalate.&lt;/p&gt;
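&lt;p&gt;A good alert fires on a sustained breach, not a momentary spike — the idea Prometheus expresses with a &lt;code&gt;for:&lt;/code&gt; duration on its alerting rules. Here's a tiny Python sketch of that logic; &lt;code&gt;ThresholdAlert&lt;/code&gt; is a hypothetical name, just to show the mechanism.&lt;/p&gt;

```python
from collections import deque

class ThresholdAlert:
    """Fire only when a metric stays above a threshold for `window`
    consecutive samples, so brief spikes don't cause flapping alerts."""

    def __init__(self, threshold: float, window: int):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, value: float) -> bool:
        self.samples.append(value)
        # True only once the window is full and every sample breaches.
        return (len(self.samples) == self.samples.maxlen
                and all(v > self.threshold for v in self.samples))
```

&lt;p&gt;Three CPU readings above 80% in a row would page you; a single spike would not.&lt;/p&gt;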

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftn46lkv96i5uufvsvwuc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftn46lkv96i5uufvsvwuc.png" alt="Simple Diagram Example" width="716" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools like Prometheus and Grafana make it easier to collect and visualize metrics from all your services, giving you a centralized view of your infrastructure's health. Once you have that in place, it opens up a whole new layer of responsibility: incident response and recovery plans. What happens if one service fails? Are we prepared to handle it?&lt;/p&gt;

&lt;p&gt;For systems with real users and business impact, observability isn't optional — it's a necessity. It helps you move from a reactive to a proactive mindset.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How would I scale a project</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Thu, 03 Jul 2025 10:30:00 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/how-would-i-scale-a-project-48d3</link>
      <guid>https://dev.to/gu1lh3rm3_x/how-would-i-scale-a-project-48d3</guid>
      <description>&lt;p&gt;Scalability is one of the most important aspects of software development — but what does it really mean?&lt;/p&gt;

&lt;p&gt;The answer often depends on the project you're working on. For example, let’s say you're building a small e-commerce site for a local business. How much does it really need to scale? Is it necessary to handle millions of requests per minute? Probably not.&lt;/p&gt;

&lt;p&gt;But now imagine you're working on a platform like Facebook or Google — you're dealing with millions of simultaneous requests, users, services, and data. At this level, your system must be prepared to scale reliably and efficiently.&lt;/p&gt;

&lt;p&gt;Start Simple&lt;br&gt;
Below is a very simple example of a common architecture pattern:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzf7fptkc7kcq7ancxyb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgzf7fptkc7kcq7ancxyb.png" alt="Simple architecture image" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This represents a basic flow: a client sends a request to an API, which processes business logic, validates data, and interacts with the database.&lt;/p&gt;

&lt;p&gt;This kind of architecture is fine for small or personal projects — it's easy to understand, quick to deploy, and sufficient for limited traffic.&lt;/p&gt;

&lt;p&gt;Scaling Up: What Changes?&lt;br&gt;
When we move toward a robust and scalable architecture, several new concerns come into play — especially around availability, performance, and security.&lt;/p&gt;

&lt;p&gt;Some critical components to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Load Balancer – Distributes traffic across multiple servers to prevent overload.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cache – Reduces database load by storing frequent responses (e.g., Redis, CDN).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observability – Monitoring, logging, and tracing to understand system behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security – Protecting data and services at all levels of the stack.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Latency – Physical server location matters. Regional distribution reduces delay.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Availability – Ensuring the system is always accessible through redundancy and backups.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
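&lt;p&gt;To illustrate the caching idea from the list above, here's a minimal in-process TTL cache in Python. It's only a sketch — at real scale you'd reach for Redis or a CDN — but it shows the core trade: repeated reads skip the database, and staleness is bounded by the TTL.&lt;/p&gt;

```python
import time

class TTLCache:
    """Tiny in-process cache: entries expire after ttl seconds."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:  # expired: evict and report a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

&lt;p&gt;A miss falls through to the database, and the result gets cached for the next request.&lt;/p&gt;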

&lt;p&gt;A More Scalable Architecture&lt;br&gt;
Here's what a more advanced setup might look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe52fcacdx7k7zsn9cjt3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe52fcacdx7k7zsn9cjt3.png" alt="Complex architecture image" width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With more components in place, the complexity increases significantly. You’ll need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Analyze request times and performance bottlenecks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify content that can be cached to reduce database hits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure databases have replication and automated backups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up monitoring systems to catch failures before users notice them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Plan for failure: a good architecture expects components to break and handles it gracefully.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But Here's the Catch: Trade-Offs&lt;br&gt;
Scaling a project isn't just a technical challenge — it's also about managing compromises. Every decision adds complexity, cost, or both.&lt;/p&gt;

&lt;p&gt;Here are some common trade-offs:&lt;/p&gt;

&lt;p&gt;🧠 Complexity vs Simplicity:&lt;br&gt;
A scalable system is harder to build, test, and understand. More moving parts = more potential points of failure.&lt;/p&gt;

&lt;p&gt;💸 Performance vs Cost:&lt;br&gt;
High availability, redundancy, and distributed regions all add cloud costs. You need to ask: Is it worth it at my current scale?&lt;/p&gt;

&lt;p&gt;🧪 Speed vs Reliability:&lt;br&gt;
Introducing caching or async processing improves speed but can introduce eventual consistency issues.&lt;/p&gt;

&lt;p&gt;🔐 Access vs Security:&lt;br&gt;
Scaling often means exposing more APIs, services, and endpoints — all of which need proper access control and protection.&lt;/p&gt;

&lt;p&gt;These trade-offs mean that scaling should be intentional, not automatic. Just because you can build a system like Netflix doesn’t mean you should — especially if your needs are much simpler.&lt;/p&gt;

&lt;p&gt;Final Thoughts&lt;/p&gt;

&lt;p&gt;This is just a glimpse of how quickly software systems can become complex as they grow. Starting simple is fine — but as your application gains users and responsibilities, your architecture must evolve with it.&lt;/p&gt;

&lt;p&gt;Next time you’re starting a project, ask yourself:&lt;/p&gt;

&lt;p&gt;“What would it take for this to scale 10x? 100x?”&lt;/p&gt;

&lt;p&gt;And more importantly:&lt;/p&gt;

&lt;p&gt;“What am I willing to trade to make that happen?”&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Should You Create Your Own Tools?</title>
      <dc:creator>gu1lh3rm3_x</dc:creator>
      <pubDate>Wed, 02 Jul 2025 21:30:00 +0000</pubDate>
      <link>https://dev.to/gu1lh3rm3_x/should-you-create-your-own-tools-2np</link>
      <guid>https://dev.to/gu1lh3rm3_x/should-you-create-your-own-tools-2np</guid>
      <description>&lt;p&gt;As developers, engineers, or hackers, we often find ourselves surrounded by a vast array of existing tools. Most of the time, it makes perfect sense to use what’s already available — it saves time, and it's battle-tested.&lt;/p&gt;

&lt;p&gt;But what if you're working on a personal project?&lt;br&gt;
Why should you rely only on what’s already built?&lt;br&gt;
Why not create something of your own — tailored to your needs?&lt;/p&gt;

&lt;p&gt;Let’s be clear: if you're working on a company project where time is critical, building a tool from scratch is usually not the best option. The cost of reinventing the wheel might be too high.&lt;br&gt;
But when you're on your own time, learning and experimenting, building your own tool can be one of the most valuable things you can do.&lt;/p&gt;

&lt;p&gt;We’ve become so focused on efficiency and productivity that we often forget the fun part of being a developer: the act of creating.&lt;/p&gt;

&lt;p&gt;When you build something from scratch, you’re forced to deeply understand the problem it solves. You’re also free to pick the language, framework, and approach — and that’s where true learning happens. You’ll feed your curiosity, grow your creativity, and maybe even come up with something useful for others.&lt;/p&gt;

&lt;p&gt;So next time you want to learn a new language or skill, don’t just build a "hello world" or basic CRUD.&lt;br&gt;
Build something you think is cool.&lt;br&gt;
And don't be afraid to ask for help — from a friend, a mentor, or even AI like ChatGPT.&lt;/p&gt;

&lt;p&gt;Make things. Break things. Learn things. That’s what being a developer is all about.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
