Maxim Osovsky
I Was Building a Cloud Video Service. YouTube Turned Me Into an IP Trafficker.

The Irony

I was building a simple cloud service. Paste YouTube links, server merges them, you get a video. Innocent. Wholesome. The kind of project you'd show your mom.

Then YouTube said no.

Not because I was doing anything wrong. Not because I violated their API terms. But because my server's IP address came from a datacenter instead of someone's apartment.

So now, to make this work, I'll have to buy residential IP traffic from a proxy provider, route my downloads through random households, and pray that YouTube's bot detection doesn't notice. My "simple video merging service" is quietly becoming a proxy arbitrage operation that will live in the gray zone of YouTube's Terms of Service.

This is that story.


Context: What I'm Building

Merge Video is an open-source service I'm building to merge video files in the cloud. The idea: you send links or upload files, go to sleep, and wake up to a merged video on YouTube with a link in your email.

I film long strategy sessions — 6 to 8 hours each. My camera records in 20-minute segments. After each session I have dozens of files that need to be joined. I spent over $1,500 outsourcing this before building the service myself.

Maxim Osovsky during a strategy session

Here's the thing: after a session, I rarely have time to edit. So I upload the raw segments straight to YouTube — unmerged, unedited — just to free up disk space. YouTube becomes my storage. Over the years I've accumulated hundreds of raw clips organized into playlists, each playlist representing one session that still needs to be merged into a single video for the archive.

That's why the "YouTube URLs" mode exists. It will let you paste a playlist link and the server will download all the clips, merge them, and upload the result back as one video. It worked perfectly on localhost. On the cloud server:

```text
ERROR: Sign in to confirm you're not a bot.
```

In Part 1, I solved the ffmpeg bugs. In Part 4, I solved cookie extraction. This is Part 5: getting downloads to actually work from a server.


The Real Problem: It's the IP, Not the Cookies

After extracting cookies (see Part 4), I uploaded cookies.txt to the server. Same yt-dlp command that worked on localhost:

```bash
yt-dlp --cookies cookies.txt -f "best[ext=mp4]" "https://youtu.be/VIDEO_ID"
```

Result on localhost: ✅ Downloaded in 30 seconds

Result on cloud server: ❌ Same bot detection error

The cookies were valid. The command was identical. The only difference: the IP address. My laptop has a residential IP from my ISP. The server has a datacenter IP from Alibaba Cloud.

YouTube doesn't just check cookies. It checks where the request comes from. Datacenter IPs are flagged automatically — no amount of cookies, headers, or user-agent strings will help.
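This kind of network-level classification is cheap to implement, which is part of why it's so airtight. Cloud providers publish their IP ranges (AWS, for example, publishes `ip-ranges.json`), so deciding "datacenter or not" is a single lookup. Here's a minimal sketch of the idea using Python's standard library; the CIDR blocks below are illustrative samples I picked for the example, not a real blocklist:

```python
# Sketch: why a datacenter IP is trivially identifiable.
# The CIDRs below are illustrative samples, not a real reputation list.
from ipaddress import ip_address, ip_network

SAMPLE_DATACENTER_RANGES = [
    ip_network("3.0.0.0/8"),      # example: Amazon-allocated space
    ip_network("34.64.0.0/10"),   # example: Google Cloud space
    ip_network("47.74.0.0/15"),   # example: Alibaba Cloud space
]

def looks_like_datacenter(ip: str) -> bool:
    # One containment check per known cloud range
    addr = ip_address(ip)
    return any(addr in net for net in SAMPLE_DATACENTER_RANGES)

print(looks_like_datacenter("47.74.1.1"))    # → True (falls in the Alibaba-style range)
print(looks_like_datacenter("203.0.113.5"))  # → False (documentation range, not listed)
```

A real reputation database also scores residential ranges by abuse history, but the asymmetry is the point: the server's IP gives the game away before a single cookie is read.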


7 Methods Tested

I spent two days testing every viable approach. Here's the full scorecard:

| # | Method | How it works | Result |
|---|--------|--------------|--------|
| 1 | yt-dlp + cookies | Direct download with browser cookies | ❌ IP blocked |
| 2 | Cobalt API (public) | Third-party download API | ❌ Requires Turnstile CAPTCHA |
| 3 | Self-hosted Cobalt | Run your own Cobalt instance | ❌ Signature decipher broken |
| 4 | RapidAPI YouTube downloaders | Commercial download APIs | ❌ Rate-limited, unreliable |
| 5 | SaveFrom / similar services | Web-based download tools | ❌ Quality limits, no API |
| 6 | YouTube Data API direct download | Google's official API | ❌ No download endpoint exists |
| 7 | Residential proxy + yt-dlp | Route traffic through residential IPs | ✅ Works |

Every method that tries to download from a datacenter IP fails. The only solution is to not have a datacenter IP — or to commit to the charade of pretending you don't.

Let me walk you through the graveyard.


Method 1: yt-dlp + Cookies (Failed)

The setup from Part 2. Cookies extracted from Firefox, uploaded to server, yt-dlp configured:

```bash
yt-dlp --cookies cookies.txt --js-runtimes node \
  -f "best[ext=mp4]" "https://youtu.be/VIDEO_ID"
```

On localhost with a residential IP: instant download. On the cloud server:

```text
ERROR: Sign in to confirm you're not a bot.
```

Diagnosis: YouTube allows or blocks based on IP reputation before even checking cookies. If the IP is flagged as datacenter, the request is rejected at the network level.


Method 2: Cobalt API (Failed)

Cobalt is a popular open-source media downloader. Their public API at api.cobalt.tools seemed perfect — offload the download to their infrastructure.

```bash
curl -X POST "https://api.cobalt.tools" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://youtu.be/VIDEO_ID"}'
```

Result: error.api.auth.turnstile.missing

Cobalt added Cloudflare Turnstile CAPTCHA to prevent automated API abuse. Every request now requires solving a browser challenge — impossible from a server script.


Method 3: Self-Hosted Cobalt (Failed)

If the public API requires CAPTCHA, run your own instance. I deployed Cobalt via Docker on the server:

```yaml
services:
  cobalt-api:
    image: ghcr.io/imputnet/cobalt:10
    ports:
      - "9000:9000"
    environment:
      API_AUTH_REQUIRED: "false"
```

The API responded, accepted URLs, and started processing. Then:

```text
error: ErrorCantProcess - couldn't process your request
```

Diagnosis: Cobalt uses its own YouTube signature deciphering algorithm. YouTube frequently changes their obfuscation, and the self-hosted version had a stale decipher implementation. The maintainers patch the public instance frequently, but a self-hosted snapshot falls behind within days.


Methods 4–6: Commercial APIs and Direct Downloads (All Failed)

| Method | Why it failed |
|--------|---------------|
| RapidAPI downloaders | Rate limits, intermittent 403s, quality caps. Most are wrappers around the same broken approaches. |
| SaveFrom / y2mate / similar | Designed for browser use. No reliable API, quality limits, heavy ads, services go down often. |
| YouTube Data API | Google's official API has endpoints for metadata, search, and uploads — but no download endpoint. By design. |

None of these solve the core problem: YouTube blocks server-originating requests. The entire ecosystem of "YouTube downloader" services is a game of whack-a-mole against Google's engineering team. And Google has more engineers.


Method 7: Residential Proxy ✅

The only approach that worked. And the one I was hoping to avoid.

To download a YouTube video from a cloud server, you need to pretend your server is someone's laptop in a suburban apartment. That's what residential proxies do. That's what I now have to pay for monthly. Welcome to the internet.

What's a residential proxy?

Instead of your request going directly from the server (datacenter IP) to YouTube, it goes through a proxy server that uses an IP assigned by a real ISP to a real household. YouTube sees a residential IP and treats it as a normal user.

```text
Server (datacenter IP) → Proxy (residential IP) → YouTube
```
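In code, threading the proxy into the pipeline is a one-line change to the download command. Here's a minimal sketch of the pattern (the `PROXY_URL` value and gateway hostname are placeholders; `--proxy` and `--cookies` are standard yt-dlp flags):

```python
# Sketch: assembling a yt-dlp command with an optional SOCKS5 proxy.
# PROXY_URL and the gateway hostname are placeholders for illustration.
import os

def build_download_cmd(url: str, cookies_path: str = "cookies.txt") -> list[str]:
    cmd = ["yt-dlp", "--cookies", cookies_path, "-f", "best[ext=mp4]"]
    proxy = os.environ.get("PROXY_URL", "")
    if proxy:
        # socks5h:// routes DNS resolution through the proxy too,
        # so no YouTube lookups leak from the datacenter network
        cmd += ["--proxy", proxy]
    cmd.append(url)
    return cmd

os.environ["PROXY_URL"] = "socks5h://user:pass@gate.example.com:7000"
print(build_download_cmd("https://youtu.be/VIDEO_ID"))
```

When `PROXY_URL` is unset, the command falls back to a direct download, which keeps localhost development free.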

Setup

I chose Decodo for residential proxies — they support SOCKS5, have 115M+ IPs across 195+ countries, and offer pay-per-GB pricing.

Backend configuration:

```python
# config.py
import os
from pathlib import Path

PROXY_URL = os.environ.get("PROXY_URL", "")
COOKIES_FILE = Path("cookies.txt")  # uploaded to the server in Part 4

# video.py — in both expand_urls() and download_videos()
cmd = ["yt-dlp", "--cookies", str(COOKIES_FILE),
       "--js-runtimes", "node",
       "-f", "best[ext=mp4]"]

if PROXY_URL:
    cmd.extend(["--proxy", PROXY_URL])
```

Environment:

```bash
# .env on the server
PROXY_URL=socks5h://user-USERNAME-session-1:PASSWORD@gate.proxy-provider.com:7000
```

Dependencies:

```bash
pip install pysocks  # required for SOCKS5 proxy support
```

PySocks is required for yt-dlp to work with SOCKS5 proxies.
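The `expand_urls()` function mentioned above also needs the proxy, since YouTube blocks playlist metadata requests from datacenter IPs just like downloads. As a rough sketch of how playlist expansion can work (the function names are my own; `--flat-playlist` and `--print url` are standard yt-dlp flags that list a playlist's entries without downloading anything):

```python
# Sketch: expanding a playlist URL into individual video URLs via yt-dlp.
# Helper names are illustrative; the flags are standard yt-dlp options.
import os
import subprocess

def playlist_cmd(playlist_url: str, proxy: str = "") -> list[str]:
    # --flat-playlist lists entries without resolving each video;
    # --print url emits one video URL per line on stdout
    cmd = ["yt-dlp", "--flat-playlist", "--print", "url"]
    if proxy:
        cmd += ["--proxy", proxy]
    return cmd + [playlist_url]

def expand_urls(playlist_url: str) -> list[str]:
    cmd = playlist_cmd(playlist_url, os.environ.get("PROXY_URL", ""))
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return [u for u in out.stdout.splitlines() if u.strip()]
```

Routing both expansion and download through the same proxy session also keeps the requests looking like they come from one consistent "user".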

First successful test

```bash
yt-dlp --proxy "socks5h://user-USERNAME-session-1:PASSWORD@gate.proxy-provider.com:7000" \
  --cookies cookies.txt --js-runtimes node \
  -f "best[ext=mp4]" "https://youtu.be/VIDEO_ID"
```

```text
[youtube] Extracting URL: https://youtu.be/VIDEO_ID
[youtube] VIDEO_ID: Downloading webpage
[youtube] VIDEO_ID: Downloading ios player API JSON
[download] Destination: video.mp4
[download] 100% of 403.12MiB in 00:02:15
```

It worked. Same server, same datacenter IP, same cookies. The only change: traffic routed through a residential proxy.


Cost Analysis: Congratulations, You're a Bandwidth Broker

Here's where the absurdity becomes quantifiable. I wanted to build a free, open-source tool. Instead I'll have a variable cost per download that scales with video length. Every time someone pastes a YouTube link into my service, I'll have to purchase residential bandwidth from a proxy provider and route it through some stranger's home internet connection.

Let that sink in.

| Item | Cost |
|------|------|
| Residential proxy plan (2 GB) | $12/month |
| Pay-as-you-go rate | $3.50/GB |
| Typical video download (17 min, 1080p) | ~400 MB |
| Two videos merged (one E2E test) | ~870 MB |
| Downloads per 2 GB plan | ~5 videos (about 2 merge jobs) |

Cost per merge job

At $12 for 2 GB, that's an effective $6.00/GB: roughly $4.80 per merge, assuming two ~400 MB videos per job.
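Spelled out, the arithmetic from the table's figures:

```python
# Cost-per-merge from the plan figures: 2 GB for $12/month,
# two ~400 MB clips per merge job.
plan_price, plan_gb = 12.0, 2.0
per_gb = plan_price / plan_gb              # $6.00 per GB effective
merge_gb = 2 * 0.4                         # two ~400 MB videos
cost_per_merge = per_gb * merge_gb
print(f"${cost_per_merge:.2f} per merge")  # → $4.80 per merge
```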

Compare with my old approach: $30–50 per merge outsourced to a human editor.

The economics work, but it means my "free self-hosted service" will have a variable cost per download that scales with video size. Every time a user pastes a YouTube link, I'll be buying residential IP traffic.

Scaling considerations

| Plan | $/GB | Monthly cost | Videos (~400 MB each) |
|------|------|--------------|-----------------------|
| 2 GB | $6.00 | $12 | 5 |
| 8 GB | $1.80 | $14.50 | 20 |
| 25 GB | $1.70 | $42.90 | 62 |
| Pay-as-you-go | $3.50 | Variable | Variable |

For personal use (5–10 merges/month), the 2 GB plan is fine. For a public service, proxy costs will become the dominant expense — not compute, not storage, not bandwidth. The most expensive part of my video merging service will be pretending to be a regular person on the internet.

I started this project to save $1,500/year on video editing outsourcing. Now I'll be spending $144/year on residential IP traffic just to download files that are publicly available on YouTube. The economics work, but the irony is thick enough to cut.


The Strategic Lesson

YouTube's bot detection operates at two different levels:

| Level | What's checked | Solution |
|-------|----------------|----------|
| Application | Cookies, headers, user-agent, JS challenges | Cookies + `--js-runtimes node` |
| Network | IP reputation (datacenter vs. residential) | Residential proxy |

I spent Part 4 solving the application layer — extracting cookies, configuring JavaScript runtimes. All necessary, but not sufficient.

The network layer is the harder barrier. YouTube maintains IP reputation databases. Datacenter IP ranges from major cloud providers (AWS, GCP, Azure, Alibaba, etc.) are automatically flagged. No amount of application-layer configuration changes this.

Can you avoid proxies?

Surely there's a way to not pay strangers for their IP addresses?

| Alternative | Feasibility |
|-------------|-------------|
| Run the server from home (residential IP) | ✅ Works, but congrats — you've un-invented the cloud |
| Use a VPS from a residential ISP | ⚠️ Rare, expensive, may still get flagged |
| Set up a VPN tunnel to your home network | ⚠️ Works but adds latency, depends on home uplink |
| Download locally, upload to server, merge | ⚠️ Works but defeats the purpose of cloud processing |
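For completeness, the VPN-tunnel row would look roughly like this with WireGuard: the cloud server becomes a peer that routes all its traffic through a box sitting on your home network. This is a sketch only; keys, addresses, and the endpoint are placeholders, and you'd still be limited by your home uplink speed.

```ini
# /etc/wireguard/wg0.conf on the cloud server (illustrative values)
[Interface]
PrivateKey = <server-private-key>
Address = 10.0.0.2/32

[Peer]
PublicKey = <home-box-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 0.0.0.0/0    # route everything through the home uplink
PersistentKeepalive = 25  # keep the NAT mapping alive from behind home routers
```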

For a production service, residential proxies are the pragmatic choice. They're the only way to reliably download from YouTube on cloud infrastructure.

So yes — to run a cloud video service that touches YouTube, you must pay a monthly fee to disguise your cloud server as a home computer. The cloud, pretending not to be the cloud. Peak 2026.


Update: It Works

While writing this article, I ran the first real end-to-end test. Two strategy session clips — 17 minutes and 26 minutes — pasted as a YouTube playlist link into my own service.

The proxy downloaded 870 MB. The server spent about an hour normalizing and merging (it's a small 2 vCPU instance). Then it uploaded the result to YouTube and sent me an email: 🎬 Your merged video is ready!

Cost of this single merge: roughly $6 in residential proxy traffic.

I started this project to avoid paying $30–50 per merge to a human editor. At $6 per merge, the math still works. But I never imagined that the hardest part of building a video merging service would be convincing YouTube that my server is someone's laptop.


This is Part 5 of a series about building Merge Video. Previous: 6 Ways to Get YouTube Cookies — Only 1 Works.

Building in public, one utility at a time. Follow the journey: LinkedIn · GitHub
