Privacy Policies Are Legal Documents, Not Transparency Documents
Here's something most developers intuitively understand but rarely apply to their tool choices: a privacy policy is written by lawyers to protect the company, not to protect you.
When a SaaS tool says "we take your privacy seriously," what they actually mean is: "we've written a document that limits our legal liability while preserving maximum operational flexibility." That's not cynicism — it's how legal documents work.
For most software, this is fine. You accept the trade-off and move on. But video tools are different, and the reason comes down to what kind of data they process.
Video Isn't Just Data — It's Biometric Data
Under GDPR, video recordings that capture people's faces can qualify as biometric data once they're processed in a way that allows unique identification of a person. Article 9 classifies biometric data as a special category requiring elevated protection. This isn't a technicality; it reflects a genuine difference in sensitivity between, say, a text document and a video of someone's face.
Every product demo where the presenter has their camera on, every async standup, every onboarding video — these all contain biometric information. The tool processing and storing those recordings is handling one of the most sensitive categories of personal data that GDPR recognizes.
Now ask yourself: for data this sensitive, is a privacy policy really sufficient assurance?
What Privacy Policies Actually Permit
If you read the privacy policies of major video tools carefully (and most people don't), you'll find patterns that should give any security-conscious team pause:
Broad usage rights. Language that permits the provider to use your content for "improving services," "developing new features," or "analytics." These phrases can cover almost anything, including feeding recordings into machine learning pipelines.
Unilateral modification. Most policies reserve the right to change terms at any time, often with nothing more than a notice posted on a website. Your consent today may cover very different practices tomorrow.
Third-party data sharing. Subprocessors, analytics providers, infrastructure partners — your video data often passes through multiple entities, each with their own policies and jurisdictional exposure.
Ambiguous retention. Deleting a video from the UI doesn't necessarily mean it's deleted from all storage layers, backups, and processing pipelines. Policies are often deliberately vague on this point.
None of this is unusual or illegal. It's standard SaaS practice. But standard SaaS practice was designed for text data and form submissions, not for biometric recordings of your team.
Open Source Shifts the Model from Trust to Verification
This is where open source fundamentally changes the equation. Instead of reading legal language and hoping it maps to reality, you can read the actual code.
With an open-source video tool, a security team can audit exactly what matters:
- How are recordings stored? Encrypted at rest? What algorithm? You can check.
- What happens during processing? Any intermediate copies? Temporary files? You can trace the code path.
- Does deletion actually delete? Or does it soft-delete and retain data? Read the handler.
- Is there telemetry? What gets phoned home, if anything? Grep the codebase.
- Are recordings used for ML training? You'll see it in the code — or you won't, because it's not there.
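A first pass at several of these questions takes nothing more than grep. The commands below run against a tiny sample tree so they're self-contained; in practice you'd point them at the real checkout (the file contents and paths here are made up for illustration):

```shell
# Build a tiny sample tree so the commands below are self-contained;
# against a real tool, you'd run them on the cloned repository.
mkdir -p /tmp/audit-demo/src
printf 'fetch("https://api.example.com/videos");\n' > /tmp/audit-demo/src/app.ts

# List every outbound host the code mentions -- each URL should be one
# you expect (your own API, your own object storage), nothing else.
grep -rhoE 'https?://[a-zA-Z0-9./_-]+' /tmp/audit-demo/src | sort -u

# Search for common analytics/telemetry SDK names.
grep -rniE 'segment|mixpanel|amplitude|posthog' /tmp/audit-demo/src \
  || echo "no telemetry hits"
```

This won't catch everything (obfuscated endpoints, dynamic URLs), but it's a five-minute sanity check that no privacy policy can give you.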
This is Kerckhoffs's principle applied beyond cryptography. A system's security should depend on its design, not on the secrecy of its implementation. If a video tool can only protect your privacy by hiding how it works, that's not privacy — it's obscurity.
Self-Hosting Eliminates Jurisdictional Questions Entirely
Open source enables something even more powerful than code auditing: self-hosting.
When you run the video platform on your own infrastructure, the entire category of jurisdictional risk disappears. There's no third-party provider to receive a CLOUD Act request. There's no subprocessor chain to evaluate. There's no DPA to negotiate. Your data sits on servers you control, in a jurisdiction you choose, governed by laws you understand.
For teams in regulated industries — healthcare, finance, legal, government — this isn't a nice-to-have. It's often the only way to meet compliance requirements without an army of lawyers continuously evaluating vendor risk.
What This Looks Like in Practice
SendRec is built on this principle. The entire platform is AGPLv3 — every line of backend and frontend code is public and auditable.
Here's what you can verify yourself:
- No telemetry. No analytics beacons, no usage tracking sent to us. Zero phone-home behavior.
- No ML training. Recordings are stored and served. That's it. No processing pipelines.
- Real deletion. When you delete a video, the file is removed from object storage. Check the handler — it's straightforward.
- Minimal data collection. Email, name, and your recordings. No behavioral tracking, no session recording, no fingerprinting.
- EU infrastructure. The managed version runs on Hetzner in Germany. But you can run it anywhere — it's a docker-compose.yml and you're done.
You don't need to take our word for any of this. That's the whole point.
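To make the deletion point concrete, here's the observable difference between a soft delete and a hard delete, sketched with a local directory standing in for an object store. Everything here is illustrative (made-up paths, a made-up tombstone convention); the actual handler is in the repository:

```shell
# Toy illustration: soft delete vs hard delete, with a local directory
# standing in for object storage. All names here are hypothetical.
store=/tmp/store-demo
mkdir -p "$store"
echo "frame data" > "$store/video-123.webm"

# Soft delete: the UI stops showing the object, but the bytes remain.
touch "$store/video-123.webm.tombstone"
ls "$store"      # the recording is still on disk next to its tombstone

# Hard delete: the object is actually removed from storage.
rm -f "$store/video-123.webm" "$store/video-123.webm.tombstone"
ls -A "$store"   # nothing left
```

When auditing any tool, the question is which of these two behaviors its delete handler implements — and whether that holds across primary storage, backups, and processing caches.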
The Dev.to Angle
As developers, we have an unusual advantage here. We can actually read the source code. We can evaluate privacy claims against implementation. We can make informed decisions that non-technical teams can't.
If you're the person your team looks to for tooling decisions, consider applying the same rigor to your video tool that you apply to your dependencies. Read the code. Check the network requests. Understand the data flow.
And if the source isn't available for you to read, that itself tells you something.
Get Started
- Try SendRec free: app.sendrec.eu
- Read the code: github.com/sendrec/sendrec
- Self-host: Clone, docker-compose up, done.
Your team's video recordings deserve the same transparency standard you apply to the rest of your stack.