The deadline to protect your team’s data is March 31, 2026.
If you logged into Vercel this morning, you likely saw a high-polish popup titled "Enabling Agentic Infrastructure." While the marketing focuses on the "Self-Driving Cloud," the underlying change to data handling is something every Principal Engineer and CTO needs to audit immediately.
Vercel is moving from being a passive host to an active agent. This requires "context," and that context is your source code.
## The Fine Print: Default Opt-Ins
Vercel has updated its policies to allow for AI model training. According to the new notice, Vercel is now using project code and agent chats to train its models and, critically, sharing that data with third-party AI providers.
The Risk Profile:
- Hobby and Trial Pro plans: These appear to be OPTED-IN by default.
- Pro and Enterprise plans: These are theoretically opted out, but we are seeing inconsistent toggles across organization dashboards. Do not assume your tier protects you.
## Why This Is a Business Concern
While Vercel states that PII, environment variables, and API keys are redacted, "anonymized" code still poses significant risks:
- Intellectual Property Leakage: Even without secrets, your architectural patterns, proprietary algorithms, and internal logic are now being fed into a collective "brain" that serves your competitors.
- Compliance Failures: If your company operates under SOC2, HIPAA, or GDPR, "opting in" to share data with unnamed third-party AI trainers may directly violate your existing Data Processing Agreements (DPAs).
- The Ingestion Deadline: The cutoff is March 31, 2026. If you opt out after this date, future data is protected, but the data already ingested into the training set cannot be "un-learned."
## What "Agentic Infrastructure" Actually Does
Vercel is building a platform that can:
- Proactively investigate incidents by analyzing logs and source code in real time.
- Open PRs automatically to optimize cloud spend or fix performance regressions.
- Act as an AI Concierge that allows your team to "chat" with your entire codebase.
These are powerful features, but they come at the cost of data sovereignty. For many enterprise teams, the trade-off isn't worth it.
## The 60-Second Security Audit
If you manage a professional team, take these steps before the end of the day:
- Locate the Toggle: Navigate to Team Settings > Data Preferences.
- Verify the Setting: Look for "Improve models with my data."
- Toggle OFF: Unless you have explicit clearance from your legal or security team to contribute proprietary code to third-party AI models, switch this to off.
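If you manage several teams, clicking through every dashboard is error-prone. Below is a minimal sketch that enumerates the teams your token can access via Vercel's public REST API (`GET /v2/teams`) and prints the settings page to audit for each one. The settings URL path and the response field names (`teams`, `name`, `slug`) are assumptions to verify against your own account; this is an audit aid, not an official tool.

```python
import json
import os
import urllib.request


def settings_url(team_slug: str) -> str:
    # Assumed path to a team's settings page, where the data
    # preferences toggle lives; confirm in your own dashboard.
    return f"https://vercel.com/teams/{team_slug}/settings"


def list_teams(token: str) -> list[dict]:
    # GET /v2/teams lists the teams the given access token can see.
    req = urllib.request.Request(
        "https://api.vercel.com/v2/teams",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("teams", [])


if __name__ == "__main__":
    token = os.environ.get("VERCEL_TOKEN")
    if not token:
        print("Set VERCEL_TOKEN to enumerate your teams.")
    else:
        for team in list_teams(token):
            print(f"{team.get('name')}: audit {settings_url(team.get('slug'))}")
```

Run it with a read-only personal access token in `VERCEL_TOKEN`, then walk the printed list and confirm the toggle is off for every team, not just the one you look at most.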
## Final Perspective
We are entering an era where infrastructure providers are hungry for data to fuel their agents. Innovation is great, but as engineers, our first job is to protect the integrity and privacy of the systems we build.
Check your settings before March 31st.
