DEV Community

VIKAS

GitHub Copilot is Training on Your Code; Opt Out Before April 24 or Lose the Choice

⚠️ Deadline: April 24, 2026. After this date, opting out becomes significantly harder and some data collection may be retroactively applied. This takes 90 seconds to fix. Don't skip it.


I almost missed this.

Buried inside a GitHub policy update — the kind most developers scroll past while grabbing their morning chai — was a line that made me stop cold:

"Your code interactions with GitHub Copilot may be used to improve our AI models unless you explicitly opt out."

Not a notification. Not a banner. A policy page update.

And the opt-out window? April 24.

I'm a MERN developer. I build client apps, SaaS platforms, internal tools. Some of that code is under NDA. Some of it has business logic my clients paid good money to develop. The idea that it could silently feed a training pipeline — without my explicit yes — is not okay.

Here's everything you need to know.


What Actually Changed

GitHub updated its Copilot Terms of Service to expand the scope of what "telemetry" means. Previously, only code suggestions you accepted were potentially logged. The updated policy covers:

  • Code context sent to Copilot for suggestion generation
  • Your accept/reject patterns on suggestions
  • The surrounding file content used as context window input
  • Chat interactions in Copilot Chat

This applies to individual accounts, organization accounts, and — critically — private repositories unless you've explicitly disabled the relevant settings.

The training opt-out specifically refers to whether GitHub can use this telemetry to improve future Copilot models.
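
If some files should never become context-window input in the first place, a blunter complement to the opt-out is disabling Copilot per file type in your editor. Here's a minimal sketch for VS Code using its `github.copilot.enable` setting; the language IDs below are examples (some, like `dotenv`, depend on installed extensions), so adjust them to your stack:

```jsonc
// .vscode/settings.json (sketch): keep Copilot on in general, but off for
// file types where a leaked context window would hurt most.
{
  "github.copilot.enable": {
    "*": true,
    "dotenv": false,     // .env files: API keys, secrets
    "yaml": false,       // CI configs, infra manifests
    "plaintext": false   // scratch notes where credentials get pasted
  }
}
```

The trade-off is obvious: you lose suggestions in those files entirely. For NDA'd client work, that's usually the right trade.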


Why Developers Are Missing This

Three reasons:

1. Policy fatigue. We accept ToS updates constantly. GitHub sent this as a policy changelog notification, not a product alert. Most devs dismissed it.

2. You're opted in by default. GitHub set the default state to allow training, so you have to take action to stop it. This is a classic dark pattern, and it works.

3. The deadline isn't obvious. The April 24 date doesn't appear in the notification subject line. You have to read the full policy to find it.


What's Actually at Risk

Let's be specific. If you're a freelancer or studio (like I am at Zyklabs), your risk profile includes:

  • Client business logic — competitive IP, NDA violations
  • API keys in context — exposed if accidentally included in a prompt
  • Database schemas — architecture patterns, table structures
  • Auth flows — security-relevant implementation details
  • Unreleased feature code — product roadmap exposure

For solo devs building personal projects, the stakes are lower. But for anyone doing client work or building a product — this matters.
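
For the API-keys row specifically, the cheapest mitigation is making sure secrets never sit in files Copilot can read at all. Here's a minimal pre-commit-style sketch in plain shell; the two regexes cover only common AWS access key and GitHub PAT shapes, so treat it as a starting point alongside a dedicated scanner like gitleaks, not a replacement for one:

```shell
# check_secrets FILE...
# Returns non-zero if any file matches a common secret pattern.
# Patterns are illustrative, not exhaustive.
check_secrets() {
  found=0
  for f in "$@"; do
    # AKIA... = AWS access key ID, ghp_... = GitHub personal access token
    if grep -nE 'AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}' "$f"; then
      echo "possible secret in $f" >&2
      found=1
    fi
  done
  return $found
}
```

Wire it into `.git/hooks/pre-commit` over `git diff --cached --name-only` and the commit fails before a key ever lands in a file that could be sent as context.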


How to Opt Out (Individual Account)

This takes under 2 minutes.

Step 1: Go to your GitHub Settings

github.com → Your Avatar (top right) → Settings

Step 2: Navigate to Copilot settings

Settings → Copilot (left sidebar) → Features → Privacy 

Step 3: Find "Allow GitHub to use my code snippets for product improvements"

Set this to Disabled.

Step 4: Find "Allow GitHub to use my data for AI model training"

Set this to Disabled.

Step 5: Save changes.

Done. Your future interactions are excluded from training data.


How to Opt Out (Organization Admin)

If you manage an org — and your team is using Copilot Business or Enterprise — you need to do this at the org level too. Individual settings don't override org-level defaults.

Step 1: Go to your Organization Settings

github.com → Your Org → Settings → Copilot

Step 2: Under "Policies", find the AI training permission toggles

Step 3: Set both training-related options to No Policy (gives members control) or Disabled (forces opt-out for everyone)

Step 4: Communicate this to your team. Individual devs may still have their personal settings active.


What About Code Already Sent?

Here's the uncomfortable part: GitHub's policy doesn't clearly specify whether opting out removes previously collected telemetry from training pipelines. The language around data retention is vague.

What opting out does guarantee: no new data from your account will be used after the opt-out is processed.

For retroactive deletion, you'd need to submit a formal data deletion request under GDPR (if you're in a qualifying region) or GitHub's privacy request process. That's a separate step.


Should You Stop Using Copilot?

No. That's not what this is about.

Copilot is still a genuinely useful tool. I use it. Most of my team uses it. The issue isn't the product; it's that training is enabled by default, with no clear user-facing alert.

The fix is simple: opt out of training, keep using the tool.

What you're opting out of is your code being used as training data for future model versions. Copilot still works exactly the same way after you do this. Your suggestions aren't affected. Your Copilot Chat still works.


The Bigger Pattern to Watch

This isn't unique to GitHub. Over the past 18 months, we've seen similar moves from:

  • JetBrains AI Assistant — opt-out for telemetry added post-launch
  • VS Code + Microsoft Copilot — training data scope quietly expanded
  • npm (the axios incident) — supply chain trust is eroding across the board

The pattern: launch a developer tool with generous defaults, quietly expand data collection scope in a policy update, set opt-out as the non-default path.

If you're a developer in 2026, reviewing the privacy/training settings of every AI tool in your workflow is now maintenance work. Add it to your quarterly checklist.
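
To make that checklist concrete, here's a trivial reminder script you could drop into a repo. Only the GitHub URL is a real settings page; the other entries are pointers from memory and the exact menu paths may differ by product version:

```shell
# Print the AI privacy/training surfaces worth re-checking each quarter.
# github.com/settings/copilot is the real Copilot settings page; the
# other entries are approximate pointers, not deep links.
ai_privacy_checklist() {
  cat <<'EOF'
GitHub Copilot : https://github.com/settings/copilot
JetBrains AI   : IDE Settings -> Tools -> AI Assistant (data sharing)
VS Code        : review telemetry.telemetryLevel in settings.json
EOF
}
ai_privacy_checklist
```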


TL;DR — Do This Right Now

  1. Go to github.com → Settings → Copilot → Features → Privacy
  2. Disable "Use my code for product improvements"
  3. Disable "Use my Copilot interactions for AI training"
  4. If you're an org admin: do the same at org level
  5. Deadline: April 24, 2026

90 seconds. Go.


Building web apps and SaaS products at Zyklabs. I write about real security issues, MERN development, and things that actually affect how we build.

Found this useful? Drop a comment — I'm especially curious if anyone has tried the formal data deletion route and what that process looked like.
