Atlassian Enables Default Data Collection to Train AI
Meta Description: Atlassian enables default data collection to train AI models—here's what it means for your Jira and Confluence data, your privacy rights, and what to do right now.
TL;DR: Atlassian quietly updated its privacy and data practices to enable data collection from customer accounts by default, using that data to train its AI systems (including Atlassian Intelligence). Users and admins were automatically opted in. If you're a Jira, Confluence, or Trello user—especially in a business or enterprise context—you need to know what data is being collected, whether you can opt out, and what the legal implications might be for your organization.
Key Takeaways
- Atlassian enables default data collection to train AI across its product suite, including Jira, Confluence, and Trello
- Customers are opted in by default, meaning inaction = consent under Atlassian's current framework
- Admins can opt out, but the process isn't prominently advertised
- Data used may include project names, issue content, comments, and page content from Confluence
- This raises real concerns for organizations in regulated industries (healthcare, finance, legal)
- Atlassian's move mirrors similar decisions by other SaaS giants, but the lack of proactive communication has frustrated enterprise customers
- You should audit your Atlassian settings today if data privacy is a concern
What Happened: Atlassian Enables Default Data Collection to Train AI
In a move that flew under the radar for many IT administrators, Atlassian updated its data use practices to allow the company to collect and use customer data to train its artificial intelligence models. The change, which affects the company's flagship products—Jira Software, Jira Service Management, Confluence, and Trello—means that unless you actively opt out, your organization's data is being used to improve Atlassian Intelligence and other AI-powered features.
This isn't a hypothetical future risk. Atlassian Intelligence, the company's AI layer built on top of large language models, is already embedded across the product suite. It powers features like smart summaries in Confluence, AI-assisted ticket writing in Jira, and virtual service agents in Jira Service Management. The data collection policy directly feeds the training pipelines that make these features smarter over time.
The announcement was made through a policy update rather than a prominent product notification, which is why many organizations only found out through third-party reporting or community forums—not from Atlassian directly.
[INTERNAL_LINK: Atlassian Intelligence features overview]
What Data Is Atlassian Actually Collecting?
This is the question every IT manager and CISO should be asking. Based on Atlassian's updated privacy documentation, the data collection can include:
Content Data
- Jira issues: Titles, descriptions, comments, and custom field content
- Confluence pages: Page titles, body content, inline comments, and attachments metadata
- Project and space names: Organizational structure information
- User interaction data: How users interact with AI features, query inputs, and feedback signals
Metadata and Usage Data
- Feature usage patterns
- AI query logs (what users ask Atlassian Intelligence)
- Error rates and response quality signals
What's (Reportedly) Excluded
Atlassian has stated that certain sensitive data categories may be handled differently, particularly for customers on Enterprise plans or those who have explicitly configured data residency. However, the specifics depend heavily on your contract tier and geographic location.
⚠️ Important caveat: The exact scope of data collection can vary based on your Atlassian plan, your data residency settings, and any Data Processing Addenda (DPA) you have in place. Always consult your contract and Atlassian's current privacy policy directly.
Why This Matters: The Opt-In vs. Opt-Out Problem
The core controversy isn't that Atlassian is training AI on customer data—that's increasingly standard across the SaaS industry. The problem is the opt-out default: collection is switched on automatically, and the burden of stopping it falls entirely on the customer.
In privacy law and ethical data practice, there's a significant difference between:
| Approach | What It Means | User Burden |
|---|---|---|
| Opt-in (explicit consent) | User actively agrees before data is collected | Low — you're protected by default |
| Opt-out (default enabled) | Data collection starts automatically; user must take action to stop it | High — you must know to act |
Atlassian chose the opt-out model. This means thousands of organizations—including those handling sensitive client data, proprietary business information, or regulated health and financial data—were automatically enrolled without a clear, prominent notification.
This approach is legally murky in several jurisdictions:
- GDPR (EU): Requires a lawful basis for processing; "legitimate interests" arguments are under increasing scrutiny from EU data protection authorities
- CCPA/CPRA (California): Consumers have the right to opt out of certain data uses and to limit the use of sensitive personal information; opt-out mechanisms must be clear and accessible
- HIPAA (US Healthcare): Any data that could be considered PHI-adjacent has strict handling requirements that may conflict with AI training use cases
[INTERNAL_LINK: GDPR compliance for SaaS tools]
How to Opt Out of Atlassian's AI Data Collection
If you're an organization admin, here's what you need to do. Note that this process may evolve as Atlassian updates its admin console—verify against current documentation.
For Jira and Confluence (Cloud)
- Log in to admin.atlassian.com with an Organization Admin account
- Navigate to Settings > Data Management (or Privacy settings, depending on your console version)
- Look for "AI and machine learning" or "Atlassian Intelligence" data settings
- Toggle off data use for model training purposes
- Document this action with a timestamp for your compliance records
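For the documentation step, a lightweight, auditable approach is to generate a structured record at the moment you flip the setting. The sketch below is a minimal example; the `setting` name and field layout are illustrative assumptions, not an Atlassian schema.

```python
import json
from datetime import datetime, timezone

def opt_out_record(org_id: str, actor: str, setting: str, new_value: str) -> dict:
    """Build a timestamped audit record for a privacy-setting change.

    Field names are illustrative; adapt them to your compliance log format.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "org_id": org_id,
        "changed_by": actor,
        "setting": setting,
        "new_value": new_value,
        "reason": "Opt out of AI model-training data collection",
    }

# Hypothetical org ID and setting name, for illustration only
record = opt_out_record(
    "example-org-123",
    "admin@example.com",
    "atlassian_intelligence_training_data",
    "disabled",
)
print(json.dumps(record, indent=2))
```

Store the output (or a screenshot of the admin console alongside it) wherever your other compliance evidence lives, so you can show auditors exactly when the change was made and by whom.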
For Enterprise Customers
If you're on an Atlassian Enterprise plan, you may have additional controls and should:
- Contact your Atlassian account manager directly
- Review your Data Processing Addendum (DPA) for AI-specific clauses
- Request a written confirmation of your opt-out status
For Trello Users
Trello operates under slightly different settings. Check your Trello Workspace settings under Privacy and Data for AI training opt-out options.
💡 Pro tip: After opting out, test by re-checking the settings 24-48 hours later. Some users have reported settings reverting after product updates. Add this to your quarterly security audit checklist.
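If you want to make that re-check systematic rather than relying on memory, the drift check itself is trivial to script. The sketch below assumes you record the expected state at opt-out time and later obtain the current state by manually re-checking the console or exporting your settings; the setting key is a hypothetical name, since Atlassian doesn't publish a guaranteed programmatic interface for these toggles.

```python
def detect_drift(expected: dict, current: dict) -> list:
    """Return the setting keys whose current value differs from the expected value."""
    return [key for key, value in expected.items() if current.get(key) != value]

# Expected state, recorded when you opted out (hypothetical setting name)
expected = {"ai_training_data_collection": "off"}

# Current state, filled in from a manual re-check of the admin console;
# here we simulate a setting that reverted after a product update.
current = {"ai_training_data_collection": "on"}

drifted = detect_drift(expected, current)
if drifted:
    print(f"Settings reverted, re-apply opt-out: {drifted}")
```

Even as a manual quarterly ritual, comparing against a written-down expected state catches silent reversions that a casual glance at the console would miss.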
How Atlassian Compares to Other SaaS Giants on AI Data Use
Atlassian isn't alone in this practice. Let's put this in context:
| Company | AI Data Collection Default | Opt-Out Available | Enterprise Controls |
|---|---|---|---|
| Atlassian | Opt-out (default on) | Yes, via admin console | Yes, with DPA |
| Microsoft 365 | Opt-out (Copilot data) | Yes, via admin center | Yes, extensive |
| Google Workspace | Opt-out | Yes | Yes |
| Slack (Salesforce) | Initially opt-out; updated after backlash | Yes | Yes |
| Zoom | Initially opt-out; reversed after backlash | Yes | Yes |
The pattern is clear: enterprise SaaS companies are defaulting to data collection for AI training, then walking back or clarifying policies after public pressure. Atlassian is following this playbook, for better or worse.
What's notable is that Slack and Zoom both faced significant backlash and were forced to clarify or reverse their policies. Atlassian may face similar pressure as enterprise customers become more aware of the change.
[INTERNAL_LINK: Enterprise AI governance best practices]
The Case For (and Against) Atlassian's Approach
To be fair, this isn't a black-and-white situation. Here's an honest look at both sides.
Arguments in Atlassian's Favor
Better AI = Better Products
Training on real-world usage data genuinely improves AI quality. Atlassian Intelligence features like smart summaries and automated ticket categorization become more accurate with more diverse training data. If you use these features, you're a beneficiary of this data flywheel.
Industry Standard Practice
Nearly every AI-powered SaaS product operates this way. If you use any AI-enhanced tool—from Gmail's Smart Compose to GitHub Copilot—your interactions are influencing model behavior in some form.
Enterprise Safeguards Exist
For customers with appropriate contracts and Enterprise-tier plans, Atlassian does provide meaningful controls. The issue is that smaller customers and those unaware of the change are disproportionately affected.
Arguments Against
Transparency Failure
A policy update buried in documentation is not adequate notice for a material change in data use. Organizations deserve proactive, clear communication—especially when the change involves AI training.
Sensitive Data Risk
Jira and Confluence are home to some of the most sensitive business data in existence: product roadmaps, security vulnerability tickets, HR processes, legal matters, and client project details. The risk profile of this data being used in AI training is qualitatively different from, say, email metadata.
Regulatory Exposure
For organizations in regulated industries, this default-enabled collection may create compliance gaps that require immediate remediation—and potentially a retroactive review of what data may already have been processed.
Practical Recommendations for Organizations
Here's what you should do, categorized by urgency:
Immediate Actions (This Week)
- [ ] Audit your Atlassian admin settings and opt out of AI training data collection if you handle sensitive data
- [ ] Notify your legal and compliance team of the policy change
- [ ] Review your DPA with Atlassian if you're an Enterprise customer
- [ ] Document your opt-out for audit purposes
Short-Term Actions (This Month)
- [ ] Update your data inventory to reflect Atlassian's AI data use as a processing activity
- [ ] Review what data lives in Confluence and Jira — this is a good opportunity to clean up sensitive information that doesn't need to be there
- [ ] Communicate with your team about what Atlassian Intelligence features are enabled and how they work
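When updating your data inventory, it helps to capture Atlassian's AI training as a discrete processing activity. The entry below is purely illustrative: the field names loosely follow common GDPR Article 30 record-of-processing templates, and every value is a placeholder to adapt to your own schema.

```python
# Illustrative record-of-processing-activities (RoPA) entry for Atlassian's
# AI training data use. Field names and values are assumptions based on
# common GDPR Article 30 templates, not an official format.
ropa_entry = {
    "processing_activity": "AI model training on workspace content",
    "processor": "Atlassian",
    "systems": ["Jira", "Confluence", "Trello"],
    "data_categories": ["issue content", "comments", "page content", "usage data"],
    "legal_basis": "legitimate interests (verify with your own LIA)",
    "opt_out_status": "opted_out",
    "last_reviewed": "2026-04-01",
}

# Basic completeness check before filing the entry in your inventory
required = ("processing_activity", "processor", "legal_basis", "opt_out_status")
missing = [field for field in required if field not in ropa_entry]
print("Missing fields:" if missing else "RoPA entry complete:",
      missing or ropa_entry["processing_activity"])
```

Keeping the entry machine-readable (rather than buried in a spreadsheet cell) makes the quarterly audit step later in this checklist much faster.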
Ongoing Actions
- [ ] Add Atlassian privacy settings to your quarterly security audit checklist
- [ ] Monitor Atlassian's policy updates — subscribe to their trust and security blog
- [ ] Evaluate alternatives if Atlassian's approach doesn't align with your data governance requirements
Alternative Tools Worth Considering
If Atlassian's data practices are a dealbreaker for your organization, there are alternatives worth evaluating—with honest assessments:
For Project Management:
- Linear — Sleek, developer-focused project management. More transparent AI roadmap, though AI features are still evolving. Better for smaller engineering teams.
- Notion — Excellent Confluence alternative with strong AI features. Has its own AI data use policies you should review separately.
For Self-Hosted Control:
- Jira Data Center — Atlassian's own on-premise offering. You control the data. AI features are limited compared to cloud, but your data stays on your infrastructure.
- GitLab — Strong alternative for software development teams wanting integrated issue tracking with self-hosting options.
Honest assessment: Switching away from Atlassian is a significant undertaking for most organizations. If you have years of institutional knowledge in Confluence and thousands of Jira tickets, migration costs are real. For most teams, opting out and implementing stronger data hygiene practices is the more pragmatic path.
What We Expect Atlassian to Do Next
Based on the pattern we've seen from other SaaS companies, and given the growing regulatory pressure around AI data use, here's what we anticipate:
- Clearer in-product notifications about AI data collection (likely after continued pressure)
- More granular controls — the ability to exclude specific projects or spaces from AI training data
- Expanded transparency reports about what data is used and how
- Potential policy reversals for certain customer segments if enterprise backlash intensifies
Atlassian has a strong track record of responding to customer feedback, so continued community pressure through the Atlassian Community forums is likely to be effective.
Frequently Asked Questions
Q1: Does Atlassian's default AI data collection affect on-premises (Data Center) customers?
No. Atlassian's AI training data collection applies to Cloud products only. If you're running Jira Data Center or Confluence Data Center on your own infrastructure, your data does not flow to Atlassian's AI training pipelines. This is one of the key reasons some enterprises continue to prefer self-hosted deployments despite the higher operational overhead.
Q2: If I opt out now, does that remove data Atlassian already collected?
This is a critical question and the honest answer is: it depends, and Atlassian's documentation isn't fully clear on this point. Opting out should stop future data collection for training purposes, but data already used in training runs may be incorporated into model weights in ways that can't be easily "removed." You should contact Atlassian directly and submit a formal data deletion request under GDPR (if applicable) to understand your specific situation.
Q3: Does Atlassian share my data with third-party AI providers like OpenAI?
Atlassian Intelligence is built on a combination of Atlassian's own models and third-party LLMs from providers such as Cohere, and possibly others. Atlassian's privacy documentation addresses data handling with sub-processors, but you should review the current sub-processor list at atlassian.com/trust for the most up-to-date information. Enterprise contracts may include additional sub-processor restrictions.
Q4: Is Atlassian's data collection practice legal?
In most jurisdictions, yes—but it's legally complex. Atlassian likely relies on "legitimate interests" as the legal basis under GDPR, which is permissible but increasingly scrutinized by EU data protection authorities. Organizations subject to GDPR should conduct their own legitimate interests assessment (LIA) and may want to opt out as a precautionary measure. HIPAA-covered entities should treat this as a potential compliance issue requiring immediate review.
Q5: Will opting out affect my access to Atlassian Intelligence features?
According to Atlassian's documentation, opting out of AI training data collection should not disable your access to Atlassian Intelligence features. You can still use AI-powered summaries, ticket suggestions, and other features—you're simply choosing not to have your data used to train future model versions. However, verify this in your specific plan's documentation, as feature availability can vary.
Final Thoughts and CTA
The fact that Atlassian enables default data collection to train AI isn't inherently scandalous—it's the direction the entire industry is moving. What matters is whether organizations are aware of it, have the tools to control it, and can make informed decisions about their data.
The bottom line: don't let inaction be your data policy.
→ Take action today: Log into admin.atlassian.com, review your AI and data settings, and make a deliberate choice—opt in or opt out—based on your organization's actual risk profile and regulatory requirements.
If you found this article helpful, consider sharing it with your IT team or compliance officer. And if you're navigating broader questions about AI governance in your SaaS stack, [INTERNAL_LINK: enterprise AI governance guide] is a good next read.
Last updated: April 2026. Atlassian's policies are subject to change. Always verify current settings and documentation at atlassian.com/trust.