Every CTO we talk to asks the same question within the first five minutes.
"If your product uses AI, where does our data go?"
Fair question. It deserves a precise answer. Not a marketing slide. Not a vague "we take security seriously." A technical, verifiable, contractually backed answer that your security team can audit.
So here it is.
How Cloud Vista V15 Uses AI
Cloud Vista V15 is powered by Astra AI — five autonomous agents that handle anomaly detection, root cause analysis, predictive analytics, and autonomous remediation across your infrastructure.
These agents call large language models from Anthropic (Claude) and OpenAI (GPT) through their enterprise API tiers. This distinction matters enormously, because the enterprise API is a fundamentally different product from the consumer chatbot — with different data handling, different contractual terms, and different retention policies.
Most customer concerns originate from conflating the two. They are not the same.
This Applies to Everything We Build
Cloud Vista V15 is our flagship, but NetGain is actively developing custom AI solutions for customers across industries — automation platforms, intelligent workflows, predictive systems, and purpose-built agents tailored to specific business problems.
Every solution we build operates under the same security architecture described here. Same enterprise API tiers. Same contractual frameworks. Same deployment options. Whether it's Cloud Vista monitoring your infrastructure or a custom AI agent automating a business workflow, your data is handled under the same policies.
The Consumer Product vs Enterprise API — Why the Confusion
When people hear "ChatGPT" or "Claude," they picture the consumer chatbot. Consumer-tier products may use conversations to improve models under their default terms.
That is not what we use.
NetGain products call the enterprise API tier. Under enterprise API agreements:
- Customer data is not used for model training
- Retention is limited to short operational windows (and can be reduced to zero)
- Data Processing Agreements are available for legal and compliance review
The distinction is not a marketing claim. It is a contractual commitment, backed by independent audit certifications.
Anthropic (Claude) — Enterprise API Data Handling
- No model training on customer data. Under commercial API terms, inputs and outputs are not used to train or improve Claude models.
- 7-day default retention. Reduced from 30 days as of September 2025. Used for trust and safety screening only.
- Zero Data Retention (ZDR) available. Inputs and outputs not stored beyond real-time abuse screening.
- SOC 2 Type II audited. Independently examined for security, availability, and confidentiality.
- ISO 27001 + ISO 42001 certified. ISO 42001 is the AI management system standard.
- HIPAA BAA available. Native API is eligible under BAA, even without ZDR.
- NIST 800-171r3 attestation available under NDA.
Reference: Anthropic Trust Center | Anthropic Privacy Center
OpenAI (GPT) — Enterprise API Data Handling
- No model training on customer data. Data from the API platform, ChatGPT Enterprise, Business, and Edu is not used for training; these products are opted out by default.
- Configurable data retention, including zero retention.
- SOC 2 Type II audited. Security, Availability, Confidentiality, and Privacy.
- ISO 27001 + ISO 27701 certified.
- HIPAA BAA available.
- DPA available supporting GDPR, CCPA, HIPAA, and FERPA.
Reference: OpenAI Enterprise Privacy | OpenAI Security
Bring Your Own Subscription
NetGain does not require you to use our API keys. If your organisation already holds an Anthropic or OpenAI subscription, our solutions connect directly to your account.
- Your billing, your controls. Manage usage limits and policies in your own dashboard.
- Your DPA, your legal relationship. Direct contractual relationship with the AI provider.
- Your audit trail. Full visibility into API usage in your own account.
- No intermediary. Data flows directly from your instance to your subscription. NetGain does not proxy, intercept, or store this traffic.
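As an illustrative sketch (the environment variable name is an assumption, not actual product configuration), a bring-your-own-key connection amounts to reading the key from your own environment and sending requests straight to the provider's endpoint:

```python
import os
import urllib.request

# Hypothetical sketch: the application reads the customer's own API key
# from their environment. NetGain never sees or stores the key.
def build_request(prompt_json: str) -> urllib.request.Request:
    api_key = os.environ["CUSTOMER_ANTHROPIC_API_KEY"]  # your key, your account
    return urllib.request.Request(
        url="https://api.anthropic.com/v1/messages",    # direct to the provider, no proxy
        data=prompt_json.encode("utf-8"),
        headers={
            "x-api-key": api_key,            # billed to your own subscription
            "content-type": "application/json",
        },
        method="POST",
    )
```

Because the request is built with your key and the provider's own endpoint, usage appears in your dashboard and falls under your DPA, with nothing routed through NetGain infrastructure.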
The Actual Data Flow — Step by Step
- The system detects an event — anomaly, metric spike, log pattern, topology change.
- It constructs a prompt containing relevant operational data — metric names, values, timestamps, log excerpts. Scoped to operational telemetry.
- Transmitted via encrypted HTTPS (TLS 1.2+/1.3) to the API endpoint.
- The LLM processes and returns a response.
- The response is consumed by the application. No persistent copy stored beyond the provider's stated retention window.
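The prompt-construction step (step 2) can be sketched roughly as follows; the field names and allowlist are hypothetical, not the actual Cloud Vista schema:

```python
import json

# Hypothetical field names; only allowlisted operational telemetry
# ever reaches the prompt payload.
ALLOWED_FIELDS = {"metric", "value", "timestamp", "log_excerpt"}

def build_prompt(event: dict) -> str:
    # Anything outside the operational allowlist is dropped before transmission.
    payload = {k: v for k, v in event.items() if k in ALLOWED_FIELDS}
    return json.dumps({
        "system": "You are an infrastructure anomaly analyst.",
        "user": "Explain this anomaly: " + json.dumps(payload, sort_keys=True),
    })

event = {
    "metric": "cpu_usage",
    "value": 97.3,
    "timestamp": "2025-01-01T02:00:00Z",
    "customer_email": "ops@example.com",  # dropped before transmission
}
prompt = build_prompt(event)
```

The point of the allowlist approach is that the prompt is scoped by construction: fields outside the operational schema never exist in the outbound request.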
Under enterprise API terms, this data is governed by agreements that prohibit model-training use and restrict retention to tightly controlled operational purposes.
Guardrails — How We Keep AI Controlled and Auditable
Model-Level Safety (Built Into Claude and GPT)
Anthropic's Constitutional AI: Safety constraints embedded during training. Constitutional Classifiers reduced jailbreak success rates from 86% to 4.4% in published testing. Hardcoded behavioural boundaries designed to resist override.
OpenAI's Safety Systems: Built-in content filtering, refusal mechanisms, configurable safety tiers for enterprise customers.
Application-Level Guardrails (Built by NetGain)
Input Controls:
- Prompt sanitisation — system instructions architecturally separated from data payloads to reduce prompt injection risk (LLM01, the top entry in the OWASP Top 10 for LLM Applications)
- PII detection and filtering — sensitive data patterns flagged or masked before reaching the AI
- Scope constraints — each Astra AI agent confined to its specific domain
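A minimal sketch of the PII-masking idea, assuming simple regex patterns (production filters cover far more data types than these two):

```python
import re

# Illustrative patterns only; real filters handle many more PII categories.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask_pii(text: str) -> str:
    # Replace each detected pattern with a labelled redaction marker
    # before the text is allowed into a prompt.
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Masking (rather than silently dropping) keeps the surrounding log context intact for the model while keeping the sensitive values out of the request.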
Output Controls:
- Ground-truth validation — responses compared against actual monitoring data
- Confidence scoring — low-confidence outputs surfaced with caveats
- Output scanning — checked for sensitive content before presentation
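A rough sketch of ground-truth validation combined with confidence flagging: numeric claims in a response are checked against the monitoring value that triggered the analysis. The tolerance and caveat wording here are illustrative assumptions:

```python
import re

def validate_response(response: str, actual: float, tolerance: float = 0.05):
    # Extract numeric claims from the model's answer and check whether
    # any of them match the observed telemetry value within tolerance.
    numbers = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", response)]
    grounded = any(abs(n - actual) <= tolerance * actual for n in numbers)
    if not grounded:
        # Surface low-confidence output with an explicit caveat.
        response = "[LOW CONFIDENCE: figures not verified against telemetry] " + response
    return response, grounded
```
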
Action Controls:
- Configurable approval gates — you define what AI can act on autonomously vs. what requires human sign-off
- Action scope limits — agents cannot improvise actions outside configured operations
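A toy sketch of an approval gate, with hypothetical action names; the allowlist stands in for customer-controlled configuration:

```python
# Customer-configured set of actions the agent may take autonomously.
AUTONOMOUS_ALLOWLIST = {"restart_service", "clear_cache"}

# Anything not on the allowlist queues here for human sign-off.
pending_approvals: list[dict] = []

def dispatch(action: str, target: str) -> str:
    if action in AUTONOMOUS_ALLOWLIST:
        return f"executed {action} on {target}"
    pending_approvals.append({"action": action, "target": target})
    return f"queued {action} on {target} for human approval"
```

The inversion matters: rather than blocklisting dangerous actions, the agent can only improvise within the set you explicitly approved.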
Operational Controls:
- Rate limiting — prevent runaway costs or API overuse
- Full audit logging — every interaction logged for compliance and forensics
- Kill switch — disable AI agents individually or collectively, immediate effect
- Role-based access — integrates with your existing access control framework
- Behaviour monitoring — drift detection with automatic alerting
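The kill switch and rate limiting can be sketched together; the class name, window sizes, and interface are illustrative, not the actual product API:

```python
import time

class AgentControls:
    """Toy sketch: global kill switch plus a sliding-window rate limiter."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.enabled = True                  # kill switch state
        self.max_calls = max_calls
        self.window = window_seconds
        self.call_times: list[float] = []

    def kill(self) -> None:
        self.enabled = False                 # takes effect on the next call check

    def allow_call(self) -> bool:
        if not self.enabled:
            return False                     # kill switch overrides everything
        now = time.monotonic()
        # Keep only calls that are still inside the sliding window.
        self.call_times = [t for t in self.call_times if now - t < self.window]
        if len(self.call_times) >= self.max_calls:
            return False                     # rate limit hit
        self.call_times.append(now)
        return True
```
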
The Guardrails Stack
| Layer | What It Does | Who Controls It |
|---|---|---|
| Model Safety | Constitutional AI, content filtering, jailbreak resistance | Anthropic / OpenAI |
| Input Controls | Prompt construction, PII filtering, injection resistance | NetGain |
| Output Controls | Validation, confidence scoring, content scanning | NetGain |
| Action Gates | Human approval for high-impact actions | You (configurable) |
| Operational Controls | Audit logs, kill switch, RBAC, drift monitoring | You + NetGain |
Defence in depth. Model safety from the providers, application guardrails from NetGain, and approval and operational controls in your hands: three independent layers, each catching what the others may not.
Cloud-in-Your-Tenant — Enhanced Isolation
Claude via AWS Bedrock: Data processed in your AWS account. FedRAMP High in GovCloud. DoD IL4/IL5 approved. Full AWS compliance suite.
Claude via Google Cloud Vertex AI: Data in Google Cloud perimeter. FedRAMP High. 10+ EU regions with regional endpoints.
GPT via Azure OpenAI: Data in your Azure tenant. ISO 42001 certified. 60+ regions. 100+ Azure certifications.
The Private LLM Option — Full Air-Gap
We support private deployment using open-source models on your own infrastructure. Your data does not leave your data centre.
We also believe in transparency about the real costs.
What It Actually Costs
Hardware: an 8-GPU server (DGX H100) costs US$300,000–$500,000 per node, and production requires two or more nodes. That is $600,000–$1,000,000 in GPU hardware alone.
Infrastructure: Power upgrades ($50K–$200K/rack), cooling ($50K–$200K), InfiniBand networking ($20K–$100K), colocation ($5K–$15K/month).
The cost most teams underestimate — ongoing management:
- MLOps staff (1.5–2 FTE): US$260,000–$440,000 per year. These roles take 3–6 months to fill.
- Model lifecycle: Each major open-source model update = weeks of evaluation, testing, deployment work.
- Security hardening: You own the full stack — CUDA, containers, OS, networking. Every component needs patching. Vulnerabilities at 2am are your problem.
- Hardware failures: GPUs at sustained high power fail. Memory errors, thermal throttling, driver crashes.
3-Year Total Cost of Ownership
| | Private LLM | Enterprise API |
|---|---|---|
| Year 1 | $1,200,000 – $2,380,000 | $36,000 – $96,000 |
| Years 2–3 (combined) | $1,050,000 – $2,020,000 | $72,000 – $192,000 |
| 3-Year Total | $2,250,000 – $4,400,000 | $108,000 – $288,000 |
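A quick arithmetic check on the table: each 3-year total is the Year 1 figure plus the combined Year 2–3 figure, at both ends of the range:

```python
# Ranges from the TCO table above, in USD (low, high).
private_llm = {"year1": (1_200_000, 2_380_000), "years_2_3": (1_050_000, 2_020_000)}
enterprise_api = {"year1": (36_000, 96_000), "years_2_3": (72_000, 192_000)}

def three_year_total(costs: dict) -> tuple:
    # Sum the low ends and the high ends independently.
    low = costs["year1"][0] + costs["years_2_3"][0]
    high = costs["year1"][1] + costs["years_2_3"][1]
    return low, high
```
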
The enterprise API tier delivers the security, contractual protection, and compliance posture most enterprises actually need — without the cost and complexity of private infrastructure.
If your regulatory constraints require on-premise, we can help implement it and connect you with managed hosting partners. But we recommend evaluating Bedrock, Vertex AI, or Azure OpenAI first.
The Six Tiers of Deployment
| Tier | Best For |
|---|---|
| Enterprise API | Most enterprises. Fastest, strongest capability. |
| Your Own Subscription | Regulated industries needing direct legal control. |
| AWS Bedrock | Government, defence, financial services on AWS. |
| Google Vertex AI | GCP orgs, EU data residency. |
| Azure OpenAI | Azure orgs, broadest regional coverage. |
| Private LLM | Air-gapped, classified, sovereignty-mandated. |
All tiers supported. Same product. Same agents. Same interface.
NetGain's Certifications
NetGain Systems is ISO/IEC 27001 certified covering development, deployment, and operation of Cloud Vista and all supporting infrastructure.
What We Do Not Do
- We do not fine-tune public models on customer data.
- We do not store prompts or AI responses beyond the operational session.
- We do not share data between customers.
- We do not use consumer-tier AI products. Enterprise API only.
- We do not permit AI agents to execute destructive actions without customer-controlled approval gates.
The Summary for Your Security Team
Your data is processed through enterprise API tiers contractually restricted from model training use, independently audited under SOC 2 Type II and ISO 27001, with retention limited to 7 days or zero by agreement. Every AI interaction passes through layered guardrails — model safety, application controls, and customer-configurable approval gates. For additional isolation: AWS Bedrock, Google Vertex AI, or Azure OpenAI within your own tenant, or fully private on-premise. NetGain is ISO 27001 certified.
We built Cloud Vista V15 — and every AI solution we deliver — to provide full AI capability within a security framework designed to withstand enterprise scrutiny.
Because in enterprise IT, trust is not a feature you add. It is the foundation you build on.
For Data Processing Agreements, security architecture documentation, or deployment options: sales@netgain-systems.com
NetGain Systems — ISO 27001 Certified | AI-Powered Observability | Est. 2002