TL;DR
I built LGPD Sentinel AI — a 100% open-source tool that runs automated LGPD compliance audits using a local AI model (Ollama + llama3.1). Zero data sent to the cloud. Zero PII exposure.
GitHub: https://github.com/ldsjunior-ui/lgpd-sentinel-ai
The Problem
Brazil's LGPD (Lei Geral de Proteção de Dados) has been fully enforceable since August 2021, with fines of up to 2% of a company's Brazilian revenue, capped at R$50 million per infraction. Yet most small and mid-size Brazilian companies still handle compliance manually: spreadsheets, legal consultants, quarterly reviews.
The problems with that approach:
- Expensive: DPO consultants charge R$3,000–15,000/month
- Slow: Manual audits take weeks
- Risky: You're sending your sensitive data descriptions to third-party cloud AI tools
I wanted to fix all three.
What I Built
LGPD Sentinel AI automates the entire compliance audit pipeline:
Input: describe your data flows / upload documents
→ AI analysis (local, private)
→ Risk classification (high/medium/low)
→ DPIA report
→ DSR management
→ Compliance dashboard
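The pipeline above can be sketched as a chain of small steps. A minimal sketch in Python; the types and function names here are illustrative, not the project's actual API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    field: str
    category: str   # "sensitive" or "non-sensitive" (LGPD Art. 5)
    risk: str       # "high" / "medium" / "low"

def classify_risk(category: str, has_legal_basis: bool) -> str:
    # Sensitive data with no documented legal basis is the worst case
    if category == "sensitive" and not has_legal_basis:
        return "high"
    if not has_legal_basis:
        return "medium"
    return "low"

def audit(fields: list[dict]) -> list[Finding]:
    # Input: one dict per data field from the user's flow description
    return [
        Finding(f["name"], f["category"],
                classify_risk(f["category"], f["legal_basis"]))
        for f in fields
    ]
```

In the real tool the classification step is driven by the local LLM rather than hard-coded rules; the sketch only shows the shape of the data moving through the pipeline.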
Stack
- Backend: FastAPI (Python)
- Frontend: Streamlit
- AI: Ollama + llama3.1 (runs 100% locally — no API keys, no cloud)
- DB: SQLite (zero config) or Supabase (optional)
- License: Apache 2.0
Key Features
📊 Data Mapping
Automatically identifies and classifies personal data in your systems. The AI tags each field by LGPD category (sensitive, non-sensitive), legal basis, and data controller/processor relationship.
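In the actual tool the LLM does the tagging; as a rough illustration of what "classify by LGPD category" means, here is a toy keyword heuristic (the keyword list and function name are mine, not the project's):

```python
# Toy heuristic only: the real classification is done by the LLM.
# Keywords loosely track LGPD Art. 5, II (health, biometric, religious,
# ethnic data, etc.); this set is illustrative, not exhaustive.
SENSITIVE_KEYWORDS = {
    "health", "saude", "biometric", "religion", "religiao", "ethnic", "etnia",
}

def classify_field(field_name: str) -> str:
    lowered = field_name.lower()
    if any(keyword in lowered for keyword in SENSITIVE_KEYWORDS):
        return "sensitive"
    return "non-sensitive"
```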
🔍 DPIA (Data Protection Impact Assessment)
Generates a full DPIA report with risk scoring. Each identified risk gets a mitigation recommendation — all driven by the local LLM.
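A common DPIA scoring scheme multiplies likelihood by impact on a 1–5 scale. A hedged sketch of how such scoring might look (the thresholds are my own choice, not necessarily the project's):

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Score a risk on a 1-5 likelihood x 1-5 impact grid."""
    score = likelihood * impact
    if score >= 15:
        level = "high"
    elif score >= 6:
        level = "medium"
    else:
        level = "low"
    return score, level
```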
📝 DSR Management
Handles Data Subject Requests (access, deletion, correction, portability) with automated tracking and response templates.
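A DSR record needs at minimum a request type, a status, and a response deadline. A minimal sketch of such a record; the 15-day window for complete access responses comes from LGPD Art. 19, and applying it uniformly to every request type here is a simplification of mine:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class DSRType(Enum):
    ACCESS = "access"
    DELETION = "deletion"
    CORRECTION = "correction"
    PORTABILITY = "portability"

@dataclass
class DSRequest:
    subject_email: str
    kind: DSRType
    received: date
    status: str = "open"

    @property
    def due(self) -> date:
        # LGPD Art. 19 allows 15 days for a complete access response;
        # the same window is applied to all types here as a simplification.
        return self.received + timedelta(days=15)
```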
🛡️ Risk Assessment
Every audit produces a risk matrix: high / medium / low, with article-level LGPD references (e.g., "Art. 7 — legal basis missing").
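Building the matrix itself is just grouping findings by level. A small sketch using the article-reference format above (the `build_matrix` name and input shape are assumptions):

```python
from collections import defaultdict

def build_matrix(findings: list[dict]) -> dict:
    """Group findings into a high/medium/low risk matrix."""
    matrix = defaultdict(list)
    for f in findings:
        matrix[f["risk"]].append(f"{f['article']}: {f['issue']}")
    return dict(matrix)
```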
Why Local AI Matters for Compliance
Here's the irony most people miss: using a cloud AI tool to analyze your LGPD compliance is itself an LGPD risk.
When you paste your data flow descriptions into ChatGPT or Claude, you may be sending personal data or sensitive business information to a third-party processor without a proper DPA (Data Processing Agreement) in place.
LGPD Sentinel AI solves this by running the entire inference pipeline locally via Ollama. Your data never leaves your machine.
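For the curious, talking to Ollama is just a plain HTTP call to localhost. A minimal sketch assuming Ollama is running on its default port; the prompt wording and function names are mine, not the project's:

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing here leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_audit_prompt(flow_description: str) -> dict:
    """Build the payload for a single-shot, non-streaming completion."""
    return {
        "model": "llama3.1",
        "prompt": (
            "Classify the LGPD risk (high/medium/low) of this data flow "
            f"and cite the relevant articles:\n{flow_description}"
        ),
        "stream": False,
    }

def run_local_audit(flow_description: str) -> str:
    """Send the prompt to the local Ollama server (must be running)."""
    payload = json.dumps(build_audit_prompt(flow_description)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is localhost, the audit prompt and the model's answer stay on your machine end to end.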
Getting Started
```bash
# Clone the repo
git clone https://github.com/ldsjunior-ui/lgpd-sentinel-ai
cd lgpd-sentinel-ai

# Install Ollama (https://ollama.com), then pull the model
ollama pull llama3.1

# Install dependencies
pip install -r requirements.txt

# Run the API
python -m uvicorn src.main:app --reload --port 8000

# In another terminal, run the UI
streamlit run frontend/app.py
```
That's it. No paid API keys. No Docker required for basic usage. Works on any machine that can run Ollama.
What's Next
- [ ] PDF/DOCX document ingestion for automated mapping
- [ ] Multi-user support with role-based access
- [ ] ANPD notification templates
- [ ] Portuguese + English UI
Support the Project
If this is useful to you, there are a few ways to help:
- ⭐ Star the repo: https://github.com/ldsjunior-ui/lgpd-sentinel-ai
- 💬 Open issues / discussions — feature requests, bug reports, use cases
- 💙 GitHub Sponsors — tiers from $5/month to support continued development
This is a solo open-source project built to make LGPD compliance accessible to every Brazilian company, not just those who can afford a DPO consultant.
Built with FastAPI, Streamlit, Ollama, and a genuine frustration with expensive compliance tooling.