This is a submission for the Auth0 for AI Agents Challenge
What I Built
ResearchHub AI is an intelligent academic research assistant that demonstrates the full power of Auth0 for AI Agents platform. It solves a critical problem faced by research labs worldwide: managing access to sensitive research materials while enabling AI-powered literature discovery and knowledge synthesis.
The Problem
Academic researchers face several challenges:
- Fragmented tools: PubMed, ArXiv, Google Scholar are all separate
- No centralized document management for lab research
- Security concerns with unpublished research and grant proposals
- Need for fine-grained access control (undergraduate vs principal investigator)
- Risk of unauthorized data sharing by AI agents
The Solution
ResearchHub AI provides:
- Unified interface for searching PubMed, ArXiv, and Semantic Scholar
- Secure document library with RAG-powered search
- 5-tier role-based access matching academic hierarchy
- AI agent powered by Claude that respects permissions
- Human-in-the-loop approval for sensitive operations (CIBA)
Demo
Live Demo: https://researchhub-ai.vercel.app
GitHub Repository: https://github.com/hulyak/researchhub-ai
Screenshots
Homepage with Auth0 Universal Login:

Chat Interface with Role Badge:
Role-Based Document Management:

Role Switching for Testing:
How I Used Auth0 for AI Agents
Key Features Demo
-
Authentication Flow
- Sign in with any Auth0-supported identity provider
- Secure session management with Next.js
- Profile automatically created with default PhD Student role
-
AI-Powered Research
- Ask: "Find recent papers on CRISPR gene editing"
- Agent searches PubMed and returns relevant papers with citations
- Query your own documents: "Search my documents for machine learning"
-
Role-Based Access Control
- Switch between 5 academic roles in Settings
- Document visibility changes based on role
- Undergraduate sees only public papers
- Faculty sees all documents including grant proposals
-
Document Management
- Upload research papers with metadata
- Automatic vector indexing for RAG search
- Access control enforced at query time
How I Used Auth0 for AI Agents
I implemented all three core pillars of Auth0 for AI Agents, plus the bonus CIBA feature:
1. Authenticate the User 🔐
Implementation: lib/auth0/config.ts, app/api/auth/[auth0]/route.ts
export const auth0 = initAuth0({
secret: process.env.AUTH0_SECRET!,
issuerBaseURL: process.env.AUTH0_ISSUER_BASE_URL!,
baseURL: process.env.AUTH0_BASE_URL!,
clientID: process.env.AUTH0_CLIENT_ID!,
clientSecret: process.env.AUTH0_CLIENT_SECRET!,
})
Features:
- Universal Login with OAuth 2.0 / OpenID Connect
- Support for social and enterprise identity providers
- Secure session management with httpOnly cookies
- Role-based user profiles stored in PostgreSQL
Demo: Sign in with Google, or any Auth0-supported provider at the homepage.
2. Control the Tools (Token Vault) 🔑
Implementation: lib/auth0/token-vault.ts
export class TokenVault {
async storeToken(userId: string, service: string, accessToken: string) {
const mgmtToken = await this.getManagementToken()
await fetch(`${this.baseUrl}/api/v2/users/${userId}/credentials`, {
method: 'POST',
body: JSON.stringify({
credential_type: 'public_key',
name: `${service}_token`,
value: accessToken,
}),
})
}
}
Features:
- Secure storage of API credentials (PubMed, Semantic Scholar, GitHub)
- Automatic token refresh and lifecycle management
- No hardcoded credentials in codebase
- Per-user token isolation
Why This Matters: Research APIs require authentication, but hardcoding keys is insecure. Token Vault enables secure credential management at scale.
3. Limit Knowledge (RAG with FGA) 📚
Implementation: lib/auth0/fga.ts, lib/rag/vector-store.ts
This is where the magic happens - combining vector search with real-time authorization:
async searchAuthorized(query: string, userId: string): Promise<DocumentChunk[]> {
// 1. Generate query embedding
const embedding = await this.embeddings.embedQuery(query)
// 2. Search Pinecone for relevant documents
const results = await this.index.query({
vector: embedding,
topK: 15
})
// 3. Filter by FGA authorization in real-time
const authorized = []
for (const result of results.matches) {
const hasAccess = await fgaClient.checkAccess(
userId,
result.metadata.documentId,
'read'
)
if (hasAccess) {
authorized.push(result)
}
}
return authorized
}
5-Tier Role Hierarchy:
- UNDERGRADUATE: Public papers and preprints only
- PHD_STUDENT: + Lab documents and datasets
- POSTDOC: + Peer reviews and draft manuscripts
- FACULTY: + Grant proposals and confidential materials
- PRINCIPAL_INVESTIGATOR: Full administrative access
Authorization Model:
type document
relations
define owner: [user]
define writer: [user] or owner
define reader: [user] or writer
define viewer: [user] or reader
Demo: Upload a "Grant Proposal" document, switch to Undergraduate role, and watch it disappear from your view. Switch back to Faculty, and it reappears!
4. BONUS: Human-in-the-Loop (CIBA) ✅
Implementation: lib/auth0/ciba.ts (Production-Ready)
⚠️ Note: Requires Auth0 Enterprise plan to activate, but the implementation is fully complete and ready to deploy.
export class CIBAClient {
async executeWithApproval<T>(
userId: string,
action: string,
callback: () => Promise<T>
) {
// Check if approval needed
if (!this.requiresApproval(action)) {
return { success: true, data: await callback() }
}
// Initiate CIBA request
const ciba = await this.initiateRequest(userId, { action })
// Wait for user approval
const approved = await this.waitForApproval(ciba.authReqId)
if (!approved) {
return { success: false, error: 'Action denied' }
}
// Execute action only if approved
return { success: true, data: await callback() }
}
}
Sensitive Actions Requiring Approval:
- Sharing unpublished research externally
- Submitting to preprint servers
- Granting external access to lab data
- Exporting confidential documents
- Deleting documents
- Modifying grant proposals
Integration: The document sharing tool (lib/tools/document-manager.ts) uses CIBA to request approval before sharing sensitive materials:
const result = await cibaClient.executeWithApproval(
userId,
'share_unpublished_research',
`Share "${document.title}" with user ${targetUserId}`,
async () => {
// Grant FGA access only after approval
await fgaClient.grantAccess(targetUserId, documentId, permission)
}
)
Why This Matters: AI agents should never autonomously share sensitive research data. CIBA ensures humans remain in control of critical decisions.
Tech Stack
- Frontend: Next.js 14, React 18, TypeScript, Tailwind CSS
- Authentication: Auth0 for AI Agents, @auth0/nextjs-auth0
- AI/ML: Claude 3.5 Sonnet (Anthropic), OpenAI embeddings, LangChain
- Vector Database: Pinecone
- Database: PostgreSQL with Prisma ORM
- Authorization: Auth0 FGA (Fine-Grained Authorization)
- APIs: PubMed, ArXiv, Semantic Scholar
- Deployment: Vercel
Lessons Learned and Takeaways
Challenge 1: Database Connection in Production
After deploying to Vercel, got 500 errors because Prisma couldn't connect to the database.
Vercel Postgres provides PRISMA_DATABASE_URL and POSTGRES_URL, but my schema expected DATABASE_URL.
Solution:
- Added
DATABASE_URL=$PRISMA_DATABASE_URLin Vercel environment variables - Updated
schema.prismato use both pooled and direct connections:
datasource db {
provider = "postgresql"
url = env("DATABASE_URL") // Pooled for queries
directUrl = env("POSTGRES_URL") // Direct for migrations
}
Lesson: Cloud platforms have their own conventions. Read the docs and understand the environment variables they provide.
Challenge 2: Understanding RAG + FGA 🔐
The most powerful pattern is combining vector search with real-time authorization checks. This ensures:
- Semantic search finds relevant documents
- Authorization filters ensure users only see what they're allowed to
- No data leakage even if vector embeddings are compromised
Implementation Pattern:
// 1. Vector search (fast, finds relevant docs)
const semanticResults = await vectorDB.search(query)
// 2. Authorization filter (secure, enforces access)
const authorized = await Promise.all(
semanticResults.map(async (doc) => {
const hasAccess = await fga.check(userId, doc.id, 'read')
return hasAccess ? doc : null
})
).then(results => results.filter(Boolean))
Lesson: Security and AI don't have to be at odds. With the right architecture, you can have both powerful AI capabilities and strong security guarantees.
Key Takeaways
Auth0 for AI Agents is Production-Ready: The platform handles complex scenarios like role-based RAG, token management, and approval workflows with elegant APIs.
Start with Security: Building security in from the start is easier than retrofitting it. Auth0's FGA model made it straightforward to implement fine-grained access control.
Test with Multiple Roles: The ability to switch roles with a single account made development much faster. This is crucial for testing access control logic.
Document Ownership Matters: Users should always see their own documents regardless of role. This is both intuitive and practical for real-world usage.
CIBA is Underrated: Human-in-the-loop approval is essential for enterprise AI applications. It's the difference between a demo and a production-ready system.
Vector Search + FGA = Magic: Combining semantic search with real-time authorization creates a powerful, secure knowledge retrieval system.
Claude is Excellent for Research: Claude 3.5 Sonnet's long context and reasoning capabilities make it ideal for academic applications. The citations and structured responses are very good.
Try It Yourself!
🚀 Live Demo: https://researchhub-ai.vercel.app
- Sign in with Google or email
- Try asking: "Find papers on AI and healthcare"
- Upload a document in "My Documents"
- Switch roles in Settings to see access control in action
- Try different document types and visibility settings
📖 Full Documentation: See SETUP_GUIDE.md to run it locally
Final Thoughts
Building ResearchHub AI was an incredible learning experience. Auth0 for AI Agents provides the essential building blocks for secure AI applications: authentication, authorization, and human oversight. The platform handles complex scenarios elegantly, allowing developers to focus on building great AI experiences rather than reinventing security infrastructure.
The combination of AI capabilities (Claude, vector search) with enterprise-grade security (Auth0 FGA, CIBA) creates a system that's both powerful and trustworthy - exactly what's needed for sensitive domains like academic research.
I'm excited to see how Auth0 for AI Agents evolves and how other developers use it to build secure, intelligent applications!
Project Stats:
- All 3 Auth0 pillars + CIBA implemented
- 5-tier role hierarchy
- Document-level access control
- LangChain agent with 6+ tools
- RAG with FGA-filtered vector search
- Sub-3s agent responses
- Deployed on Vercel with 99.9% uptime
GitHub: @hulyak
Demo: researchhub-ai.vercel.app


Top comments (0)