This is a submission for the Algolia Agent Studio Challenge: Consumer-Facing Conversational Experiences
What I Built
Lorance is an AI-powered project intelligence assistant built to solve a problem I kept running into: important work lives inside messy project artifacts, but extracting clear answers and execution-ready tickets from them is harder than it should be.
Teams don’t lack documentation. They lack clarity.
PRDs, meeting notes, Slack threads, and design docs are full of decisions, assumptions, and implied work — but that information is fragmented and easy to misinterpret. Lorance turns those unstructured inputs into grounded answers and actionable tickets that teams can actually plan against.
On the surface, the experience is conversational: “Ask a question about the project.”
Under the hood, the output is intentionally structured and action-first.
With Lorance, users can:
- Ask direct questions about the project (“What tech are we using?”, “What’s blocked?”)
- Generate tickets with clear scope and acceptance criteria
- Edit and save tickets and documents in place
- See every answer grounded in the source material that produced it
The core idea is clarity you can trust. Lorance is deliberately constrained to what exists in the indexed documents. If something can’t be supported by a source, it won’t be invented.
Demo
Live Demo: https://lorance.vercel.app
Backend: https://lorance-production.up.railway.app/api/health
Repository: https://github.com/Tawe/Lorance
Typical flow:
- Upload project documents or notes
- Ask a question in the chat
- Receive a direct, source-grounded answer
- Generate or refine tickets in the side panel
Walkthrough of the Frontend Functionality:
A look at the Algolia Backend
How I Used Algolia Agent Studio
Lorance is conversational on the surface but intentionally action-first underneath. Rather than open-ended chat, users are presented with a structured view of what work now exists.
Algolia Agent Studio is foundational to this approach. The agent only reasons over content that Algolia has already identified as likely to contain action.
What I Indexed
I index document chunks and tickets into Algolia:
- PRDs, meeting notes, architecture docs, and chat logs
- Action-bearing text segments
- Ticket records with a consistent schema
Documents are indexed with an additional signal indicating how likely a segment is to contain actionable intent. At index time, Lorance scores for:
- Imperative language ("do", "follow up", "send")
- Future commitments ("I’ll", "we need to")
- Soft obligations ("someone should", "we might want to")
- Temporal markers ("by Friday", "before launch")
This signal directly influences retrieval and keeps the agent focused.
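The index-time signal described above can be sketched as a simple pattern-family scorer. This is a minimal illustration, not Lorance's actual implementation: the function name, pattern lists, and weights are all assumptions.

```typescript
// Hypothetical sketch of the index-time action-likelihood signal.
// Each pattern family from the post contributes a weight to a 0..1 score;
// the real patterns and weights in Lorance may differ.
const PATTERNS: { name: string; regex: RegExp; weight: number }[] = [
  { name: "imperative", regex: /\b(do|follow up|send|update|fix)\b/i, weight: 0.3 },
  { name: "commitment", regex: /\b(i'll|i will|we need to|we will)\b/i, weight: 0.3 },
  { name: "soft-obligation", regex: /\b(someone should|we might want to)\b/i, weight: 0.2 },
  { name: "temporal", regex: /\b(by \w+day|before launch|next week)\b/i, weight: 0.2 },
];

interface ActionSignal {
  score: number;    // 0..1, clamped sum of matched weights
  matched: string[]; // which pattern families fired
}

function scoreActionLikelihood(text: string): ActionSignal {
  const hits = PATTERNS.filter((p) => p.regex.test(text));
  const raw = hits.reduce((sum, p) => sum + p.weight, 0);
  return { score: Math.min(1, raw), matched: hits.map((p) => p.name) };
}
```

A sentence like "I'll send the report by Friday" fires the commitment, imperative, and temporal families, while purely descriptive text scores zero, which is what keeps the agent focused on action-bearing segments.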
Targeted Prompting
The Agent Studio prompt constrains the system to:
- Classify actions as explicit or implied
- Suggest ownership only when defensible
- Score confidence based on linguistic clarity
- Cite every action back to the retrieved source text
This grounding is what prevents hallucinated tasks and builds trust.
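One way to picture the constrained output is as a typed action record plus a grounding check. The shape below is my own sketch of what the prompt constraints imply; the field names and the `isGrounded` helper are hypothetical, not Lorance's actual schema.

```typescript
// Hypothetical shape of one extracted action, mirroring the prompt
// constraints: explicit/implied classification, defensible ownership,
// a confidence score, and a mandatory citation to retrieved text.
interface ExtractedAction {
  kind: "explicit" | "implied";
  text: string;
  owner: string | null; // null unless ownership is defensible
  confidence: number;   // based on linguistic clarity
  sourceQuote: string;  // verbatim citation from the retrieved chunk
}

// Reject any action whose citation cannot be found in its source chunk.
function isGrounded(action: ExtractedAction, chunkText: string): boolean {
  return action.sourceQuote.length > 0 && chunkText.includes(action.sourceQuote);
}
```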
Data & Accounts
Lorance is multi-tenant by design.
Authentication is handled through Firebase, and every user belongs to a single workspace derived from their Firebase identity. That workspace_id becomes the boundary for everything else in the system.
Every document and every ticket indexed in Algolia includes that workspace_id as an attribute. Retrieval is always scoped to the active workspace, which means:
- Users only ever see their own documents and tickets
- The agent reasons exclusively over workspace-owned data
- There’s no cross-project or cross-account leakage
- Ownership and visibility are enforced at the retrieval layer, not just the UI
The agent doesn’t need to “know” about permissions; it only ever receives data it’s allowed to reason over.
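Enforcing that boundary at the retrieval layer can be as simple as a server-side helper that welds the workspace filter onto every query. This is a sketch under my own assumptions; `withWorkspaceScope` is a hypothetical name, though the filter uses Algolia's standard `attribute:"value"` syntax.

```typescript
// Hypothetical server-side helper: every Algolia query gets the
// workspace_id filter attached before it leaves the backend, so a
// client can never widen its own scope.
function withWorkspaceScope(workspaceId: string, userFilters?: string): string {
  const scope = `workspace_id:"${workspaceId}"`;
  // AND-combine with any caller-supplied filters, never replace the scope.
  return userFilters ? `(${userFilters}) AND ${scope}` : scope;
}
```

The key design point is that the scope is composed on the server, after authentication, so even a tampered request body can only narrow the result set, never broaden it.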
Why Fast Retrieval Matters
This product only works if retrieval is fast, scoped, and reliable.
Algolia’s sub‑500ms retrieval enables a tight loop:
- Ask a question
- Retrieve relevant context
- Generate a grounded answer or ticket
- Refine immediately
That speed matters because it keeps users in a thinking flow, not a waiting one. If retrieval is slow or noisy, the whole experience collapses into summaries and guesses instead of clarity.
Fast, contextual retrieval allows Lorance to:
- Answer questions over large document sets in real time
- Generate tickets without long analysis delays
- Update answers immediately after documents or tickets change
- Make it obvious when the documents simply don’t contain an answer
This isn’t about replacing human judgment. It’s about surfacing work clearly and quickly enough that judgment can actually happen.
Notable Implementation Details
A few design choices were important to get right if this was going to behave like a real system, not a demo.
Workspace Isolation, Enforced End‑to‑End
Lorance is multi‑tenant by default.
Every document and every ticket carries a workspace_id derived from the authenticated Firebase user. That workspace identifier is:
- attached at write time
- required on every read
- enforced in backend filters, not just the UI
The agent never sees data outside the active workspace. Ownership and visibility are retrieval‑level guarantees, not assumptions.
Scoped Algolia Search Keys
Rather than exposing a global search key, the backend issues a scoped Algolia key per user/workspace.
That key:
- restricts access to specific indices
- enforces filtering by workspace_id
- prevents cross‑tenant access even if a request is tampered with
Search isolation is enforced by Algolia itself, not just application logic.
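For illustration, here is how such a secured key can be derived, following the scheme Algolia documents for secured API keys (HMAC-SHA256 of the URL-encoded restrictions with the parent search key, concatenated with those restrictions, then base64-encoded). In practice you would use the Algolia client's built-in `generateSecuredApiKey`; the index names and parameter values below are assumptions, not Lorance's actual configuration.

```typescript
import { createHmac } from "crypto";

// Sketch of secured-key derivation per Algolia's documented scheme.
// The parent key is a placeholder; index names are assumed.
function generateScopedKey(parentSearchKey: string, workspaceId: string): string {
  const restrictions = new URLSearchParams({
    filters: `workspace_id:"${workspaceId}"`,
    restrictIndices: "documents,tickets", // assumed index names
  }).toString();
  const signature = createHmac("sha256", parentSearchKey)
    .update(restrictions)
    .digest("hex");
  // Algolia validates the embedded restrictions against the signature
  // server-side, so the filter cannot be stripped or altered client-side.
  return Buffer.from(signature + restrictions).toString("base64");
}
```

Because the `workspace_id` filter is baked into the signed key itself, isolation holds even if the key leaks into client code: Algolia rejects any request whose parameters conflict with the embedded restrictions.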
Query‑First Retrieval
Every answer starts with query‑focused retrieval.
The system first searches for content directly relevant to the user’s question. Only if nothing meaningful is found does it fall back to a broader workspace search.
This keeps responses focused and makes it obvious when the documents don’t contain an answer.
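The fallback strategy can be sketched with a pluggable search function, which also makes the logic testable without Algolia. The real version is presumably async against the Algolia client; this synchronous version, with the hypothetical name `retrieveContext`, just shows the control flow.

```typescript
// Sketch of query-first retrieval with a broader fallback, as described above.
interface Hit { objectID: string; text: string }
type SearchFn = (query: string) => Hit[];

function retrieveContext(
  search: SearchFn,
  question: string,
  minHits = 1,
): { hits: Hit[]; usedFallback: boolean } {
  // First pass: content directly relevant to the user's question.
  const focused = search(question);
  if (focused.length >= minHits) return { hits: focused, usedFallback: false };
  // Fallback: broader workspace-wide search (empty query).
  return { hits: search(""), usedFallback: true };
}
```

Surfacing `usedFallback` to the answer layer is what lets the UI say, honestly, that nothing directly relevant was found.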
Structured Answer Contract (with Repair)
Agent Studio responses are required to conform to a strict JSON schema.
Malformed output is sanitized where possible (trailing commas, malformed arrays), required fields are repaired when safe, and unrecoverable responses fail gracefully.
The system prefers no answer over a misleading one.
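That parse-sanitize-or-fail contract can be sketched in a few lines. The trailing-comma repair here is deliberately minimal; Lorance's actual sanitizer presumably handles more cases (malformed arrays, missing required fields), and `parseAgentResponse` is a name of my own invention.

```typescript
// Sketch of the structured-answer contract: parse strictly, attempt one
// safe repair, and return null rather than a misleading object.
function parseAgentResponse(raw: string): Record<string, unknown> | null {
  try {
    return JSON.parse(raw);
  } catch {
    // Common LLM artifact: trailing commas before } or ].
    const repaired = raw.replace(/,\s*([}\]])/g, "$1");
    try {
      return JSON.parse(repaired);
    } catch {
      return null; // prefer no answer over a misleading one
    }
  }
}
```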
Ticket Validation Pipeline
Generated tickets don’t go straight to storage.
They pass through a validation layer that:
- normalizes fields to a canonical shape
- fills required structural gaps
- records validation results for inspection
This keeps ticket output consistent, predictable, and safe to use downstream.
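A minimal version of that validation layer might look like this. The canonical ticket fields, defaults, and the `validateTicket` name are assumptions for illustration; the important pattern is that every repair is recorded rather than applied silently.

```typescript
// Hypothetical canonical ticket shape and normalization pass: fill
// structural gaps with safe defaults and keep an audit trail of repairs.
interface Ticket {
  title: string;
  description: string;
  acceptanceCriteria: string[];
  confidence: number; // 0..1
}

interface ValidationResult {
  ticket: Ticket;
  repairs: string[]; // what was filled or coerced, for inspection
}

function validateTicket(input: Partial<Ticket>): ValidationResult {
  const repairs: string[] = [];
  const ticket: Ticket = {
    title:
      input.title?.trim() ||
      (repairs.push("title: defaulted"), "Untitled ticket"),
    description:
      input.description ?? (repairs.push("description: defaulted"), ""),
    acceptanceCriteria: Array.isArray(input.acceptanceCriteria)
      ? input.acceptanceCriteria
      : (repairs.push("acceptanceCriteria: coerced to []"), []),
    confidence:
      typeof input.confidence === "number"
        ? Math.min(1, Math.max(0, input.confidence)) // clamp to 0..1
        : (repairs.push("confidence: defaulted to 0"), 0),
  };
  return { ticket, repairs };
}
```

Storing the `repairs` list alongside the ticket is what makes the pipeline inspectable: a ticket that needed heavy normalization is a signal worth surfacing, not hiding.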
Lorance treats action clarity as a first‑class problem, not a side effect of note‑taking or chat. Algolia Agent Studio makes it possible to build something fast, grounded, and explainable, which, in my experience, is what real teams actually need.