Introduction: Turning Legal Complexity Into Clarity
In fast-paced litigation, legal teams often find themselves drowning in witness statements, deposition transcripts, and conflicting accounts. Parsing hundreds of pages to reconstruct an accurate event timeline isn't just tedious; it's a critical task where a single missed detail can cost a case.
At Hack the Law Cambridge, Momen participated as a sponsor and mentor, guiding participants in exploring how no-code and AI can tackle these kinds of legal challenges. As part of our workshop, we built a working example: an AI-powered Event Timeline Generator designed to help litigation associates and paralegals automate one of their most repetitive and time-consuming workflows.
[👉 See the full demo here.]
This project wasn’t meant to be a polished product—it was a creative, fast-paced proof of concept designed to spark ideas and show how generative AI + structured workflows can unlock new possibilities in legal tech.
Who This Demo Was Built For
This project was created for:
Hackathon and workshop participants
No-code builders exploring legal tech
Anyone curious about combining AI with structured data workflows
By turning a real-world challenge into a buildable, interactive demo, we showed how generative AI + no-code can make legal tech more approachable—even if you're not a lawyer or developer.
Why Build This in Momen (and Not Just Use ChatGPT)?
You might be wondering: Can’t I just paste my transcripts into ChatGPT or Gemini and ask for a timeline?
Here’s why that doesn’t cut it for serious legal workflows—and why we built a full app in Momen instead:
Tailored Workflows Beat One-Off Prompts
LLMs like GPT and Gemini are great for generating text—but they lack structure, repeatability, and reliability.
The timeline tool built with Momen guides users through a structured process:
Ingest statements from multiple witnesses
Trigger consistent AI analysis
Automatically visualize contradictions
This is not a one-off prompt — it’s a repeatable, reliable workflow that anyone on the legal team can use.
Storage and Retrieval Matter
Momen stores everything in a visual database—events, timestamps, source quotes, even contradictions.
That means you’re not just getting an answer—you’re building a mini-system that saves context, supports collaboration, and can be expanded later.
This was a key learning moment for participants: AI outputs are just the start—how you organize and persist those insights matters even more.
User-Friendly, Not Engineer-Only
Instead of writing complex prompts or instructions every time, users simply click buttons: a visualized timeline is generated and conflicts are detected automatically.
It’s designed for litigation associates and paralegals, not prompt engineers.
Multi-Modal Capabilities with Gemini 2.5
Momen supports advanced models like Gemini 2.5, which can analyze:
Text
Images
Even videos
This opens up powerful future use cases involving deposition videos or CCTV clips.
This isn’t just a chatbot — it’s a domain-specific AI application that brings structure, persistence, and scale to a high-stakes legal workflow.
What the App Can Do: Core Features
Key Features:
Raw Text/Video Ingestion: Upload or paste unstructured witness statements, or upload video evidence.
AI-Powered Timeline Generation: Automatically extracts key events and timestamps using Gemini 2.5.
Conflict Detection: Highlights inconsistent testimonies side-by-side.
Interactive Timeline Viewer: Explore key moments with exact quotes and references.
Structured Data Storage: Events are saved in a structured format in the database for later use.
Under the Hood: How It’s Built with Momen
Structuring the Data
We created six interconnected tables for this project.

Statement: raw input from the user
Analysis: an ID that connects everything from one analysis
Timeline_event: structured timeline data
Event_evidence: the original quotes supporting each event
Conflict: flagged inconsistencies between statements
Event_in_conflict: timeline events that are involved in a conflict
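Momen models these tables visually, but for readers who prefer to think in code, here is a minimal sketch of how the records might be typed. The field names are assumptions made for illustration, not Momen's actual column names.

```typescript
// Hypothetical shapes for the six tables. Field names are illustrative
// assumptions, not Momen's actual schema.
interface Analysis {
  id: string;                 // groups all statements and results for one case
  createdAt: string;
}

interface Statement {
  id: string;
  analysisId: string;         // links the statement to its analysis
  rawText: string;            // the witness statement as entered
}

interface TimelineEvent {
  id: string;
  analysisId: string;
  statementId: string;        // which statement this event was extracted from
  witnessName: string;
  description: string;
  timeReference: string;      // e.g. "around 9:30 pm"
}

interface EventEvidence {
  id: string;
  eventId: string;
  quote: string;              // original quote backing the event
}

interface Conflict {
  id: string;
  analysisId: string;
  summary: string;            // plain-language description of the inconsistency
}

interface EventInConflict {
  conflictId: string;         // join table: which events a conflict involves
  eventId: string;
}
```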
The AI-Powered Backend
Two AI Agents, Two Specialized Roles
Two AI agents do the heavy lifting, triggered by Actionflows (backend workflows):

timeline_extractor (ChatGPT-4o)
Processes each statement and extracts events, timestamps, and witness names, outputting them in a structured format.

conflict_detector (Gemini 2.5)
Compares statements and flags discrepancies, then links each one to its timeline event.
Tech Highlight: By using two different AI models for separate tasks, we optimized performance: ChatGPT-4o for structured parsing, and Gemini 2.5 for cross-statement comparison.
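To make the split concrete, here is an illustrative example of what the two agents might return. The statement text, field names, and IDs are invented for this sketch; the actual output format is whatever the agents are configured to produce.

```typescript
// Illustrative (assumed) outputs for the two agents.
// Example input: Witness A says the defendant left the warehouse "around 9:30 pm",
// while Witness B recalls seeing them there at 10:15 pm.
const extractedEvents = [
  {
    witnessName: "Witness A",
    description: "Defendant left the warehouse",
    timeReference: "around 9:30 pm",
  },
  {
    witnessName: "Witness B",
    description: "Defendant seen inside the warehouse",
    timeReference: "10:15 pm",
  },
];

const detectedConflicts = [
  {
    summary: "Witness A and Witness B disagree on whether the defendant was at the warehouse after 9:30 pm",
    conflictingEventIds: ["event_a1", "event_b1"], // hypothetical timeline_event ids
  },
];
```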
Actionflows That Orchestrate AI
We built two main Actionflows:
Events extractor

When the user clicks the "Generate Timeline" button:

1. Create an Analysis ID: a new entry is inserted into the Analysis table. This ID groups all statements and results for one case.
2. Insert Statements: the two input fields are saved as separate records in the Statement table, both linked to the same analysis_id.
3. Trigger the process_statements Actionflow: this Actionflow retrieves all statements linked to the current analysis_id and, for each statement, calls the insert_events Actionflow.
4. Trigger the insert_events Actionflow (one per statement): for each statement, it calls the timeline_extractor AI agent (powered by ChatGPT-4o). The AI parses the content and returns a list of structured timeline events:
   - Witness name
   - Event description
   - Timestamp or time reference
5. These events are saved in the timeline_event table, each linked back to the original statement and the current analysis.
📌 Technical Highlight: This multi-step flow automates AI processing for multiple inputs and stores the results in real time — without writing a single line of code.
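In Momen this chain is wired together visually, but a rough TypeScript sketch of the same logic may help clarify the data flow. Every helper here (`fetchStatements`, `callTimelineExtractor`, `insertTimelineEvent`) is a hypothetical stand-in for an Actionflow node, not a real Momen API.

```typescript
// Hypothetical stand-ins for the Actionflow nodes, declared so the sketch type-checks.
declare function fetchStatements(
  analysisId: string
): Promise<Array<{ id: string; rawText: string }>>;
declare function callTimelineExtractor(
  rawText: string
): Promise<{ events: Array<{ witnessName: string; description: string; timeReference: string }> }>;
declare function insertTimelineEvent(row: Record<string, unknown>): Promise<void>;

// Sketch of process_statements: loop over statements, extract events, persist them.
async function processStatements(analysisId: string): Promise<void> {
  const statements = await fetchStatements(analysisId);          // all statements for this analysis
  for (const statement of statements) {
    // insert_events: ChatGPT-4o parses one statement into structured events
    const { events } = await callTimelineExtractor(statement.rawText);
    for (const event of events) {
      await insertTimelineEvent({
        analysis_id: analysisId,
        statement_id: statement.id,                              // link back to the source statement
        ...event,
      });
    }
  }
}
```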
Conflicts detector

When the user clicks the "Detect Conflicts" button:

1. Trigger the insert_conflicts Actionflow: this Actionflow collects all events from the current analysis and passes them to the conflict_detector AI agent (powered by Gemini 2.5). The agent analyzes overlapping timelines and flags conflicting events.
2. Format & Store Conflict Data: a custom code block parses the AI response, inserts new records into the Conflict and Event_in_conflict tables, and links each conflict to the relevant timeline events and statements.
3. Trigger UI Update in Real Time: because the frontend List component is subscribed to the timeline_event table, events with conflicts are updated immediately. The timeline shows red markers for conflicting points, and related quotes and evidence appear in the sidebar.
📌 Technical Highlight: By combining Gemini’s advanced reasoning with Momen’s structured data handling, the app provides real-time legal insights with complete traceability.
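The custom code block step could look roughly like the following. Again, the helper functions and the response shape are assumptions used to show the flow, not Momen's actual code-block API.

```typescript
// Hypothetical helpers standing in for database actions inside the Actionflow.
declare function fetchEvents(
  analysisId: string
): Promise<Array<{ id: string; description: string; timeReference: string }>>;
declare function callConflictDetector(events: unknown[]): Promise<string>; // Gemini 2.5 returns JSON text
declare function insertConflict(row: { analysisId: string; summary: string }): Promise<{ id: string }>;
declare function insertEventInConflict(row: { conflictId: string; eventId: string }): Promise<void>;

// Sketch of insert_conflicts: compare events, then persist and link each conflict.
async function detectConflicts(analysisId: string): Promise<void> {
  const events = await fetchEvents(analysisId);                  // 1. collect all events for this analysis
  const raw = await callConflictDetector(events);                // 2. Gemini 2.5 compares overlapping timelines
  const { conflicts } = JSON.parse(raw) as {
    conflicts: Array<{ summary: string; conflictingEventIds: string[] }>;
  };

  for (const conflict of conflicts) {                            // 3. store conflicts and link the events involved
    const { id: conflictId } = await insertConflict({ analysisId, summary: conflict.summary });
    for (const eventId of conflict.conflictingEventIds) {
      await insertEventInConflict({ conflictId, eventId });
    }
  }
}
```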
How the Frontend Talks to the Backend
Every user interaction — from entering a statement to detecting conflicts — is tied directly to backend Actionflows. Here’s how it works:
Page 1: Submit & Generate Timeline
Input Fields
Two simple text inputs for witness statements.

"Generate Timeline" Button
This button kicks off the entire backend chain:
- Triggers an insert into the Analysis table to create a new session.
- Uses a batch mutation to insert both statements into the Statement table.
- Fires the process_statements Actionflow, which runs AI analysis and stores structured timeline events.
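Expressed as code, the button's configuration boils down to three data actions. The helper names below are hypothetical; in Momen they correspond to visually configured inserts, a batch mutation, and an Actionflow trigger.

```typescript
// Hypothetical stand-ins for the button's visually configured data actions.
declare function insertRow(table: string, row: object): Promise<{ id: string }>;
declare function batchInsert(table: string, rows: object[]): Promise<void>;
declare function triggerActionflow(name: string, args: object): Promise<void>;

// Sketch of what clicking "Generate Timeline" kicks off.
async function onGenerateTimeline(statementA: string, statementB: string): Promise<void> {
  const analysis = await insertRow("analysis", {});              // 1. new session in the Analysis table
  await batchInsert("statement", [                               // 2. batch mutation: both statements at once
    { analysis_id: analysis.id, raw_text: statementA },
    { analysis_id: analysis.id, raw_text: statementB },
  ]);
  await triggerActionflow("process_statements", {                // 3. run AI analysis and store events
    analysis_id: analysis.id,
  });
}
```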
Page 2: Interactive Timeline Viewer
Timeline List View
Built with Momen's List component, this view shows:
- Each timeline event
- Its source quote or evidence
- Real-time updates (subscribed to the timeline_event table)

Conflict Visualization
- Events flagged by the conflict detector turn red
- The right sidebar displays conflict-specific quotes and who said what

"Detect Conflicts" Button
Triggers the insert_conflicts Actionflow, which:
- Calls Gemini 2.5 via the conflict_detector agent
- Updates the database with conflict metadata
- Instantly reflects changes in the UI via conditional views
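The conflict visualization is essentially a conditional rendering rule on each list item. A simplified sketch of that rule, with assumed field names, might look like this:

```typescript
// Assumed shape of one row the List component receives from the timeline_event subscription.
interface TimelineRow {
  id: string;
  description: string;
  timeReference: string;
  conflictIds: string[];      // filled in once the conflict detector has run
  evidenceQuotes: string[];   // related quotes shown as evidence
}

// Sketch of the conditional view: red marker and sidebar quotes only when a conflict exists.
function renderTimelineRow(row: TimelineRow): { markerColor: "red" | "default"; sidebarQuotes: string[] } {
  const hasConflict = row.conflictIds.length > 0;
  return {
    markerColor: hasConflict ? "red" : "default",
    sidebarQuotes: hasConflict ? row.evidenceQuotes : [],
  };
}
```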

Inspiring the Next Wave of Legal AI Builders
This project was completed in just two days in a hackathon setting. Despite the tight timeframe, it demonstrated real-time processing of multiple pages of legal statements, extracting a structured timeline in under 30 seconds.
The total cost to build this prototype using Momen was approximately $99. This included two AI agents running different LLM models (ChatGPT-4o and Gemini 2.5), as well as storage, UI, and backend logic, making it a practical option for teams that need to build internal tools or domain-specific SaaS products quickly. While the demo focused on legal use cases, the same approach could easily apply in fields like healthcare, insurance, or automotive services, where accuracy, traceability, and speed of analysis are equally critical.
And more importantly: this project isn't about replacing legal professionals—it’s about inspiring builders to explore how AI can support them with smarter, faster tools.
FAQ
What problem does the Event Timeline Generator solve?
It automates the manual process of reading through multiple witness statements or transcripts to build a factual timeline and identify contradictions. This helps litigation teams work faster and avoid missing key inconsistencies in testimonies.
Why not just use ChatGPT or Gemini directly?
While LLMs are powerful, legal workflows need structure, consistency, and memory. Momen provides a repeatable process: you can upload multiple statements, generate timelines, detect conflicts, and store results in a structured, searchable format—all without writing prompts every time.
What AI models were used in the demo?
The app uses two specialized AI agents:
- timeline_extractor, powered by ChatGPT-4o, which extracts events and timestamps.
- conflict_detector, powered by Gemini 2.5, which compares statements to find contradictions.
How was the app built without writing code?
The entire app was built using Momen’s no-code platform, which includes visual database modeling, backend workflows (Actionflows), frontend logic, and AI integration. Buttons, inputs, and AI actions are all configured visually.
Can this handle videos or images too?
Yes, Momen supports multi-modal models like Gemini 2.5, which opens the door to processing video evidence like CCTV or deposition footage in future versions.
How long did the project take to build?
The demo was built in two days during a hackathon. Despite the short timeline, it successfully handled real-time processing of multiple pages of statements.
