Duncan Maina
Building a Localized Legal Assistant with Angular, Firebase & Gemini

Building AI applications for the legal sector presents a unique set of engineering challenges. General-purpose LLMs are notorious for hallucinating laws or confidently quoting United States federal law to users living in Nairobi or Kampala.

To solve this, I built SheriaSenseEA, a localized AI legal assistant specifically engineered for East Africa (Kenya, Uganda, and Tanzania).

Rather than relying on generic chat completion, this architecture utilizes strict System Prompting, forced frontend context injection, and multimodal document analysis to keep the AI strictly within the bounds of East African constitutional law.

Here is a deep dive into how it is built using Angular, Firebase Cloud Functions, and the Gemini 2.5 Pro model.

The Architecture of Constraint

The biggest risk in LegalTech is the AI giving advice outside its jurisdiction or acting as a general-purpose chatbot. To prevent this, SheriaSenseEA employs a two-tier constraint system: Frontend Context Injection and Backend Refusal Protocols.

1. Frontend Context Injection

Users switch between countries (Kenya, Uganda, Tanzania) in the Angular UI. Instead of trusting the LLM to remember this setting, the Angular service intercepts every message and forcibly prepends a hidden system directive before it reaches the network layer.

// Inside chat.component.ts
async sendMessage() {
  let finalPrompt = this.userInput;
  const country = this.geminiService.selectedCountry; // e.g., 'Kenya'

  // Forced Context Injection
  finalPrompt = `[Context: The user is located in ${country}. Answer strictly according to ${country} laws.] ${finalPrompt}`;

  if (this.isSwahili) {
    finalPrompt += " (Reply in fluent Swahili, use simple legal terms)";
  }

  // Send the payload to Firebase
  const response = await this.geminiService.chatWithGemini(finalPrompt);
}


This ensures that even if a user simply types "What are my rights if arrested?", the LLM receives "[Context: The user is located in Kenya...] What are my rights if arrested?" and can answer with the relevant constitutional clauses.
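The injection step can also be factored into a small pure function (hypothetical names, not from the repo), which keeps `sendMessage()` thin and makes the logic trivially unit-testable:

```typescript
// Hypothetical helper mirroring the injection logic in chat.component.ts.
function injectCountryContext(userInput: string, country: string, inSwahili: boolean): string {
  let prompt = `[Context: The user is located in ${country}. Answer strictly according to ${country} laws.] ${userInput}`;
  if (inSwahili) {
    prompt += " (Reply in fluent Swahili, use simple legal terms)";
  }
  return prompt;
}
```

Because the function is pure, a test can assert the exact prompt shape without mocking any Angular services.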

2. The Backend Refusal Protocol

The Firebase Cloud Function acts as the secure gatekeeper holding the Gemini API key. Here, we instantiate GoogleGenerativeAI with a highly aggressive System Instruction.

Notice the Strict Mandate and Refusal Protocol. We explicitly teach the LLM what not to answer, which blunts prompt-injection attempts aimed at getting the bot to discuss politics or foreign laws.

import { onRequest } from "firebase-functions/v2/https";
import { GoogleGenerativeAI } from "@google/generative-ai";

export const getLegalAdvice = onRequest({ cors: true }, async (req, res) => {
  const { prompt, country, file } = req.body;
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!); // key provided via Firebase environment config

  const SYSTEM_PROMPT = `
  You are SheriaSenseEA, a specialized legal assistant.
  Prioritize the Constitution and Laws of **${country}**.

  **YOUR STRICT MANDATE:**
  1. **SCOPE:** Deal ONLY with the Laws of Kenya, Uganda, and Tanzania.
  2. **REFUSAL PROTOCOL:** If a user asks about:
     - Politics (unrelated to law)
     - General Knowledge 
     - Laws of countries outside East Africa (e.g., US Law)

     **YOU MUST REPLY:** "My expertise is strictly limited to legal matters in East Africa. I cannot answer questions about that topic."

  **DISCLAIMER:** End EVERY message with: "**Disclaimer: This is for educational purposes only. Consult a qualified Advocate.**"
  `;

  const model = genAI.getGenerativeModel({ 
    model: "gemini-2.5-pro", 
    systemInstruction: SYSTEM_PROMPT,
  });

  // Execution logic...
});

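System prompts are probabilistic, so even a hard mandate like the disclaimer can occasionally be dropped by the model. A cheap defensive step (a hypothetical helper, not shown in the original function) is to enforce it in code before the response leaves the backend:

```typescript
// Hypothetical server-side guard: append the mandated disclaimer if the model omitted it.
const DISCLAIMER =
  "**Disclaimer: This is for educational purposes only. Consult a qualified Advocate.**";

function ensureDisclaimer(reply: string): string {
  return reply.includes(DISCLAIMER) ? reply : `${reply}\n\n${DISCLAIMER}`;
}
```

This way the legal-safety requirement is guaranteed by code, not just by the prompt.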

Multimodal Document Analysis

A legal assistant isn't just about answering questions; it needs to review contracts and lease agreements.

When a user uploads a PDF or image, the Angular frontend converts it into a Base64 string. The Firebase backend reconstructs this into an inlineData object that Gemini's multimodal input can process.

// Constructing the multimodal payload in Node.js
const contentParts: any[] = [prompt];

// If the Angular frontend sent a Base64 file, append it to the LLM context
if (file && file.data && file.mimeType) {
  contentParts.push({
    inlineData: {
      data: file.data,
      mimeType: file.mimeType
    }
  });
}

// Pass the combined array (Text + Document) to Gemini
const result = await model.generateContent(contentParts);

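The shape of that inlineData part is easy to get wrong (raw bytes instead of Base64, or leaving the `data:<mime>;base64,` prefix in). A small builder keeps it honest; this is a hypothetical illustration using Node's Buffer, while on the Angular side FileReader.readAsDataURL plays the same role once the prefix is stripped:

```typescript
// Hypothetical helper: build a Gemini inlineData part from raw bytes.
// Gemini expects plain Base64 with no "data:<mime>;base64," prefix.
function toInlineDataPart(bytes: Uint8Array, mimeType: string) {
  return {
    inlineData: {
      data: Buffer.from(bytes).toString("base64"),
      mimeType,
    },
  };
}

// Example: the magic bytes of a PDF ("%PDF-1.4")
const part = toInlineDataPart(new TextEncoder().encode("%PDF-1.4"), "application/pdf");
// part.inlineData.data === "JVBERi0xLjQ="
```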

Native Browser Accessibility

To ensure the app is accessible to users who may not be comfortable typing long legal queries, I integrated the native Web Speech API directly into Angular.

This provides zero-cost, client-side Voice-to-Text and Text-to-Speech (TTS), with locale hints (en-KE, sw-KE) for East African English and Swahili.

// Voice Recognition (Speech-to-Text)
// Note: webkitSpeechRecognition is a Chrome/WebKit-prefixed API and is not available in every browser.
const { webkitSpeechRecognition }: any = window;
this.recognition = new webkitSpeechRecognition();
this.recognition.lang = this.isSwahili ? 'sw-KE' : 'en-KE';

this.recognition.onresult = (event: any) => {
  this.zone.run(() => {
    this.userInput = event.results[0][0].transcript;
  });
};

// Reading Legal Advice Aloud (Text-to-Speech)
speak(text: string) {
  const cleanText = text.replace(/[*#]/g, ''); // Strip markdown
  const utterance = new SpeechSynthesisUtterance(cleanText);
  window.speechSynthesis.speak(utterance);
}

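The single-regex strip handles bold and headings, but Gemini replies often contain backticks, underscores, and markdown links as well. A slightly broader cleaner (a hypothetical extension, not the repo's implementation) keeps the spoken output natural:

```typescript
// Hypothetical replacement for the inline regex in speak():
// keeps link text, drops URLs, and strips common markdown markers.
function cleanForSpeech(markdown: string): string {
  return markdown
    .replace(/\[([^\]]*)\]\([^)]*\)/g, "$1") // [text](url) -> text
    .replace(/[*#_`~]/g, "")                 // emphasis / heading markers
    .replace(/\s+/g, " ")
    .trim();
}
```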

By dynamically swapping between sw-KE (Swahili, Kenya) and en-KE (English, Kenya), the dictation engine captures local accents and legal terminology far more reliably than a default en-US locale would.

Conclusion

Building LegalTech requires a defensive engineering mindset. By combining Angular's service layer for context injection with Firebase Cloud Functions and Gemini's strict system prompting, SheriaSenseEA delivers localized, multimodal legal guidance while sharply reducing the risk of out-of-jurisdiction hallucinations.

Live Demo: https://sheriasense-app.web.app/

GitHub Repo: https://github.com/Maina-Duncan/SheriaSense-EA
