AI✧Debate

AssemblyAI Voice Agents Challenge: Domain Expert

This is a submission for the AssemblyAI Voice Agents Challenge

What I Built

Ever wished for a partner to sharpen your arguments?

Enter the AI Debate Room to meet an AI opponent that responds in real time with domain-expert, topic-aware counter-arguments.

AI Debate uses semantic similarity to match your arguments against a living knowledge base of philosophical ideas. Relevant content is retrieved, assembled into context, and an LLM crafts a concise rebuttal from the opposing view.
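
Under the hood, the loop looks roughly like the sketch below (a minimal illustration, not the project's actual code; the example chunks and prompt are placeholders):

# Minimal retrieve-then-generate sketch: embed the user's argument, pull the
# closest knowledge-base chunks from FAISS, and ask the LLM for a rebuttal
# grounded in that context. The chunks stand in for the real knowledge base.
import faiss
from openai import OpenAI
from sentence_transformers import SentenceTransformer

chunks = [
    "Utilitarianism judges actions by their consequences for overall well-being.",
    "Kantian ethics holds that some duties bind regardless of outcomes.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(chunks, convert_to_numpy=True)
faiss.normalize_L2(embeddings)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

def rebut(argument: str, k: int = 2) -> str:
    # Embed the argument, retrieve the k most similar chunks, and build the context
    query = model.encode([argument], convert_to_numpy=True)
    faiss.normalize_L2(query)
    _, ids = index.search(query, k)
    context = "\n\n".join(chunks[i] for i in ids[0])
    response = OpenAI().chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Rebut the user's argument from the opposing philosophical view, using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nArgument: {argument}"},
        ],
    )
    return response.choices[0].message.content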


Demo

📚 RAG-Powered Debates: AI responses backed by philosophical knowledge base
🔍 Source Attribution: AI responses are linked to their philosophical sources
🧠 Contextual Arguments: AI draws from curated philosophical content
📖 Argument Preparation: Direct access to philosophical concepts


GitHub Repository

nadinev6 / AIDebate

Built for the AssemblyAI Hackathon

AI Debate

Your Real-Time AI Opponent for Philosophical Arguments

AI Debate is a real-time, browser-based platform where you can argue your ideas and the AI pushes back with sharp, curated philosophical counter-arguments, powered by a structured knowledge base and Retrieval-Augmented Generation (RAG).

The aim was to create an AI opponent that evolves with you, sharpening its arguments and, in turn, your intellect, the longer you engage.

🚀 Current Status

The project is fully functional, with all core features for real-time, RAG-powered philosophical debates implemented.

🏗️ Architecture

ai-debate-partner/
├── backend/                 # Python FastAPI backend
│   ├── main.py             # FastAPI server and routes
│   ├── config.py           # Configuration management
│   ├── agents/             # LiveKit agents for real-time voice
│   ├── requirements.txt    # Python dependencies
│   └── __init__.py         # Package initialisation
├── frontend/               # HTML/CSS/JavaScript frontend
│   ├── index.html         # Main application page
│   ├── style.css          # Responsive styles
│   ├── script.js

Technical Implementation & AssemblyAI Integration

This agent connects to LiveKit rooms and provides real-time AI debate responses. It integrates AssemblyAI for speech-to-text, the existing RAG system for generating philosophical counter-arguments, and Cartesia for text-to-speech.

♒︎ Processing Flow

  • AssemblyAI for speech-to-text
  • LiveKit for real-time audio streaming
  • FAISS for vector similarity search
  • OpenAI GPT-3.5-turbo for text generation
  • Sentence Transformers for embeddings
  • OpenAI TTS/Cartesia for text-to-speech

🎤 User-controlled Audio Streaming

The app connects to the LiveKit room but does not start sending audio immediately; the 'Start Voice Debate' button lets the user control when audio streaming begins.

<AIVoiceInput
  onStart={handleStartVoiceInteraction} // Handles both backend session and LiveKit mic start
  onStop={handleStopVoiceInteraction}   // Handles both LiveKit mic stop and backend session end
  isMicActive={liveKitIsMicActive}      // Pass LiveKit's mic status
  isConnecting={voiceSession !== null && !isLiveKitConnected && !liveKitIsMicActive}
  demoMode={false}
/>

The console logs can be used to verify that the AI agent received the audio.
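
For context, the backend side of starting a session could look roughly like this, assuming a FastAPI endpoint that issues a LiveKit access token via the livekit-api package (the endpoint path, parameters, and settings field names are hypothetical):

from fastapi import FastAPI
from livekit import api  # livekit-api package

from config import settings  # assumed project config; field names below are hypothetical

app = FastAPI()

@app.post("/voice-session")
async def start_voice_session(room_name: str, identity: str):
    # Issue a LiveKit access token; the frontend joins the room with it but only
    # enables the microphone once 'Start Voice Debate' is pressed.
    token = (
        api.AccessToken(settings.LIVEKIT_API_KEY, settings.LIVEKIT_API_SECRET)
        .with_identity(identity)
        .with_grants(api.VideoGrants(room_join=True, room=room_name))
        .to_jwt()
    )
    return {"token": token, "url": settings.LIVEKIT_URL}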


🧠 Context-Aware Conversation Memory

Conversation history is maintained in the server's memory for the duration of a single LiveKit session, ensuring context-aware debates.
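
A minimal sketch of what such per-session memory can look like (class and method names are illustrative, not the project's actual API):

from collections import defaultdict

class ConversationMemory:
    """Per-session, in-memory debate history; nothing is persisted to disk."""

    def __init__(self, max_messages: int = 20):
        self.max_messages = max_messages
        self._history = defaultdict(list)  # session_id -> [{"role": ..., "content": ...}]

    def add(self, session_id: str, role: str, content: str) -> None:
        self._history[session_id].append({"role": role, "content": content})
        # Keep only the most recent messages so the LLM context stays small
        self._history[session_id] = self._history[session_id][-self.max_messages:]

    def context(self, session_id: str) -> list:
        return self._history[session_id]

    def clear(self, session_id: str) -> None:
        # Called when the LiveKit session ends
        self._history.pop(session_id, None)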

Here is a snippet of the debate agent that runs as a separate process from the FastAPI server:

class DebateLiveKitAgent(Agent):
    def __init__(self, debate_api_client: DebateAgent):
        super().__init__(
            instructions=(
                "You are a sophisticated AI philosopher engaged in a real-time debate. "
                "Your role is to provide thoughtful, well-reasoned counter-arguments to "
                "the user's positions. Draw upon philosophical traditions and thinkers "
                "to challenge their assumptions. Be respectful but intellectually rigorous. "
                "Keep your responses concise and engaging for spoken conversation."
            ),
            stt=assemblyai.STT(
                api_key=settings.ASSEMBLYAI_API_KEY,
                sample_rate=16000,
            ),
            llm=openai.LLM(
                model=settings.LLM_MODEL,
                api_key=settings.OPENAI_API_KEY
            ),
            # OpenAI TTS is used here for testing; Cartesia can be swapped in
            tts=openai.TTS(
                api_key=settings.OPENAI_API_KEY,
                model="tts-1",
                voice="alloy",
            ),

            vad=silero.VAD.load()
        )

        self.debate_api_client = debate_api_client

📚 Structured Knowledge Base

The application uses the pre-trained Sentence Transformer embedding model all-MiniLM-L6-v2 for the RAG components. The .md files contain the domain-expert knowledge; they are loaded, chunked, and converted into embeddings, which are then organised into a searchable FAISS index.
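
As a rough sketch, the indexing step could look like this (file paths and chunking parameters are illustrative, not the project's exact values):

from pathlib import Path

import faiss
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 500, overlap: int = 50):
    # Simple fixed-size character chunking with overlap
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

docs = [p.read_text(encoding="utf-8") for p in Path("knowledge_base").glob("*.md")]
chunks = [piece for doc in docs for piece in chunk(doc)]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(chunks, convert_to_numpy=True)
faiss.normalize_L2(embeddings)

index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)
Path("faiss_index").mkdir(exist_ok=True)
faiss.write_index(index, "faiss_index/kb.index")  # the searchable index folder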

User Guidelines:

  • At the start, the AI response time depends on the complexity of the input.
  • The user is advised to narrow their topic, as a broader topic retrieves a larger number of chunks, which takes more time to process.
  • The user can take advantage of the multimodal capability to focus the debate.

🎤💭👥 Multimodal

💭Thinking: The user is also able to test and prepare their arguments, and view a counter-argument via text chat before engaging in a voice debate.

  • The AI agent sends the same text via a data channel to the frontend (see the sketch after this list)
  • The frontend receives the argument, responds, and saves the transcription
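
A rough sketch of the data-channel step, assuming the LiveKit Python SDK's publish_data method (the topic name and payload shape are illustrative, and the exact signature may vary between SDK versions):

import json

async def send_text_to_frontend(room, rebuttal: str) -> None:
    # Push the AI's text response to the frontend over the room's data channel.
    # publish_data exists in the LiveKit Python SDK, but its keyword arguments
    # (e.g. topic) may differ between versions.
    payload = json.dumps({"type": "ai_response", "text": rebuttal})
    await room.local_participant.publish_data(payload, topic="debate")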

🎤Speaking: The AI's spoken responses will be audible through your speakers. Text debate & user session data is available for export.

  • User speaks → AssemblyAI transcribes → RAG generates counter-argument
  • AI agent speaks the counter-argument via OpenAI TTS

📊 Performance Logs

For performance insights, key metrics such as response time, confidence scores for all AI responses, and success/failure rates are automatically logged and stored in JSON format:

{"timestamp": 1753357776.4760807, "datetime": "2025-07-24 13:49:36", "response_time_seconds": 45.069, "confidence_score": 0.85, "message_length": 50, "success": true, "error": null}

For failed responses, the relevant error message is displayed in the console and logged:

"Error code: 429 - 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.'
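
A minimal sketch of how metrics like these can be written as JSON lines (the helper name and file path are hypothetical):

import json
import time

def log_metrics(path: str, response_time: float, confidence: float,
                message_length: int, success: bool, error: str | None = None) -> None:
    # Append one JSON object per AI response, matching the fields shown above
    entry = {
        "timestamp": time.time(),
        "datetime": time.strftime("%Y-%m-%d %H:%M:%S"),
        "response_time_seconds": round(response_time, 3),
        "confidence_score": confidence,
        "message_length": message_length,
        "success": success,
        "error": error,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")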

🏁🎉 End

The aim was to create an AI opponent that evolves with you, sharpening its arguments and, in turn, your intellect, the longer you engage.
