DEV Community

CallStack Tech

Posted on • Originally published at callstack.tech

How to Integrate Voice AI with Twilio: My Experience with Voice APIs

TL;DR

Most voice AI integrations fail when real-time transcription lags or barge-in interrupts cause audio overlap. This guide shows how to wire Twilio Media Streams to VAPI's WebSocket layer, handle partial transcripts without race conditions, and implement turn-taking logic that actually works. Stack: Twilio (media transport), VAPI (AI orchestration), Node.js (session management). Result: sub-200ms latency, clean interruptions, production-ready voice bot.

Prerequisites

API Keys & Credentials

You'll need a Twilio Account SID and Auth Token from your Twilio console. Generate an API Key (not your master credentials—use a scoped key for production). If using VAPI, grab your VAPI API Key from the dashboard. Store these in a .env file; never hardcode them.
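Before wiring anything up, it's worth failing fast on missing credentials at startup rather than mid-call. A minimal sketch (the variable names match the ones used later in this guide):

```javascript
// Report any required credentials missing from the environment.
function missingVars(env, required) {
  return required.filter((name) => !env[name]);
}

const required = ['TWILIO_ACCOUNT_SID', 'TWILIO_AUTH_TOKEN', 'VAPI_API_KEY'];
const missing = missingVars(process.env, required);
if (missing.length > 0) {
  console.error(`Missing env vars: ${missing.join(', ')}`);
}
```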

SDK & Runtime Requirements

Node.js 16+ (or your preferred runtime). Install twilio SDK (npm install twilio) and axios or native fetch for HTTP calls. You'll also need ngrok or similar tunneling tool to expose your local webhook endpoint during development.

System & Network Setup

A WebSocket-capable server (Express.js works fine). Twilio Media Streams requires TLS 1.2+. Ensure your firewall allows inbound HTTPS on port 443. For real-time transcription, you need stable internet (latency under 200ms matters).

Knowledge Assumptions

Familiarity with REST APIs, async/await, and basic webhook handling. Understanding of audio formats (PCM 16kHz, mulaw) helps but isn't mandatory.
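For context, mulaw (μ-law) is the G.711 logarithmic companding of 16-bit PCM into 8 bits, which is what Twilio's Media Streams carry. If you ever need to inspect raw payloads, the standard decode is short; this is a sketch of the G.711 expansion, not anything Twilio-specific:

```javascript
// Decode one G.711 mu-law byte to a signed 16-bit PCM sample.
// 0xFF encodes silence (0); 0x00 encodes the largest negative sample.
function mulawToPcm16(byte) {
  const u = ~byte & 0xff;               // mu-law bytes are stored bit-inverted
  const sign = u & 0x80;
  const exponent = (u >> 4) & 0x07;
  const mantissa = u & 0x0f;
  const sample = (((mantissa << 3) + 0x84) << exponent) - 0x84;
  return sign ? -sample : sample;
}

console.log(mulawToPcm16(0xff)); // 0 (silence)
```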

VAPI: Get Started with VAPI → Get VAPI

Step-by-Step Tutorial

Architecture & Flow

flowchart LR
    A[User Call] --> B[Twilio Voice]
    B --> C[Media Streams WebSocket]
    C --> D[Your Server]
    D --> E[VAPI Assistant]
    E --> F[Real-time Transcription]
    F --> D
    D --> C
    C --> B
    B --> A

The integration breaks into two distinct layers: Twilio handles telephony (inbound calls, PSTN routing, media transport), while VAPI processes voice AI (STT, LLM, TTS). Your server bridges them via WebSocket streams. Most failures happen at this boundary—buffer mismatches, audio format conflicts, race conditions between transcription and synthesis.

Configuration & Setup

Twilio side: Purchase a phone number, configure webhook to point at your server's /voice endpoint. The webhook receives POST requests when calls arrive—this is where you inject TwiML to start Media Streams.

VAPI side: Create an assistant with specific audio requirements: encoding: "mulaw", sampleRate: 8000 (Twilio's format). Mismatch here = garbled audio. Set transcriber.provider: "deepgram" for lowest latency (60-120ms vs 200-400ms for alternatives).

// Assistant config - VAPI expects this exact format
const assistantConfig = {
  model: {
    provider: "openai",
    model: "gpt-4",
    temperature: 0.7,
    systemPrompt: "You are a helpful assistant handling phone calls."
  },
  voice: {
    provider: "11labs",
    voiceId: "21m00Tcm4TlvDq8ikWAM",
    stability: 0.5,
    similarityBoost: 0.75
  },
  transcriber: {
    provider: "deepgram",
    model: "nova-2",
    language: "en"
  },
  // CRITICAL: Must match Twilio's audio format (see above)
  encoding: "mulaw",
  sampleRate: 8000,
  firstMessage: "Hello, how can I help you today?",
  endCallMessage: "Thank you for calling. Goodbye.",
  recordingEnabled: true
};

Step-by-Step Implementation

Step 1: Handle Inbound Calls

When Twilio receives a call, it hits your webhook. Return TwiML that starts a Media Stream pointing at your WebSocket server:

const express = require('express');
const app = express();

app.post('/voice', (req, res) => {
  const twiml = `<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Connect>
    <Stream url="wss://your-server.com/media-stream">
      <Parameter name="callSid" value="${req.body.CallSid}" />
    </Stream>
  </Connect>
</Response>`;

  res.type('text/xml');
  res.send(twiml);
});

Step 2: Bridge WebSocket to VAPI

Your server receives raw mulaw audio chunks from Twilio. Forward them to VAPI's WebSocket, receive synthesized audio back, send to Twilio. Critical race condition: If user interrupts mid-sentence, you must flush VAPI's TTS buffer AND stop sending queued audio to Twilio. Otherwise, old audio plays after the interrupt.

const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (twilioWs) => {
  let vapiWs = null;
  let audioQueue = [];
  let isProcessing = false;
  let twilioStreamSid = null; // captured from the 'start' event below

  // Connect to VAPI
  vapiWs = new WebSocket('wss://api.vapi.ai/ws', {
    headers: { 'Authorization': `Bearer ${process.env.VAPI_API_KEY}` }
  });

  vapiWs.on('open', () => {
    vapiWs.send(JSON.stringify({
      type: 'start',
      assistantId: process.env.VAPI_ASSISTANT_ID
    }));
  });

  twilioWs.on('message', (message) => {
    const msg = JSON.parse(message);

    if (msg.event === 'start') {
      // Capture the stream SID so outbound media can be tagged with it
      twilioStreamSid = msg.start.streamSid;
    }

    if (msg.event === 'media') {
      // Forward caller audio to VAPI
      if (vapiWs.readyState === WebSocket.OPEN) {
        vapiWs.send(JSON.stringify({
          type: 'audio',
          data: msg.media.payload // base64 mulaw
        }));
      }
    }

    if (msg.event === 'stop') {
      // Flush buffers on hangup
      audioQueue = [];
      if (vapiWs) vapiWs.close();
    }
  });

  vapiWs.on('message', (data) => {
    const response = JSON.parse(data);

    if (response.type === 'audio') {
      // Send synthesized audio back to Twilio
      twilioWs.send(JSON.stringify({
        event: 'media',
        streamSid: twilioStreamSid,
        media: { payload: response.data }
      }));
    }

    if (response.type === 'interrupt') {
      // User spoke - cancel queued audio
      audioQueue = [];
      isProcessing = false;
    }
  });
});

app.listen(3000); // `app` is the Express instance from Step 1

Error Handling & Edge Cases

Buffer overrun: If VAPI sends audio faster than Twilio consumes it, you'll hear robotic artifacts. Solution: Implement a 200ms sliding window buffer, drop oldest chunks if queue exceeds 10 packets.
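That bounded queue can be sketched in a few lines (10 packets of 20ms mulaw ≈ 200ms of audio; names are illustrative):

```javascript
// Bounded audio queue: once we're holding ~200ms (10 x 20ms mulaw packets),
// drop the oldest chunk so playback never drifts far behind real time.
const MAX_QUEUE = 10;
const audioQueue = [];

function enqueueAudio(chunk) {
  audioQueue.push(chunk);
  while (audioQueue.length > MAX_QUEUE) {
    audioQueue.shift(); // discard the stalest audio first
  }
}
```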

Silence detection false positives: Mobile networks inject 100-400ms jitter. VAPI's default 0.3s silence threshold triggers mid-word on LTE. Increase to 0.5s: transcriber.endpointing = 500.

WebSocket reconnection: Both Twilio and VAPI can drop connections. Implement exponential backoff (1s, 2s, 4s) with max 3 retries. After that, send TwiML <Hangup/> to gracefully end the call.
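A minimal sketch of that backoff schedule; `connectFn` and `onGiveUp` are hypothetical hooks for your VAPI reconnect and TwiML hangup logic:

```javascript
// Reconnect with exponential backoff: 1s, 2s, 4s, then give up.
const MAX_RETRIES = 3;

function backoffDelay(attempt) {
  return 1000 * 2 ** attempt; // attempt 0 -> 1s, 1 -> 2s, 2 -> 4s
}

function reconnect(connectFn, onGiveUp, attempt = 0) {
  if (attempt >= MAX_RETRIES) {
    onGiveUp(); // e.g. respond with TwiML <Hangup/> to end the call cleanly
    return;
  }
  setTimeout(() => {
    connectFn().catch(() => reconnect(connectFn, onGiveUp, attempt + 1));
  }, backoffDelay(attempt));
}
```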

System Diagram

Call processing pipeline from incoming call to synthesized response.

graph LR
    Start[Incoming Call]
    IVR[Interactive Voice Response]
    VAD[Voice Activity Detection]
    STT[Speech-to-Text]
    NLU[Intent Detection]
    LLM[Response Generation]
    TTS[Text-to-Speech]
    End[Outgoing Call]
    Error[Error Handling]

    Start-->IVR
    IVR-->VAD
    VAD-->STT
    STT-->NLU
    NLU-->LLM
    LLM-->TTS
    TTS-->End

    IVR-->|No Response| Error
    VAD-->|Silence Detected| Error
    STT-->|Unrecognized Speech| Error
    NLU-->|Intent Not Found| Error
    Error-->End

Testing & Validation

Most Voice AI integrations fail in production because developers skip local testing with real audio streams. Here's how to validate your Twilio + VAPI setup before going live.

Local Testing

Use ngrok to expose your webhook server and test with actual phone calls. This catches WebSocket connection issues that curl can't simulate.

// Test webhook endpoint with Twilio's request validator
const twilio = require('twilio');

app.post('/webhook/voice', (req, res) => {
  const authToken = process.env.TWILIO_AUTH_TOKEN;
  const twilioSignature = req.headers['x-twilio-signature'];
  // Must be the exact public URL Twilio requested (update when your ngrok URL changes)
  const url = `https://${req.headers.host}${req.originalUrl}`;

  // Validate webhook signature - CRITICAL for production
  const isValid = twilio.validateRequest(
    authToken,
    twilioSignature,
    url,
    req.body
  );

  if (!isValid) {
    console.error('Invalid Twilio signature');
    return res.status(403).send('Forbidden');
  }

  const twiml = new twilio.twiml.VoiceResponse();
  twiml.connect().stream({ url: 'wss://your-domain.ngrok.io/media' });
  res.type('text/xml').send(twiml.toString());
});

Real-world problem: Twilio's signature validation breaks if your server URL changes (common with ngrok). The signature is computed over the exact URL Twilio called, so update the URL you validate against whenever you switch tunnels.

Webhook Validation

Test WebSocket audio flow by calling your Twilio number and monitoring both connections. Check that media events arrive at 20ms intervals (50 packets/second for mulaw). If you see gaps > 100ms, your server is dropping packets—increase buffer size or switch to async processing.
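To make that cadence visible, a small inter-arrival monitor can flag any gap over the threshold (a sketch; call it from your 'media' event handler with the current time):

```javascript
// Track inter-arrival gaps of Twilio media events; flag anything over 100ms.
function makeGapMonitor(thresholdMs = 100) {
  let last = null;
  return function onMediaEvent(nowMs) {
    const gap = last === null ? 0 : nowMs - last;
    last = nowMs;
    if (gap > thresholdMs) {
      console.warn(`Media gap of ${gap}ms - server may be dropping packets`);
    }
    return gap;
  };
}

const monitor = makeGapMonitor();
// inside the 'media' handler: monitor(Date.now());
```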

Validate VAPI responses by logging the transcriber's output (the transcript events produced by the assistantConfig.transcriber settings). False starts happen when silence detection triggers on background noise: bump the endpointing threshold from the default 0.3s to 0.5s for noisy environments.

Real-World Example

Barge-In Scenario

Most voice bots break when users interrupt mid-sentence. Here's what actually happens: User calls in, bot starts reading a 30-second menu, user says "billing" at second 3. Without proper handling, the bot finishes the menu, THEN processes "billing" — wasting 27 seconds and triggering a hang-up.

The fix requires coordinating THREE systems: Twilio's Media Streams (audio transport), VAPI's STT (speech detection), and your server (state management). When VAPI detects speech during bot output, you must flush Twilio's audio buffer AND cancel VAPI's TTS queue simultaneously. Miss either, and you get overlapping audio or stale responses.

// Handle barge-in when user interrupts bot
wss.on('connection', (ws) => {
  let isBotSpeaking = false;
  let audioQueue = [];

  ws.on('message', (msg) => {
    const data = JSON.parse(msg);

    // VAPI signals user started speaking
    if (data.type === 'transcript' && data.transcriptType === 'partial') {
      if (isBotSpeaking) {
        // CRITICAL: Stop Twilio audio immediately
        audioQueue = []; // Flush buffer
        ws.send(JSON.stringify({
          event: 'clear',
          streamSid: data.streamSid
        }));

        // Cancel VAPI's TTS queue
        vapiWs.send(JSON.stringify({
          type: 'cancel-speech'
        }));

        isBotSpeaking = false;
        console.log(`Barge-in detected: "${data.transcript}"`);
      }
    }

    // Track when bot starts speaking
    if (data.type === 'speech-start') {
      isBotSpeaking = true;
    }
  });
});

Event Logs

Real production logs show the timing chaos. At T+0ms, bot starts TTS. At T+340ms, user says "stop" (partial transcript). At T+380ms, your server receives the event. At T+420ms, Twilio's buffer clears. That's a 420ms window where stale audio plays after the interrupt — users perceive this as the bot "not listening."

The streamSid in Twilio's Media Stream events is your synchronization anchor. Every media event includes it, letting you match audio chunks to specific call legs when handling transfers or conference calls.
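A session registry keyed by streamSid might look like this (a sketch; field names are illustrative):

```javascript
// Map each Twilio streamSid to its per-call state so concurrent calls
// (transfers, conference legs) never share buffers or speaking flags.
const sessions = new Map();

function getSession(streamSid) {
  if (!sessions.has(streamSid)) {
    sessions.set(streamSid, { audioQueue: [], isBotSpeaking: false });
  }
  return sessions.get(streamSid);
}
```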

Edge Cases

Multiple rapid interrupts: User says "no wait yes" in 2 seconds. Without debouncing, you trigger three separate TTS cancellations, causing race conditions. Solution: 200ms debounce on partial transcripts before flushing buffers.
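The debounce itself is a standard pattern, sketched here with the 200ms window from above (the flush body is a placeholder):

```javascript
// Debounce rapid partial transcripts so "no wait yes" triggers one
// cancellation instead of three racing ones.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

const flushOnInterrupt = debounce(() => {
  // flush audioQueue and cancel the TTS queue here
}, 200);
```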

False positives from background noise: Coffee shop calls trigger barge-in on ambient speech. VAPI's default endpointing threshold (300ms silence) is too sensitive. Increase to 500ms and add keyword filtering to ignore non-command phrases.

Network jitter on mobile: LTE latency spikes cause 800ms+ delays between user speech and your server receiving the transcript. By then, the bot already queued 3 more sentences. Always check timestamp deltas in events — if event.timestamp - lastSpeechStart > 1000, skip the cancellation (too late).
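That staleness check reduces to a one-line predicate (a sketch; timestamps in milliseconds):

```javascript
// Skip cancellation if the speech event is older than 1s by the time it
// arrives - the bot has already moved on, and cancelling now would cut
// off a newer response.
function isStaleInterrupt(eventTimestamp, lastSpeechStart, maxAgeMs = 1000) {
  return eventTimestamp - lastSpeechStart > maxAgeMs;
}
```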

Common Issues & Fixes

Race Conditions Between Twilio and VAPI Streams

The biggest production killer: Twilio's Media Stream sends audio chunks at 20ms intervals while VAPI processes transcription asynchronously. If you don't guard state, you'll get overlapping responses where the bot talks over itself.

// WRONG: No state guard - bot responds twice to same input
wss.on('connection', (ws) => {
  ws.on('message', async (msg) => {
    const data = JSON.parse(msg);
    if (data.event === 'media') {
      const transcript = await vapiWs.send(data.media.payload);
      // Race: Two chunks trigger two responses
    }
  });
});

// CORRECT: State machine prevents double-processing
let isProcessing = false;
let isBotSpeaking = false;

wss.on('connection', (ws) => {
  ws.on('message', async (msg) => {
    const data = JSON.parse(msg);

    if (data.event === 'media' && !isProcessing && !isBotSpeaking) {
      isProcessing = true;
      try {
        const audioChunk = Buffer.from(data.media.payload, 'base64');
        vapiWs.send(audioChunk);
      } finally {
        isProcessing = false;
      }
    }

    // Mark bot as speaking when TTS starts
    if (data.event === 'start' && data.type === 'tts') {
      isBotSpeaking = true;
    }
    if (data.event === 'stop' && data.type === 'tts') {
      isBotSpeaking = false;
    }
  });
});

Why this breaks: Twilio sends 50 chunks/second. Without isProcessing, you queue 50 VAPI requests before the first completes. Result: 50 bot responses, $2.50 in wasted API calls, user hears garbled overlapping audio.

Audio Buffer Not Flushed on Barge-In

VAPI's transcriber.endpointing detects interruptions, but Twilio's Media Stream doesn't auto-flush its outbound buffer. Old TTS audio keeps playing for 200-400ms after the user speaks.

// Flush Twilio's audio queue when VAPI detects interruption
vapiWs.on('message', (event) => {
  const data = JSON.parse(event);

  if (data.type === 'transcript' && data.detected === 'user-interrupt') {
    // Clear any queued audio chunks
    audioQueue.length = 0;
    isBotSpeaking = false;

    // Send silence to force Twilio to stop playback
    const silence = Buffer.alloc(160, 0xff).toString('base64'); // 20ms of 8kHz mulaw silence (0xFF = zero amplitude)
    wss.clients.forEach(client => {
      client.send(JSON.stringify({
        event: 'media',
        media: { payload: silence }
      }));
    });
  }
});

Production impact: Without this, users hear 300ms of stale bot speech after interrupting. Feels broken. Silence injection forces immediate cutoff.

Webhook Signature Validation Failures

Twilio signs webhooks with HMAC-SHA1. If your url includes query params or you're behind a proxy that rewrites paths, validation fails silently.

const twilio = require('twilio');

app.post('/webhook/twilio', (req, res) => {
  const authToken = process.env.TWILIO_AUTH_TOKEN;
  const twilioSignature = req.headers['x-twilio-signature'];

  // CRITICAL: Use the EXACT URL Twilio called (including https, query params)
  const url = `https://${req.headers.host}${req.originalUrl}`;

  const isValid = twilio.validateRequest(
    authToken,
    twilioSignature,
    url,
    req.body
  );

  if (!isValid) {
    console.error('Signature mismatch. URL used:', url);
    return res.status(403).send('Forbidden');
  }

  // Process webhook...
});

Why this fails: If you use req.url instead of req.originalUrl, you miss query params. If you hardcode http:// but Twilio calls https://, signature fails. Always log the url variable when debugging.

Complete Working Example

This is the full production server that bridges Twilio's Media Streams with VAPI's Voice AI. Copy-paste this into server.js and you have a working voice bot that handles real-time transcription, interruptions, and bidirectional audio streaming.

Full Server Code

const express = require('express');
const twilio = require('twilio');
const WebSocket = require('ws');

const app = express();
const port = process.env.PORT || 3000;

// Twilio webhook signature validation
app.use(express.urlencoded({ extended: false }));
app.use(express.json());

const authToken = process.env.TWILIO_AUTH_TOKEN;

// Assistant configuration (matches previous sections)
const assistantConfig = {
  model: {
    provider: "openai",
    model: "gpt-4",
    temperature: 0.7,
    systemPrompt: "You are a helpful voice assistant. Keep responses under 20 seconds."
  },
  voice: {
    provider: "11labs",
    voiceId: "21m00Tcm4TlvDq8ikWAM",
    stability: 0.5,
    similarityBoost: 0.75
  },
  transcriber: {
    provider: "deepgram",
    model: "nova-2",
    language: "en"
  },
  firstMessage: "Hi, how can I help you today?",
  endCallMessage: "Thanks for calling. Goodbye!"
};

// WebSocket server for Twilio Media Streams
const wss = new WebSocket.Server({ noServer: true });

// Session state management (production-grade with TTL)
const sessions = new Map();
const SESSION_TTL = 300000; // 5 minutes

wss.on('connection', (ws) => {
  const sessionId = Date.now().toString();
  let vapiWs = null;
  let audioQueue = [];
  let isProcessing = false;
  let isBotSpeaking = false;

  // Session cleanup after TTL
  const cleanupTimer = setTimeout(() => {
    sessions.delete(sessionId);
    if (vapiWs) vapiWs.close();
    ws.close();
  }, SESSION_TTL);

  sessions.set(sessionId, { ws, vapiWs, cleanupTimer });

  // Connect to VAPI WebSocket
  vapiWs = new WebSocket('wss://api.vapi.ai/ws', {
    headers: {
      'Authorization': `Bearer ${process.env.VAPI_API_KEY}`
    }
  });

  vapiWs.on('open', () => {
    // Send assistant config on connection
    vapiWs.send(JSON.stringify({
      type: 'assistant.config',
      config: assistantConfig
    }));
  });

  // Handle incoming Twilio audio
  ws.on('message', async (msg) => {
    try {
      const data = JSON.parse(msg);

      if (data.event === 'media') {
        const audioChunk = Buffer.from(data.media.payload, 'base64');

        // Barge-in detection: stop bot if user speaks
        if (isBotSpeaking && detectSpeech(audioChunk)) {
          isBotSpeaking = false;
          audioQueue = []; // Flush buffer on interrupt
          vapiWs.send(JSON.stringify({ type: 'interrupt' }));
        }

        // Forward user audio to VAPI
        if (vapiWs.readyState === WebSocket.OPEN) {
          vapiWs.send(JSON.stringify({
            type: 'audio.input',
            audio: data.media.payload
          }));
        }
      }

      if (data.event === 'start') {
        console.log(`Call started: ${data.start.callSid}`);
      }

      if (data.event === 'stop') {
        clearTimeout(cleanupTimer);
        sessions.delete(sessionId);
        if (vapiWs) vapiWs.close();
      }
    } catch (error) {
      console.error('Twilio message error:', error);
    }
  });

  // Handle VAPI responses
  vapiWs.on('message', async (msg) => {
    try {
      const data = JSON.parse(msg);

      if (data.type === 'transcript') {
        const transcript = data.transcript;
        console.log(`User said: ${transcript}`);
      }

      if (data.type === 'audio.output') {
        isBotSpeaking = true;
        // Queue audio chunks for streaming
        audioQueue.push(data.audio);

        if (!isProcessing) {
          isProcessing = true;
          try {
            await processAudioQueue(ws, audioQueue);
          } finally {
            isProcessing = false; // always reset, even if a send throws
          }
        }
      }

      if (data.type === 'call.ended') {
        ws.send(JSON.stringify({ event: 'stop' }));
      }
    } catch (error) {
      console.error('VAPI message error:', error);
    }
  });

  vapiWs.on('error', (error) => {
    console.error('VAPI WebSocket error:', error);
  });

  ws.on('close', () => {
    clearTimeout(cleanupTimer);
    sessions.delete(sessionId);
    if (vapiWs) vapiWs.close();
  });
});

// Process audio queue with backpressure handling
async function processAudioQueue(ws, queue) {
  while (queue.length > 0) {
    const audioChunk = queue.shift();
    ws.send(JSON.stringify({
      event: 'media',
      media: {
        payload: audioChunk
      }
    }));
    // Rate limit to prevent buffer overflow
    await new Promise(resolve => setTimeout(resolve, 20));
  }
}

// Crude energy-based VAD for barge-in detection. NOTE: this treats mulaw bytes
// as if they were linear PCM; a production system should decode mulaw first
// (or use a dedicated VAD) before measuring energy.
function detectSpeech(audioChunk) {
  const energy = audioChunk.reduce((sum, byte) => sum + Math.abs(byte - 128), 0) / audioChunk.length;
  return energy > 15; // Threshold roughly tuned for mulaw audio
}

// Twilio webhook endpoint
app.post('/voice', (req, res) => {
  // NOTE: behind a proxy or ngrok, call app.set('trust proxy', true) so
  // req.protocol reports https instead of http
  const url = `${req.protocol}://${req.get('host')}${req.originalUrl}`;
  const twilioSignature = req.headers['x-twilio-signature'];

  // Validate webhook signature
  const isValid = twilio.validateRequest(
    authToken,
    twilioSignature,
    url,
    req.body
  );

  if (!isValid) {
    return res.status(403).send('Forbidden');
  }

  const response = new twilio.twiml.VoiceResponse();
  const connect = response.connect();
  connect.stream({ url: `wss://${req.get('host')}/media` });

  res.type('text/xml');
  res.send(response.toString());
});

// Upgrade HTTP to WebSocket
const server = app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

server.on('upgrade', (request, socket, head) => {
  wss.handleUpgrade(request, socket, head, (ws) => {
    wss.emit('connection', ws, request);
  });
});

Run Instructions

Install dependencies:

npm install express twilio ws

Set environment variables:

export TWILIO_AUTH_TOKEN=your_auth_token
export VAPI_API_KEY=your_vapi_key
export PORT=3000

Start server:

node server.js

Expose with ngrok:

ngrok http 3000

Configure Twilio phone number webhook:
Set "A Call Comes In" to https://your-ngrok-url.ngrok.io/voice (HTTP POST).


FAQ

Technical Questions

What's the actual difference between Twilio Media Streams and VAPI's WebSocket connection?

Twilio Media Streams opens a WebSocket from Twilio to your server and delivers raw mulaw 8kHz audio chunks (the same format used throughout this guide): you own the transcription, synthesis, and orchestration. VAPI abstracts that layer: you send a call config with model, voice, and transcriber settings, and VAPI handles the audio pipeline internally. Twilio is lower-level control; VAPI is faster to ship. If you need custom VAD thresholds or interrupt logic, Twilio wins. If you need a bot running in 30 minutes, VAPI wins.

How do I validate Twilio webhook signatures in production?

Twilio signs every webhook with HMAC-SHA1. Compute the signature by concatenating the full request URL + form-encoded body, then hash with your authToken. Compare against the X-Twilio-Signature header. If they don't match, reject the request. This prevents replay attacks and spoofed webhooks—non-negotiable in production.

Can I use both Twilio and VAPI in the same call flow?

Yes, but carefully. Route inbound calls through Twilio (cheaper, native SIP), then bridge to VAPI for AI handling via a function call. Don't try to run both transcription engines simultaneously—you'll get race conditions and double-processing costs. Pick one for transcription, one for synthesis.

Performance

Why does my real-time transcription lag 200-400ms?

Network jitter, VAD processing, and STT model latency compound. The transcriber's endpointing setting (Deepgram via VAPI in this stack) controls silence detection, and the default is aggressive. Increase the threshold to 0.5s if you're getting false positives. Use partial transcripts (onPartialTranscript) instead of waiting for final results. Chunk audio into 20ms frames instead of buffering 100ms.

How do I prevent audio buffer overruns during barge-in?

Maintain an audioQueue with a max size (e.g., 50 chunks). When user interrupts, flush the queue immediately and set isBotSpeaking = false. If queue hits max, drop oldest chunks instead of blocking. This prevents memory leaks and ensures responsive interrupts.

Platform Comparison

Should I use Twilio or VAPI for a production voice bot?

Twilio: Lower latency (direct SIP), cheaper per minute, full control. Requires you to build transcription, synthesis, and turn-taking logic. VAPI: Faster to deploy, handles orchestration, higher per-minute cost. Use Twilio if you have 10k+ monthly minutes and custom requirements. Use VAPI if you're shipping an MVP or need managed infrastructure.

Does VAPI support custom LLMs or only OpenAI?

VAPI supports OpenAI, Anthropic, and custom endpoints via model.provider. Twilio doesn't handle LLM calls natively—you build that in your server. If you need Claude or a fine-tuned model, VAPI's flexibility wins. If you're already running inference on your own servers, Twilio's raw audio stream is cheaper.

Resources

Twilio: Get Twilio Voice API → https://www.twilio.com/try-twilio

Key Specifications

  • WebSocket protocol (RFC 6455) for persistent connections
  • G.711 mu-law (8-bit) audio encoding at 8kHz sample rate for Twilio Media Streams
  • HMAC-SHA1 webhook signature validation for request authentication

