Imagine joining a team meeting but having no idea who’s speaking. Or trying to follow a busy chat thread, only for your screen reader to miss half the messages.
For the estimated 1.3 billion people living with a significant disability, this is a daily experience with most communication tools. Real-time apps like chat and video platforms often overlook accessibility, leading to issues such as:
- Dynamic updates that screen readers can’t keep up with
- Visual-only indicators like reactions or hand raises
- Audio-only conversations that exclude deaf or hard-of-hearing users
Building accessible real-time apps means designing for everyone, not just those who can see, hear, or interact in typical ways.
What We're Building
In this article, we'll build a production-ready chat and video application that's fully accessible:
- Real-time video conferencing with accessible controls
- Live captions powered by Stream Video's transcription API
- Text chat with file sharing that works seamlessly with screen readers
- Complete keyboard navigation - no mouse required
- WCAG 2.1 AA compliant - meeting international accessibility standards
Here is a quick demonstration of the seamless keyboard-only flow we will achieve.
Technologies Used
- React with TypeScript
- Stream Video SDK for video conferencing
- Stream Chat SDK for messaging
- Custom accessibility hooks and utilities
- Semantic HTML and ARIA labels
Technical Prerequisites
Before we begin, ensure you have the following:
- A free Stream account
- Node.js 14 or higher installed
- Basic React and TypeScript knowledge
- A general understanding of accessibility principles
Project Architecture
Setup: Project Scaffolding and Backend Settings
Backend Setup (Token Generation)
While this project primarily involves frontend development, a backend setup is necessary for fundamental functionalities such as generating tokens for user authentication and retrieving user lists.
We’ll start by setting up the backend:
# backend
npm init
npm install express dotenv stream-chat nodemon cors
Next, create a .env file in your project root:
# .env
STREAM_API_KEY=your_stream_api_key
STREAM_API_SECRET=your_stream_api_secret
Below is the auth route in the Express server that handles token generation.
// Authentication endpoint — assumes a server-side Stream client has been created:
// const chatServer = StreamChat.getInstance(STREAM_API_KEY, STREAM_API_SECRET);
app.post("/auth", async (req, res) => {
  const { id, name } = req.body;
  try {
    // Create or update the user, then mint a token signed with the API secret
    await chatServer.upsertUser({ id, name });
    const token = chatServer.createToken(id);
    res.json({
      apiKey: STREAM_API_KEY,
      user: { id, name },
      token,
    });
  } catch (err) {
    console.error("Auth error:", err);
    res.status(500).json({ error: "Authentication failed" });
  }
});
Once the code is run, the terminal will look like this:
Frontend Setup
Now, let's create our React application with Vite for fast development:
# Create new Vite + React + TypeScript project
npm create vite@latest accessible-video-chat -- --template react-ts
cd accessible-video-chat
# Install Stream SDKs
npm install stream-chat stream-chat-react @stream-io/video-react-sdk
Next, create a .env file in the frontend project root containing the backend URL:
VITE_API_BASE_URL="http://localhost:4000"
You might be wondering why we don't include the Stream keys directly in the frontend. The API secret must never reach the browser; instead, the frontend requests the public API key and a signed user token from the backend's /auth endpoint. We will discuss the hook responsible for this in the following section.
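As a sketch of that exchange (buildAuthRequest is a hypothetical helper, not part of the article's codebase), the request the frontend sends to the backend can be shaped like this:

```typescript
// Hypothetical helper that builds the POST /auth request the backend expects.
// In the app, baseUrl would come from import.meta.env.VITE_API_BASE_URL.
export interface AuthRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

export function buildAuthRequest(baseUrl: string, id: string, name: string): AuthRequest {
  return {
    url: `${baseUrl}/auth`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ id, name }),
    },
  };
}

// Usage sketch:
//   const req = buildAuthRequest(import.meta.env.VITE_API_BASE_URL, id, name);
//   const res = await fetch(req.url, req.init);
//   const { apiKey, token, user } = await res.json();
```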
Accessible Core: Utility Hooks
Understanding Semantic HTML and ARIA
Before diving into the code, let's understand the foundation of accessible web applications.
Semantic HTML provides meaning to content:
- <article> for self-contained content (messages)
- <time> for timestamps with machine-readable dates
- <button> for interactive elements
- <form> for data submission
ARIA (Accessible Rich Internet Applications) enhances accessibility when semantic HTML isn't enough:
- role defines what an element is (e.g., role="log" for chat history, role="toolbar" for video controls)
- aria-label provides a text alternative for screen readers
- aria-live announces dynamic content updates
- aria-describedby associates descriptive text with elements
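To make these primitives concrete, here is a minimal markup sketch of a single chat entry combining them (the name and timestamp are invented for illustration):

```html
<!-- A live log containing one self-contained message -->
<div role="log" aria-label="Chat messages">
  <article aria-label="Message from Ada">
    <p>See you at 3pm! <span role="img" aria-label="thumbs up">👍</span></p>
    <!-- machine-readable datetime plus human-friendly text -->
    <time datetime="2025-05-01T15:00:00Z">3:00 PM</time>
  </article>
</div>
```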
Screen Reader Announcement Hook
Screen readers are essential for blind and low-vision users, but they can only announce content that's properly exposed. In real-time apps, where messages, typing indicators, and status changes happen dynamically, proper announcements are the backbone of accessibility.
This pattern centres on ARIA Live Regions, which are hidden elements designed to announce content updates:
- aria-live="polite": for non-urgent updates (typing indicators, character counts, new messages)
- aria-live="assertive": for critical updates (errors, connection lost, important status changes)
- role="alert": an assertive live region for immediate error announcements
- role="status": for non-critical, continuous updates, such as "Loading users..."
- role="list" / role="listitem": give structure to a collection of clickable items, helping screen readers understand the total count and navigation context
This ScreenReaderAnnouncer class creates a hidden DOM element with aria-live, which is completely invisible to sighted users and designed for screen readers only. It utilises the Singleton pattern, ensuring only one instance for the entire application, and directly announces information to assistive technology.
// utils/screenReader.ts
export class ScreenReaderAnnouncer {
  private static instance: ScreenReaderAnnouncer;
  private container: HTMLDivElement | null = null;

  private constructor() {
    this.createContainer();
  }

  static getInstance(): ScreenReaderAnnouncer {
    if (!ScreenReaderAnnouncer.instance) {
      ScreenReaderAnnouncer.instance = new ScreenReaderAnnouncer();
    }
    return ScreenReaderAnnouncer.instance;
  }

  private createContainer(): void {
    if (typeof window === 'undefined') return;
    this.container = document.createElement('div');
    this.container.setAttribute('aria-live', 'polite');
    this.container.setAttribute('aria-atomic', 'true');
    this.container.className = 'sr-only';
    this.container.style.cssText = `
      position: absolute;
      left: -10000px;
      width: 1px;
      height: 1px;
      overflow: hidden;
    `;
    document.body.appendChild(this.container);
  }

  announce(message: string, priority: 'polite' | 'assertive' = 'polite'): void {
    if (!this.container) return;
    // Update priority if needed
    if (this.container.getAttribute('aria-live') !== priority) {
      this.container.setAttribute('aria-live', priority);
    }
    // Clear previous message
    this.container.textContent = '';
    // Add new message after brief delay
    setTimeout(() => {
      if (this.container) {
        this.container.textContent = message;
      }
    }, 10);
    // Clear after announcement
    setTimeout(() => {
      if (this.container) {
        this.container.textContent = '';
      }
    }, 1000);
  }
}

// React hook wrapper
export const useScreenReader = () => {
  const announcer = ScreenReaderAnnouncer.getInstance();
  return {
    announce: (message: string, priority?: 'polite' | 'assertive') =>
      announcer.announce(message, priority)
  };
};
useFocusManager.ts Hook
This custom hook gives precise and programmatic control over keyboard focus in the application. Keyboard focus refers to the active element on the screen targeted to receive user input (e.g., keystrokes). For users relying on keyboards or screen readers, the path of this focus (indicated by the visible outline) must be predictable. When standard browser focus gets lost or jumps randomly, the application becomes unusable.
The hook’s primary purpose is to overcome the limitations of standard browser tabbing, especially when dealing with elements like modals and sidebars, or when switching between major UI sections (like the chat and video views).
This is carried out through these four distinct utility functions:
- saveFocus() remembers the focused element before opening a modal or dialogue.
- restoreFocus() returns focus to that element after closing the modal.
- manageFocus() programmatically sets focus on an element.
- trapFocus() keeps focus within a modal, preventing tabbing outside of it.
// hooks/useFocusManager.ts
import { useRef, useCallback } from 'react';
import type { FocusManager } from '../types/accessibility';

export const useFocusManager = (): FocusManager => {
  const previousFocusRef = useRef<HTMLElement | null>(null);

  const saveFocus = useCallback((): void => {
    previousFocusRef.current = document.activeElement as HTMLElement;
  }, []);

  const restoreFocus = useCallback((): void => {
    if (previousFocusRef.current && typeof previousFocusRef.current.focus === 'function') {
      previousFocusRef.current.focus();
    }
  }, []);

  const manageFocus = useCallback((element: HTMLElement | null): void => {
    if (element && typeof element.focus === 'function') {
      element.focus();
    }
  }, []);

  const trapFocus = useCallback((containerElement: HTMLElement | null): (() => void) | void => {
    if (!containerElement) return () => {};
    const focusableElements = containerElement.querySelectorAll<HTMLElement>(
      'button, [href], input, select, textarea, [tabindex]:not([tabindex="-1"])'
    );
    const firstElement = focusableElements[0];
    const lastElement = focusableElements[focusableElements.length - 1];

    const handleKeyDown = (event: KeyboardEvent): void => {
      if (event.key !== 'Tab') return;
      if (event.shiftKey) {
        if (document.activeElement === firstElement) {
          event.preventDefault();
          lastElement.focus();
        }
      } else {
        if (document.activeElement === lastElement) {
          event.preventDefault();
          firstElement.focus();
        }
      }
    };

    containerElement.addEventListener('keydown', handleKeyDown);
    // Focus first element
    if (firstElement) {
      firstElement.focus();
    }
    return (): void => {
      containerElement.removeEventListener('keydown', handleKeyDown);
    };
  }, []);

  return {
    saveFocus,
    restoreFocus,
    manageFocus,
    trapFocus
  };
};
useStreamConnection.ts Hook
This custom hook manages the connection to Stream's Chat and Video services by handling user authentication with a backend server, making POST requests to the /auth endpoint, and managing connection states such as isConnecting and error.
It initialises both StreamChat and StreamVideoClient instances, connects authenticated users to both services, and returns these connected clients for use throughout the application. Additionally, the hook provides robust error handling, including error state management, throwing errors for failed connections, and offering type-safe error messages.
// hooks/useStreamConnection.ts
import { useState, useCallback } from 'react';
import { StreamChat } from 'stream-chat';
import { StreamVideoClient } from '@stream-io/video-react-sdk';
import type { StreamUser, AuthResponse } from '../types';
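Since the hook body is omitted above, here is a framework-free sketch of the connection flow it describes. ChatClientLike, VideoClientLike, and connectToStream are illustrative names; the injected factories stand in for StreamChat.getInstance and new StreamVideoClient(...), and fetchAuth stands in for the POST /auth request:

```typescript
// Minimal shapes of the SDK surface this flow touches (illustrative, not the real types)
interface ChatClientLike {
  connectUser(user: { id: string; name: string }, token: string): Promise<unknown>;
}
type VideoClientLike = object; // opaque here — the sketch only constructs it

interface AuthPayload {
  apiKey: string;
  token: string;
  user: { id: string; name: string };
}

// Dependencies are injected so the flow can be exercised without a network or SDK
export async function connectToStream(
  fetchAuth: (id: string, name: string) => Promise<AuthPayload>,
  makeChatClient: (apiKey: string) => ChatClientLike,
  makeVideoClient: (apiKey: string, user: { id: string; name: string }, token: string) => VideoClientLike,
  id: string,
  name: string
) {
  const auth = await fetchAuth(id, name);              // 1. POST /auth on the backend
  const chat = makeChatClient(auth.apiKey);            // 2. create the chat client
  await chat.connectUser(auth.user, auth.token);       // 3. authenticate chat
  const video = makeVideoClient(auth.apiKey, auth.user, auth.token); // 4. create the video client
  return { chat, video, user: auth.user };
}
```

The real hook wraps this flow in useState for isConnecting and error, but the ordering is the important part: the token must exist before either client connects.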
useAccessibilitySettings.ts Hook
This hook helps in the automatic detection of the user’s system accessibility preferences, including reduced motion, high contrast mode, and basic screen reader usage.
// hooks/useAccessibilitySettings.ts
import { useState, useEffect, useCallback } from 'react';
import type { AccessibilitySettings } from '../types/accessibility';
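A minimal sketch of the detection logic, assuming the browser's matchMedia API; detectAccessibilitySettings and the injected query function are hypothetical names used so the logic can run outside a browser (in the app you would pass window.matchMedia.bind(window)):

```typescript
export interface DetectedA11ySettings {
  reducedMotion: boolean;
  highContrast: boolean;
}

type MediaQueryFn = (query: string) => { matches: boolean };

export function detectAccessibilitySettings(matchMediaFn: MediaQueryFn): DetectedA11ySettings {
  return {
    // True when the OS asks applications to minimise animation
    reducedMotion: matchMediaFn('(prefers-reduced-motion: reduce)').matches,
    // forced-colors is the modern high-contrast signal (e.g. Windows High Contrast)
    highContrast: matchMediaFn('(forced-colors: active)').matches,
  };
}
```

The real hook would also subscribe to each media query's change event so settings update live when the user flips a system preference.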
useCallManagement.ts Hook
This custom hook contains all meeting join/leave logic, handles transcription setup, and also ensures proper cleanup on meeting exit.
// hooks/useCallManagement.ts
import { useState, useCallback } from 'react';
import type { StreamVideoClient, Call } from '@stream-io/video-react-sdk';
import { cleanupMediaTracks, disableCallDevices, enableCallDevices } from '../utils';
import { DEVICE_ENABLE_DELAY } from '../utils/constants';
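The join/leave lifecycle can be sketched against a minimal interface. CallLike, joinMeeting, and leaveMeeting are illustrative names (the real Stream Call object has a much larger surface), but the ordering shown is the point: leave first so remote participants are notified, then always release media:

```typescript
// Minimal model of the call methods this flow needs
interface CallLike {
  join(opts: { create: boolean }): Promise<void>;
  leave(): Promise<void>;
}

export async function joinMeeting(call: CallLike): Promise<void> {
  // Join, creating the call on the backend if it doesn't exist yet
  await call.join({ create: true });
}

export async function leaveMeeting(call: CallLike, releaseMedia: () => void): Promise<void> {
  try {
    await call.leave(); // notify other participants first
  } finally {
    releaseMedia();     // always free camera/mic tracks, even if leave() fails
  }
}
```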
Building the Application Shell
Before users can chat or video call, they need to authenticate and navigate the application. Let's build the foundational components that tie everything together.
Authentication Flow
Users must sign up or sign in to access any features. The authentication system has three components:
1. AuthView - The Container
This is the central component that manages switching between signUp and signIn modes.
Its key accessibility features include role="main" for primary page content, and role="region" with aria-labelledby to group related form content.
// components/Auth/AuthView.tsx
import React, { useState } from 'react';
import { LoginForm } from './LoginForm';
import { SignupForm } from './SignupForm';
import type { AuthMode, Credentials } from '../../types';
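The mode switching itself reduces to a tiny pure function (toggleAuthMode is a hypothetical name; the component would keep the mode in useState and call this from the toggle button):

```typescript
// Mirrors the AuthMode type imported above; redefined so this sketch is self-contained
export type SketchAuthMode = 'signIn' | 'signUp';

export function toggleAuthMode(mode: SketchAuthMode): SketchAuthMode {
  return mode === 'signIn' ? 'signUp' : 'signIn';
}
```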
2. Signup and Login Form
Forms use strong accessibility patterns for clarity, validation, and immediate error feedback.
Input Accessibility Pattern: Fields are clearly linked to their labels. aria-required="true" indicates to screen readers that a field is mandatory, allowing users to understand validation requirements immediately.
Assertive Error Pattern: Errors are announced instantly after a failed submission. The error message is contained within a container with the role="alert" attribute. This is an ARIA Live Region with assertive priority that interrupts the user to ensure they hear and fix the error immediately.
<form onSubmit={onSubmit} className="auth-form" aria-label="Sign up form">
  <label className="field">
    <span>User ID</span>
    <input
      className="input"
      placeholder="Enter user id"
      value={credentials.id}
      onChange={e => onChange({ ...credentials, id: e.target.value })}
      required
      aria-required="true"
      disabled={isLoading}
    />
  </label>
  <label className="field">
    <span>Name</span>
    <input
      className="input"
      placeholder="Enter display name"
      value={credentials.name}
      onChange={e => onChange({ ...credentials, name: e.target.value })}
      required
      aria-required="true"
      disabled={isLoading}
    />
  </label>
  <button
    type="submit"
    className="button primary"
    disabled={isLoading || !credentials.id || !credentials.name}
  >
    {isLoading ? 'Connecting...' : 'Create account'}
  </button>
  {error && (
    <div role="alert" className="error-text">
      {error}
    </div>
  )}
</form>
Navigation Hub
The HubView is responsible for communicating feature status accessibly. It enforces a clear hierarchy of information: it uses role="navigation" to clearly identify the main links, and it ensures secondary information (like the "Live Captions Available" indicator) is handled correctly.
This indicator is placed in an element with role="status", which is a polite ARIA Live Region that provides a subtle announcement to the screen reader, informing the user without interrupting their current task of reading or navigating.
// Hub/HubView.tsx - Status Announcement Snippet
// ... (inside the component's JSX)
<div className="user-welcome">
  <h2>Welcome, {userName}</h2>
  {transcriptionAvailable && (
    <span
      className="feature-badge"
      // ARIA: role="status" is a polite live region
      role="status"
    >
      Live Captions Available
    </span>
  )}
</div>
// ... (rest of the component)
Building Accessible Chat UI
With the application foundation in place, let's build the chat interface.
The chat experience has three stages:
- Select a user to chat with (UserSelectView.tsx)
- View message history (MessageList.tsx)
- Send new messages (MessageInput.tsx)
We’ll build each stage with full accessibility support.
User Selection - Choosing Who to Chat With
When users click "Chat with user" from the hub, they must select the user they want to chat with.
The UserSelectView.tsx component handles this. The component ensures clear status feedback. When fetching users, we use a polite live region (role="status") to announce 'Loading users...'. If the fetch fails, the error message is placed inside a container with the role="alert" to trigger an assertive announcement.
// components/UserSelect/UserSelectView.tsx
return (
  <section className="panel" aria-label="Select a user to chat">
    <h2 className="panel-title">Choose a user</h2>
    {error && (
      <div className="error-text" role="alert">
        {error}
      </div>
    )}
    <div className="user-list" role="list">
      {!users.length && !isLoading && (
        <div className="empty-state">
          <p>Load available users to start chatting</p>
          <button
            className="button primary"
            onClick={loadUsers}
            type="button"
          >
            Load users
          </button>
        </div>
      )}
      {isLoading && (
        <div className="hint" role="status" aria-live="polite">
          Loading users...
        </div>
      )}
      {filteredUsers.map(user => (
        // role="listitem" belongs on a wrapper element: placing it on the
        // button itself would override the button's implicit role, so screen
        // readers would no longer announce it as a button
        <div key={user.id} role="listitem">
          <button
            className="user-item"
            onClick={() => handleUserSelect(user)}
            aria-label={`Chat with ${user.name || user.id}`}
            type="button"
          >
            {user.image ? (
              <img
                src={user.image}
                alt=""
                className="user-avatar"
                aria-hidden="true"
              />
            ) : (
              <span className="user-avatar" aria-hidden="true">
                {getUserInitial(user)}
              </span>
            )}
            <span className="user-name">{user.name || user.id}</span>
          </button>
        </div>
      ))}
    </div>
    <div className="panel-actions">
      <button
        className="button"
        onClick={onBack}
        type="button"
      >
        Back
      </button>
    </div>
  </section>
);
Message List: Semantic Structure with ARIA
The MessageList.tsx component implements the crucial live log accessibility pattern, plus advanced keyboard navigation for reviewing message history.
- Live Log Pattern: the main message container is assigned role="log". This is essential for dynamic chat, as it tells screen readers that new, ordered content (messages) will be added frequently, and ensures new messages are announced politely without interrupting the user's focus on the input field.
- Keyboard Navigation: the list container is made focusable (tabIndex={0}) and includes custom logic that lets users navigate the entire message history with the Up/Down arrow keys, as well as Home and End to jump to the beginning or end of the conversation.
- Accessibility Labelling: the container uses aria-describedby to link to a hidden element containing instructions for keyboard navigation.
// components/AccessibleChat/MessageList.tsx
import React, { useRef, useCallback, useState, useEffect } from 'react';
import { useScreenReader } from '../../hooks';
import type { AccessibleMessage } from '../../types';
import { MessageItem } from './MessageItem';

interface AccessibleMessageListProps {
  messages: AccessibleMessage[];
  client?: any;
  typingUsers?: string[];
}

export const AccessibleMessageList: React.FC<AccessibleMessageListProps> = ({
  messages,
  client,
  typingUsers = []
}) => {
  const listRef = useRef<HTMLDivElement>(null);
  const [selectedMessageIndex, setSelectedMessageIndex] = useState<number>(-1);
  const lastMessageIdRef = useRef<string>('');
  const { announce } = useScreenReader();

  // Announce new messages to screen readers
  useEffect(() => {
    const latestMessage = messages[messages.length - 1];
    if (!latestMessage || latestMessage.id === lastMessageIdRef.current) return;
    lastMessageIdRef.current = latestMessage.id || '';
    // Don't announce own messages
    if (latestMessage.user.id !== client?.user?.id) {
      const userName = latestMessage.user.name || 'Unknown user';
      const messageText = latestMessage.text || 'sent an attachment';
      announce(`New message from ${userName}: ${messageText}`, 'polite');
    }
  }, [messages, client, announce]);

  // Keyboard navigation for messages
  const handleKeyDown = useCallback((event: React.KeyboardEvent<HTMLDivElement>) => {
    const messageElements = listRef.current?.querySelectorAll<HTMLDivElement>('.message-item');
    if (!messageElements || messageElements.length === 0) return;
    switch (event.key) {
      case 'ArrowUp':
        event.preventDefault();
        setSelectedMessageIndex(prev => {
          const newIndex = Math.max(0, prev === -1 ? messageElements.length - 1 : prev - 1);
          messageElements[newIndex]?.focus();
          messageElements[newIndex]?.scrollIntoView({ block: 'nearest', behavior: 'smooth' });
          return newIndex;
        });
        break;
      case 'ArrowDown':
        event.preventDefault();
        setSelectedMessageIndex(prev => {
          const newIndex = prev === -1 ? 0 : Math.min(messageElements.length - 1, prev + 1);
          messageElements[newIndex]?.focus();
          messageElements[newIndex]?.scrollIntoView({ block: 'nearest', behavior: 'smooth' });
          return newIndex;
        });
        break;
      case 'Home':
        event.preventDefault();
        setSelectedMessageIndex(0);
        messageElements[0]?.focus();
        messageElements[0]?.scrollIntoView({ block: 'start', behavior: 'smooth' });
        break;
      case 'End': {
        // Braces give the case its own block scope, so the const declaration is legal
        event.preventDefault();
        const lastIndex = messageElements.length - 1;
        setSelectedMessageIndex(lastIndex);
        messageElements[lastIndex]?.focus();
        messageElements[lastIndex]?.scrollIntoView({ block: 'end', behavior: 'smooth' });
        break;
      }
    }
  }, []);

  return (
    <div className="accessible-message-list">
      {/* Hidden live region for screen reader announcements */}
      <div
        aria-live="polite"
        aria-atomic="false"
        className="sr-only"
        id="message-announcements"
      />
      <div
        ref={listRef}
        className="message-list-container"
        role="log"
        aria-label={`Chat messages, ${messages.length} total. Use arrow keys to navigate.`}
        onKeyDown={handleKeyDown}
        tabIndex={0}
        aria-describedby="navigation-help"
      >
        <div id="navigation-help" className="sr-only">
          Use arrow keys to navigate messages, Enter to select, Home and End to jump to first or last message
        </div>
        {messages.map((message, index) => (
          <MessageItem
            key={message.id || `message-${index}`}
            message={message}
            client={client}
            isSelected={index === selectedMessageIndex}
            onSelect={() => setSelectedMessageIndex(index)}
          />
        ))}
        {/* Typing indicator */}
        {typingUsers.length > 0 && (
          <div className="typing-indicator" aria-live="polite" role="status">
            <div className="typing-animation" aria-hidden="true">
              <span></span>
              <span></span>
              <span></span>
            </div>
            <span className="typing-text">
              {typingUsers.length === 1
                ? `${typingUsers[0]} is typing...`
                : `${typingUsers.join(', ')} are typing...`
              }
            </span>
          </div>
        )}
      </div>
    </div>
  );
};
Individual Message Item Structure
The MessageItem focuses on ensuring a rich semantic context for each entry in the chat log.
- Semantic Structure: each message uses role="article", defining it as a self-contained, independent piece of content.
- Logical Association: it uses the aria-labelledby and aria-describedby attributes to explicitly link the message content and timestamp (<time>) back to the author's name, guaranteeing the screen reader announces a complete, coherent unit.
- Timestamp Clarity: the timestamp is provided using the semantic <time> element with a machine-readable dateTime attribute, while an aria-label provides a human-friendly reading of the time.
- Decorative Images: the avatar images use alt="" and aria-hidden="true", correctly marking them as decorative, since the author's name is already announced via aria-labelledby.
// components/AccessibleChat/MessageItem.tsx
import React, { useRef, useCallback, useMemo, memo } from 'react';
import type { AccessibleMessage } from '../../types';
import { AttachmentComponent } from './AttachmentComponent';
import { EnhancedText } from './EnhancedText';

interface MessageItemProps {
  message: AccessibleMessage;
  client?: any;
  isSelected?: boolean;
  onSelect?: () => void;
  readBy?: string[];
}

export const MessageItem: React.FC<MessageItemProps> = memo(({
  message,
  client,
  isSelected,
  onSelect,
  readBy = []
}) => {
  const messageRef = useRef<HTMLDivElement>(null);
  const isOwn = message.user.id === client?.user?.id;

  const createdAt = useMemo(() =>
    message.created_at ? new Date(message.created_at) : null,
    [message.created_at]
  );

  const avatarUrl = useMemo(() =>
    message.user.image ||
    `https://api.dicebear.com/7.x/initials/svg?seed=${encodeURIComponent(message.user.name || message.user.id)}`,
    [message.user.image, message.user.name, message.user.id]
  );

  const handleInteraction = useCallback((e: React.MouseEvent | React.KeyboardEvent) => {
    if ('key' in e && e.key !== 'Enter' && e.key !== ' ') return;
    e.preventDefault();
    onSelect?.();
  }, [onSelect]);

  return (
    <div
      ref={messageRef}
      className={`message-item ${isOwn ? 'own-message' : 'other-message'} ${isSelected ? 'selected' : ''}`}
      role="article"
      aria-labelledby={`message-${message.id}-author`}
      aria-describedby={`message-${message.id}-content message-${message.id}-time`}
      tabIndex={0}
      onClick={handleInteraction}
      onKeyDown={handleInteraction}
    >
      {/* Message author with avatar */}
      <div
        id={`message-${message.id}-author`}
        className="message-author"
        aria-label={`Message from ${message.user.name || message.user.id}`}
      >
        <img
          src={avatarUrl}
          alt=""
          className="user-avatar"
          aria-hidden="true"
          width="32"
          height="32"
        />
        <span>{message.user.name || message.user.id}</span>
      </div>
      {/* Message content */}
      <div
        id={`message-${message.id}-content`}
        className="message-content"
      >
        {message.text && (
          <div className="message-text">
            <EnhancedText text={message.text} messageId={message.id || ''} />
          </div>
        )}
        {/* Attachments */}
        {message.attachments?.map((attachment, index) => (
          <AttachmentComponent
            key={`${message.id}-attachment-${index}`}
            attachment={attachment}
            messageId={message.id || ''}
          />
        ))}
      </div>
      {/* Message footer with timestamp and read receipts */}
      <div className="message-footer">
        <time
          id={`message-${message.id}-time`}
          className="message-timestamp"
          dateTime={createdAt?.toISOString() || ''}
          aria-label={`Sent at ${createdAt?.toLocaleString() || 'Unknown time'}`}
        >
          {createdAt?.toLocaleTimeString() || ''}
        </time>
        {isOwn && (
          <div
            className="message-status"
            aria-label={readBy.length > 0 ? `Read by ${readBy.join(', ')}` : 'Sent'}
          >
            <span className="read-status" aria-hidden="true">
              {readBy.length > 0 ? '✓✓' : '✓'}
            </span>
          </div>
        )}
      </div>
    </div>
  );
});

MessageItem.displayName = 'MessageItem';
MessageItem.displayName = 'MessageItem';
Accessible Message Input with File Attachments
The MessageInput is designed to be highly predictable and to announce errors immediately, preventing confusion for keyboard and screen reader users.
- Form Semantics: the component utilises the native <form> element with a clear aria-label for its overall purpose and provides status feedback on actions (e.g., "Message sent successfully") via the announce() hook.
- Error Reporting: when validation fails (e.g., empty message), the error is placed in a container with role="alert", triggering an assertive announcement that notifies the user of the failure immediately. The component also sets the aria-invalid attribute on the textarea to flag the input itself as having an error.
- Accessibility Labelling & Help: the main textarea utilises a hidden <label> (sr-only) and is linked to hidden instructions using aria-describedby, which guides keyboard users on shortcuts such as Enter (send) and Shift+Enter (new line).
- Character Count: the character count utilises aria-live="polite" to notify users when they are approaching the character limit, but only after they pause typing, thereby preventing overwhelming feedback.
// components/AccessibleChat/MessageInput.tsx
import React, { useState, useRef, useCallback } from 'react';
import { useScreenReader } from '../../utils/screenReader';
import './ChatStyles.css';

interface AccessibleMessageInputProps {
  onSubmit: (message: string, attachments?: File[]) => Promise<void>;
  maxLength?: number;
  maxFileSize?: number;
  allowedFileTypes?: string[];
}

interface AttachmentUpload {
  file: File;
  type: 'image' | 'video' | 'audio' | 'file';
}

const DEFAULT_MAX_LENGTH = 1000;
const DEFAULT_MAX_FILE_SIZE = 10 * 1024 * 1024; // 10MB

export const AccessibleMessageInput: React.FC<AccessibleMessageInputProps> = ({
  onSubmit,
  maxLength = DEFAULT_MAX_LENGTH,
  maxFileSize = DEFAULT_MAX_FILE_SIZE,
  allowedFileTypes = ['image/*', 'video/*', 'audio/*', '.pdf', '.doc', '.docx']
}) => {
  const [message, setMessage] = useState<string>('');
  const [errors, setErrors] = useState<string[]>([]);
  const [isSubmitting, setIsSubmitting] = useState<boolean>(false);
  const [attachments, setAttachments] = useState<AttachmentUpload[]>([]);
  const inputRef = useRef<HTMLTextAreaElement>(null);
  const fileInputRef = useRef<HTMLInputElement>(null);
  const { announce } = useScreenReader();

  const handleSubmit = useCallback(async (event: React.FormEvent<HTMLFormElement>) => {
    event.preventDefault();
    setErrors([]);
    if (!message.trim() && attachments.length === 0) {
      const error = 'Message or attachment required';
      setErrors([error]);
      inputRef.current?.focus();
      announce(error, 'assertive');
      return;
    }
    try {
      setIsSubmitting(true);
      await onSubmit(message, attachments.map(a => a.file));
      setMessage('');
      setAttachments([]);
      announce('Message sent successfully', 'polite');
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : 'Failed to send message';
      setErrors([errorMessage]);
      announce(errorMessage, 'assertive');
      inputRef.current?.focus();
    } finally {
      setIsSubmitting(false);
    }
  }, [message, attachments, onSubmit, announce]);

  const handleKeyDown = useCallback((event: React.KeyboardEvent<HTMLTextAreaElement>) => {
    // Enter to send, Shift+Enter for new line
    if (event.key === 'Enter' && !event.shiftKey) {
      event.preventDefault();
      const form = event.currentTarget.closest('form');
      if (form) {
        form.dispatchEvent(new Event('submit', { bubbles: true, cancelable: true }));
      }
    }
    // Escape to clear
    if (event.key === 'Escape') {
      setMessage('');
      setErrors([]);
      announce('Message cleared', 'polite');
    }
  }, [announce]);

  const formatFileSize = (bytes: number): string => {
    if (bytes === 0) return '0 Bytes';
    const k = 1024;
    const sizes = ['Bytes', 'KB', 'MB', 'GB'];
    const i = Math.floor(Math.log(bytes) / Math.log(k));
    return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`;
  };

  const characterCount = message.length;
  const isOverLimit = characterCount > maxLength;
  const isNearLimit = characterCount > maxLength * 0.9;

  return (
    <form
      className="accessible-message-input"
      onSubmit={handleSubmit}
      role="form"
      aria-label="Send message"
    >
      {/* Error messages */}
      {errors.length > 0 && (
        <div
          className="error-messages"
          role="alert"
          aria-live="assertive"
        >
          {errors.map((error, index) => (
            <div key={index} className="error-message">
              {error}
            </div>
          ))}
        </div>
      )}
      {/* Attachment preview */}
      {attachments.length > 0 && (
        <div
          className="attachment-preview"
          role="list"
          aria-label={`${attachments.length} selected attachment${attachments.length !== 1 ? 's' : ''}`}
        >
          {attachments.map((attachment, index) => (
            <div
              key={`${attachment.file.name}-${index}`}
              className="attachment-item"
              role="listitem"
            >
              <span className="attachment-info">
                <span className="attachment-name">{attachment.file.name}</span>
                <span className="attachment-size">
                  ({formatFileSize(attachment.file.size)})
                </span>
              </span>
              <button
                type="button"
                onClick={() => {
                  setAttachments(prev => {
                    const newAttachments = [...prev];
                    newAttachments.splice(index, 1);
                    announce(`Removed attachment ${attachment.file.name}`, 'polite');
                    return newAttachments;
                  });
                }}
                aria-label={`Remove ${attachment.file.name}`}
                className="remove-attachment"
              >
                ×
              </button>
            </div>
          ))}
        </div>
      )}
      {/* Message textarea */}
      <div className="input-container">
        <label htmlFor="message-input" className="sr-only">
          Type your message
        </label>
        <textarea
          id="message-input"
          ref={inputRef}
          value={message}
          onChange={(e) => setMessage(e.target.value)}
          onKeyDown={handleKeyDown}
          placeholder="Type your message... (Enter to send, Shift+Enter for new line)"
          className={`message-textarea ${isOverLimit ? 'over-limit' : ''}`}
          aria-describedby="message-input-help character-count"
          aria-invalid={errors.length > 0}
          maxLength={maxLength}
          rows={1}
          disabled={isSubmitting}
        />
        <div id="message-input-help" className="sr-only">
          Enter to send message, Shift+Enter for new line, Escape to clear
        </div>
      </div>
      {/* Action buttons */}
      <div className="input-actions">
        <button
          type="submit"
          className="send-button"
          disabled={(!message.trim() && attachments.length === 0) || isSubmitting || isOverLimit}
          aria-label={isSubmitting ? "Sending message..." : "Send message"}
        >
          {isSubmitting ? 'Sending...' : 'Send'}
        </button>
        {/* Hidden file input */}
        <input
          ref={fileInputRef}
          type="file"
          multiple
          accept={allowedFileTypes.join(',')}
          onChange={(e) => {
            const files = Array.from(e.target.files || []);
            // Add files logic here
          }}
          className="sr-only"
          aria-label="Add attachment"
          tabIndex={-1}
          id="file-input"
        />
        <button
          type="button"
          className="attachment-button"
          aria-label="Add attachment"
          onClick={() => fileInputRef.current?.click()}
          disabled={isSubmitting}
        >
          <span aria-hidden="true">📎</span>
        </button>
      </div>
      {/* Character count */}
      <div className="message-info">
        <div
          id="character-count"
          className={`character-count ${isNearLimit ? 'warning' : ''} ${isOverLimit ? 'error' : ''}`}
          aria-live="polite"
          aria-label={`${characterCount} of ${maxLength} characters used`}
        >
          {characterCount}/{maxLength}
        </div>
      </div>
    </form>
  );
};
Handling Emojis Accessibly
The EnhancedText.tsx component implements a crucial, advanced pattern to make visual emojis understandable for screen readers.
By default, a screen reader might read an emoji as a cryptic sequence of characters or simply "image," which lacks context and obscures the message's true tone. To address this, the component identifies emojis within the text and wraps each one in a <span> element, applying role="img" along with a descriptive aria-label retrieved from an EMOJI_MAP lookup table.
As a result, when a user encounters text like "Great job! 🎉," the screen reader announces "Great job! party popper" instead of an unrecognisable symbol, ensuring universal understanding of the message's content and tone.
// components/AccessibleChat/EnhancedText.tsx
import React, { useMemo, memo } from 'react';
import { EMOJI_MAP } from '../../utils/constants';
interface EnhancedTextProps {
text: string;
messageId: string;
}
export const EnhancedText: React.FC<EnhancedTextProps> = memo(({ text, messageId }) => {
const parts = useMemo(() => {
// Split text on emoji characters
return text.split(/([\u{1F600}-\u{1F64F}]|[\u{1F300}-\u{1F5FF}]|[\u{1F680}-\u{1F6FF}]|[\u{1F1E0}-\u{1F1FF}]|[\u{2600}-\u{26FF}]|[\u{2700}-\u{27BF}])/u);
}, [text]);
return (
<>
{parts.map((part, index) => {
// If part is an emoji, add aria-label
if (EMOJI_MAP[part]) {
return (
<span key={`${messageId}-emoji-${index}`} role="img" aria-label={EMOJI_MAP[part]}>
{part}
</span>
);
}
return <span key={`${messageId}-text-${index}`}>{part}</span>;
})}
</>
);
});
EnhancedText.displayName = 'EnhancedText';
// utils/constants.ts
export const EMOJI_MAP: Record<string, string> = {
'😀': 'grinning face',
'😂': 'face with tears of joy',
'❤️': 'red heart',
'👍': 'thumbs up',
'👎': 'thumbs down',
'🎉': 'party popper',
'🔥': 'fire',
'💯': 'hundred points symbol'
};
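To see how the regex-driven split behaves outside React, here is a small standalone sketch. It reuses the same regex and a subset of the EMOJI_MAP entries from above; the describeParts helper is illustrative, not part of the project.

```typescript
// Standalone sketch of the emoji-splitting logic from EnhancedText.
// The capturing group makes String.prototype.split keep the matched
// emoji separators in the result, so each one can be handled individually.
const EMOJI_MAP: Record<string, string> = {
  '🎉': 'party popper',
  '🔥': 'fire',
};

const EMOJI_REGEX =
  /([\u{1F600}-\u{1F64F}]|[\u{1F300}-\u{1F5FF}]|[\u{1F680}-\u{1F6FF}]|[\u{1F1E0}-\u{1F1FF}]|[\u{2600}-\u{26FF}]|[\u{2700}-\u{27BF}])/u;

function describeParts(text: string): string[] {
  return text
    .split(EMOJI_REGEX)
    .filter(part => part.length > 0)
    // Replace known emojis with the description a screen reader would hear
    .map(part => EMOJI_MAP[part] ?? part);
}

console.log(describeParts('Great job! 🎉'));
// A screen reader effectively hears: "Great job! party popper"
```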
Below is an image showing the messages sent between two users.
Build Inclusive Video
Meeting Management
The Meeting component manages the entire video call lifecycle, including creating, joining, and displaying the active call interface.
It has four states: Pre-call (ID input, join, or create), Active call (full video interface), Loading (buttons disabled), and Error (accessible error messages).
Accessibility is achieved through:
- Conditional rendering for a simple UI.
- Screen reader announcements for the "Generate ID" button.
- Properly labelled form inputs.
- role="alert" for immediate error feedback and role="note" for supplementary info.
- A consistent "Back" button (except during loading).
// components/Meeting/MeetingView.tsx
// The announcement hook integration is the key logic to keep.
const handleGenerateId = () => {
const id = `meeting-${Math.random().toString(36).slice(2, 7)}`;
setMeetingId(id);
announce(`Generated meeting ID: ${id}`, 'polite');
};
// The main JSX returns the form when there is NO active call
return (
<section className="panel" aria-label="Start or join a meeting">
<h2 className="panel-title">Start or join a meeting</h2>
{transcriptionAvailable && (
<div className="feature-notice"
// ARIA: role="note" marks this as supplementary information
role="note"
>
This meeting will support live captions powered by Stream Video closed captions.
</div>
)}
<div className="meeting-form">
{/* Input Field with Proper Labeling */}
<label className="field">
<span>Meeting ID</span>
<input
className="input"
placeholder="Enter meeting ID (or generate one)"
value={meetingId}
onChange={e => setMeetingId(e.target.value)}
// ARIA: Labeling is essential for screen readers
aria-label="Meeting ID"
disabled={isLoading}
/>
</label>
<div className="buttons-row">
{/* Primary Action Button: Dynamic label based on meetingId state */}
<button
className="button primary"
onClick={() => meetingId ? onJoinMeeting(meetingId) : handleGenerateId()}
disabled={isLoading}
type="button"
>
{/* Dynamic text for loading state and action */}
{isLoading ? 'Starting...' : (meetingId ? 'Start Meeting' : 'Generate ID')}
</button>
{/* Secondary Action Button (Join Meeting) */}
<button
className="button secondary"
onClick={() => onJoinMeeting(meetingId)}
disabled={!meetingId || isLoading}
type="button"
>
{isLoading ? 'Joining...' : 'Join Meeting'}
</button>
</div>
</div>
{/* Error Display */}
{error && (
<div
// ARIA: role="alert" ensures immediate, assertive screen reader announcement
role="alert"
className="error-text"
>
{error}
</div>
)}
</section>
);
Custom Accessible Video Controls
The VideoControls component implements the Toolbar Accessibility Pattern with dynamic status feedback.
- The container uses role="toolbar" and relies on custom JavaScript logic to enable arrow-key navigation between buttons.
- Each button uses an aria-label that dynamically announces the action (e.g., "Turn on microphone") and an aria-pressed attribute to confirm the current state (muted or unmuted). Toggling a button calls the announce() hook for polite confirmation.
// components/AccessibleVideo/VideoControls.tsx
import React, { useState, useRef, useEffect, useCallback } from 'react';
import { useCallStateHooks, type Call } from '@stream-io/video-react-sdk';
import { useScreenReader } from '../../hooks';
interface VideoControlsProps {
call: Call;
onToggleFullscreen: () => void;
onLeaveCall: () => Promise<void>;
onToggleCaptions: () => void;
captionsEnabled: boolean;
captionsSupported: boolean;
}
export const AccessibleVideoControls: React.FC<VideoControlsProps> = ({
call,
onToggleFullscreen,
onLeaveCall,
onToggleCaptions,
captionsEnabled,
captionsSupported
}) => {
const {
useCameraState,
useMicrophoneState,
useScreenShareState
} = useCallStateHooks();
const { camera, isMute: isCameraMuted } = useCameraState();
const { microphone, isMute: isMicMuted } = useMicrophoneState();
const { screenShare, isMute: isScreenShareMuted } = useScreenShareState();
const [isToggling, setIsToggling] = useState({
mic: false,
camera: false,
screen: false
});
const controlsRef = useRef<HTMLDivElement>(null);
const { announce } = useScreenReader();
// Keyboard navigation between controls
const navigateControls = useCallback((direction: number) => {
const buttons = controlsRef.current?.querySelectorAll<HTMLButtonElement>('button:not(:disabled)');
if (!buttons) return;
const currentIndex = Array.from(buttons).findIndex(btn => btn === document.activeElement);
const nextIndex = Math.max(0, Math.min(buttons.length - 1, currentIndex + direction));
buttons[nextIndex]?.focus();
}, []);
useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
if (!controlsRef.current?.contains(document.activeElement)) return;
switch (event.key) {
case 'ArrowLeft':
case 'ArrowRight':
event.preventDefault();
navigateControls(event.key === 'ArrowLeft' ? -1 : 1);
break;
case 'Enter':
case ' ':
event.preventDefault();
(document.activeElement as HTMLButtonElement)?.click();
break;
}
};
document.addEventListener('keydown', handleKeyDown);
return () => document.removeEventListener('keydown', handleKeyDown);
}, [navigateControls]);
const handleToggleMicrophone = useCallback(async (): Promise<void> => {
if (isToggling.mic) return;
try {
setIsToggling(prev => ({ ...prev, mic: true }));
await microphone.toggle();
announce(
isMicMuted ? 'Microphone turned on' : 'Microphone turned off',
'polite'
);
} catch (error) {
announce('Failed to toggle microphone', 'assertive');
console.error('Failed to toggle microphone:', error);
} finally {
setIsToggling(prev => ({ ...prev, mic: false }));
}
}, [isToggling.mic, microphone, isMicMuted, announce]);
const handleToggleCamera = useCallback(async (): Promise<void> => {
if (isToggling.camera) return;
try {
setIsToggling(prev => ({ ...prev, camera: true }));
await camera.toggle();
announce(
isCameraMuted ? 'Camera turned on' : 'Camera turned off',
'polite'
);
} catch (error) {
announce('Failed to toggle camera', 'assertive');
console.error('Failed to toggle camera:', error);
} finally {
setIsToggling(prev => ({ ...prev, camera: false }));
}
}, [isToggling.camera, camera, isCameraMuted, announce]);
const handleToggleScreenShare = useCallback(async (): Promise<void> => {
if (isToggling.screen) return;
try {
setIsToggling(prev => ({ ...prev, screen: true }));
await screenShare.toggle();
announce(
isScreenShareMuted ? 'Screen sharing started' : 'Screen sharing stopped',
'polite'
);
} catch (error) {
announce('Failed to toggle screen share', 'assertive');
console.error('Failed to toggle screen share:', error);
} finally {
setIsToggling(prev => ({ ...prev, screen: false }));
}
}, [isToggling.screen, screenShare, isScreenShareMuted, announce]);
const handleLeaveCall = useCallback(async (): Promise<void> => {
const confirmed = window.confirm('Are you sure you want to leave this call?');
if (!confirmed) return;
try {
announce('Leaving the call...', 'polite');
await onLeaveCall();
} catch (error) {
console.error('Error leaving call:', error);
announce('Error leaving call', 'assertive');
}
}, [onLeaveCall, announce]);
return (
<div
ref={controlsRef}
className="video-controls"
role="toolbar"
aria-label="Video call controls. Use arrow keys to navigate, Enter to activate."
>
<div className="primary-controls">
{/* Microphone control */}
<button
className={`control-button mic-control ${isMicMuted ? 'muted' : 'active'}`}
onClick={handleToggleMicrophone}
aria-label={isMicMuted ? 'Turn on microphone' : 'Turn off microphone'}
aria-pressed={!isMicMuted}
disabled={isToggling.mic}
type="button"
>
<span aria-hidden="true">
{isToggling.mic ? '⏳' : (isMicMuted ? '🔇' : '🎤')}
</span>
<span className="control-text">
{isToggling.mic ? 'Toggling...' : (isMicMuted ? 'Mic Off' : 'Mic On')}
</span>
</button>
{/* Camera control */}
<button
className={`control-button camera-control ${isCameraMuted ? 'muted' : 'active'}`}
onClick={handleToggleCamera}
aria-label={isCameraMuted ? 'Turn on camera' : 'Turn off camera'}
aria-pressed={!isCameraMuted}
disabled={isToggling.camera}
type="button"
>
<span aria-hidden="true">
{isToggling.camera ? '⏳' : (isCameraMuted ? '📹' : '📷')}
</span>
<span className="control-text">
{isToggling.camera ? 'Toggling...' : (isCameraMuted ? 'Camera Off' : 'Camera On')}
</span>
</button>
{/* Screen share control */}
<button
className={`control-button screen-share-control ${!isScreenShareMuted ? 'active' : ''}`}
onClick={handleToggleScreenShare}
aria-label={isScreenShareMuted ? 'Start screen sharing' : 'Stop screen sharing'}
aria-pressed={!isScreenShareMuted}
disabled={isToggling.screen}
type="button"
>
<span aria-hidden="true">
{isToggling.screen ? '⏳' : '🖥️'}
</span>
<span className="control-text">
{isToggling.screen ? 'Toggling...' : (isScreenShareMuted ? 'Share Screen' : 'Stop Sharing')}
</span>
</button>
{/* Leave call button */}
<button
className="control-button end-call"
onClick={handleLeaveCall}
aria-label="Leave call"
type="button"
>
<span aria-hidden="true">📞</span>
<span className="control-text">Leave</span>
</button>
</div>
<div className="secondary-controls">
{/* Fullscreen toggle */}
<button
className="control-button fullscreen-button"
onClick={onToggleFullscreen}
aria-label="Toggle fullscreen"
type="button"
>
<span aria-hidden="true">⛶</span>
<span className="control-text">Fullscreen</span>
</button>
{/* Captions toggle */}
<button
className={`control-button captions-button ${captionsEnabled ? 'active' : ''}`}
onClick={onToggleCaptions}
aria-label={
!captionsSupported
? 'Live captions not supported'
: captionsEnabled
? 'Turn off live captions'
: 'Turn on live captions'
}
aria-pressed={captionsEnabled}
disabled={!captionsSupported}
type="button"
>
<span aria-hidden="true">
{!captionsSupported ? '📝❌' : '📝'}
</span>
<span className="control-text">
{!captionsSupported
? 'N/A'
: captionsEnabled
? 'Captions On'
: 'Captions'
}
</span>
</button>
</div>
</div>
);
};
Live Captions with Stream's Transcription API
The TranscriptDisplay implements the Supplementary Live Region Pattern.
- It uses role="complementary" to mark the displayed captions as secondary content related to the video.
- The main transcript area uses aria-live="polite" and aria-atomic="false", ensuring that new captions are announced smoothly without requiring the screen reader to re-read the entire transcript history every time a new word appears.
- Each caption includes speaker identification, and auto-scrolling keeps the latest captions visible. Users receive status feedback indicating whether captions are active, loading, or in error.
// components/AccessibleVideo/TranscriptDisplay.tsx
import React, { useRef, useEffect } from 'react';
export interface TranscriptData {
sessionId: string;
text: string;
userId?: string;
timestamp: number;
isFinal: boolean;
speaker?: string;
}
interface TranscriptDisplayProps {
transcripts: TranscriptData[];
enabled: boolean;
status: 'idle' | 'starting' | 'active' | 'error';
}
export const TranscriptDisplay: React.FC<TranscriptDisplayProps> = ({
transcripts,
enabled,
status
}) => {
const captionsRef = useRef<HTMLDivElement>(null);
// Auto-scroll to latest caption
useEffect(() => {
if (captionsRef.current && transcripts.length > 0) {
captionsRef.current.scrollTop = captionsRef.current.scrollHeight;
}
}, [transcripts]);
const getStatusMessage = (): string => {
switch (status) {
case 'starting':
return 'Starting live captions...';
case 'active':
return 'Listening for speech... Powered by Stream Video closed captions.';
case 'error':
return 'Caption error. Try toggling captions off and on again.';
default:
return 'Click the captions button to enable live closed captions.';
}
};
if (!enabled) return null;
return (
<div
className="live-captions-container"
role="complementary"
aria-label="Live captions"
aria-live="polite"
aria-atomic="false"
>
<div
ref={captionsRef}
className="captions-content"
>
{transcripts.length > 0 ? (
transcripts.map((transcript) => (
<div
key={`${transcript.sessionId}-${transcript.timestamp}`}
className={`caption-item ${transcript.isFinal ? 'final' : 'interim'}`}
aria-label={`${transcript.speaker} said: ${transcript.text}`}
>
<strong className="caption-speaker" aria-hidden="true">
{transcript.speaker}:
</strong>
<span className="caption-text">
{' '}{transcript.text}
</span>
</div>
))
) : (
<div className="caption-placeholder">
<span className="caption-text">
{getStatusMessage()}
</span>
</div>
)}
</div>
<div className="captions-status" aria-live="polite">
<span className="sr-only">
Stream captions status: {status}
</span>
</div>
</div>
);
};
Caption Implementation in VideoContainer
The VideoContainer component is responsible for setting up and managing the real-time transcription feed, and it covers two primary accessibility concerns:
- Accessible Toggle Feedback: The toggleCaptions function uses the announce() hook to provide immediate feedback on critical state changes. If captions are toggled on, it announces "Starting live captions..." (polite). If the Stream API reports an error, it announces "Failed to toggle captions" (assertive).
- Real-Time Caption Announcement: The component uses a useEffect hook to listen directly to the Stream SDK's closed-caption events (call.on('call.closed_caption', ...)). When a new transcription event arrives, the handler performs three critical steps:
  - It associates the caption text with the correct speaker's name.
  - It updates the visible TranscriptDisplay component.
  - It immediately calls the announce() hook to narrate the speaker and caption text (e.g., "Participant: Welcome to the meeting"), ensuring deaf or hard-of-hearing users following the transcript via a screen reader get timely updates.
// components/AccessibleVideo/VideoContainer.tsx (excerpt)
const toggleCaptions = useCallback(async () => {
if (!transcriptionSupported) {
announce('Live captions not supported', 'assertive');
return;
}
if (!call) {
announce('No active call for captions', 'assertive');
return;
}
try {
setTranscriptionStatus('starting');
if (captionsEnabled) {
await call.stopClosedCaptions();
announce('Stopping live captions...', 'polite');
} else {
// Start captions with English language
await call.startClosedCaptions({ language: 'en' });
announce('Starting live captions...', 'polite');
}
} catch (error) {
console.error('Captions toggle error:', error);
setTranscriptionStatus('error');
const errorMessage = error instanceof Error ? error.message : 'Failed to toggle captions';
announce(errorMessage, 'assertive');
}
}, [call, captionsEnabled, transcriptionSupported, announce]);
// Handle caption events
useEffect(() => {
if (!call) return;
const handleClosedCaption = (event: any) => {
const closedCaption = event.closed_caption || {};
const captionText = closedCaption.text || '';
const speakerId = closedCaption.speaker_id || '';
if (!captionText.trim()) return;
const participant = participants.find(p =>
p.userId === speakerId || p.sessionId === speakerId
);
const speakerName = participant?.name || 'Participant';
const newTranscript: TranscriptData = {
sessionId: speakerId || `session-${Date.now()}`,
text: captionText.trim(),
userId: speakerId,
timestamp: Date.now(),
isFinal: true,
speaker: speakerName
};
setTranscripts(prev => [...prev, newTranscript].slice(-10));
announce(`${speakerName}: ${captionText}`, 'polite');
};
call.on('call.closed_caption', handleClosedCaption);
return () => {
call.off('call.closed_caption', handleClosedCaption);
};
}, [call, participants, announce]);
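The slice(-10) call above keeps the visible transcript to a bounded window of the ten most recent captions, so long calls never accumulate an unbounded list. The same behaviour as a standalone helper (a sketch; the project inlines this logic):

```typescript
// Bounded caption buffer: append a new entry and drop the oldest
// entries so at most `max` remain, mirroring the
// setTranscripts(prev => [...prev, newTranscript].slice(-10)) pattern.
function appendCapped<T>(list: T[], item: T, max = 10): T[] {
  return [...list, item].slice(-max);
}

const buffer = appendCapped([1, 2, 3], 4, 3);
console.log(buffer); // [2, 3, 4]
```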
Below is an image of two users in a video meeting with captions enabled:
Error State Handling
Accessible error announcements are critical for real-time applications. The VideoContainer monitors for connection issues and ensures instant, interruptive feedback. When a connection is lost, the handler triggers an obvious error banner and uses an assertive announcement. When the connection returns, it uses a polite announcement to confirm recovery. The error message container itself uses role="alert" and aria-live="assertive" to ensure maximum visibility for screen readers.
// In VideoContainer.tsx
const [connectionError, setConnectionError] = useState<string | null>(null);
useEffect(() => {
if (!call) return;
const handleConnectionError = (event: any) => {
const errorMessage = 'Connection lost. Attempting to reconnect...';
setConnectionError(errorMessage);
announce(errorMessage, 'assertive');
};
const handleReconnected = () => {
setConnectionError(null);
announce('Connection restored', 'polite');
};
// Note: participant join/leave events are used here as a simple proxy
// for connection changes; any participant leaving surfaces the banner.
call.on('call.session_participant_left', handleConnectionError);
call.on('call.session_participant_joined', handleReconnected);
return () => {
call.off('call.session_participant_left', handleConnectionError);
call.off('call.session_participant_joined', handleReconnected);
};
}, [call, announce]);
// Error display component
{connectionError && (
<div
className="error-banner"
role="alert"
aria-live="assertive"
>
<span role="img" aria-label="Error">⚠️</span>
<span>{connectionError}</span>
</div>
)}
Testing Accessibility
Keyboard-Only Testing
Keyboard-only testing is the single most critical manual validation step. It verifies that your custom Toolbar and Message List navigation logic works and ensures no user is trapped or confused by missing focus states.
The video below demonstrates the application's keyboard-only usage.
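Manual testing can be complemented with unit tests. The clamping behaviour inside navigateControls is easy to extract into a pure function and verify without a DOM. A minimal sketch (nextFocusIndex is an illustrative name, not part of the project):

```typescript
// Pure model of the toolbar's arrow-key focus movement: the next
// focused button index is clamped to the toolbar bounds, mirroring
// the Math.max/Math.min logic in navigateControls (no wrap-around).
function nextFocusIndex(
  currentIndex: number,
  direction: -1 | 1,
  buttonCount: number
): number {
  return Math.max(0, Math.min(buttonCount - 1, currentIndex + direction));
}

console.log(nextFocusIndex(1, 1, 4));  // moves right to index 2
console.log(nextFocusIndex(3, 1, 4));  // stays at the last button: 3
console.log(nextFocusIndex(0, -1, 4)); // stays at the first button: 0
```

Testing this logic in isolation makes regressions in the roving-focus behaviour easy to catch before a manual keyboard pass.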
Lighthouse Checks
Lighthouse is an open-source, automated tool developed by Google that audits web page quality.
The Accessibility Score shown in the image confirms the page passes all automated checks for foundational accessibility. A score of 100 validates the following aspects of your code:
- ARIA Compliance: All custom roles, states, and labels were implemented correctly.
- Color Contrast: Text and background colors meet the strict WCAG 2.1 legibility standards.
- Semantic Structure: The correct HTML elements and heading hierarchies were utilized.
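Beyond one-off audits in the browser, the accessibility score can be enforced on every commit with Lighthouse CI. The configuration below is a sketch that goes beyond this article's setup; the URL and threshold are assumptions for a local dev server.

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000"]
    },
    "assert": {
      "assertions": {
        "categories:accessibility": ["error", { "minScore": 1 }]
      }
    }
  }
}
```

With a lighthouserc.json like this, a drop below a perfect accessibility score fails the CI run instead of going unnoticed.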
Complete Production Code
The code examples in this article are simplified for readability and focus on key accessibility concepts. For the complete, production-ready implementation with all features, check out this repository.
Conclusion: Accessibility and Performance Trade-Offs
This project demonstrates how real-time chat and video applications can be made fully accessible by combining semantic HTML, dynamic ARIA updates, keyboard-first navigation, and custom screen reader announcements. Features like the announcement hook and accessible toolbar patterns ensure that users receive timely feedback and can operate the interface without relying on sight or a mouse.
Building for accessibility requires extra attention, from managing focus manually to testing ARIA attributes and ensuring that screen readers interpret rapid updates correctly. However, the result is a more inclusive and dependable experience for every user.