In part one of this two-part series, we focused on setting up our project, building the tab layout, and creating the home screens for our Signal clone. We also covered:
Integrating user authentication with Clerk
Installing Stream’s Chat SDK to support chat messaging
In this second part, we’ll add the screens that complete the chat experience: editing your profile, starting a new conversation, creating group chats, and finding someone by their username.
We’ll also build a fully functional chat screen with real-time messaging using the Stream React Native Chat SDK and implement a call screen using the Stream React Native Video and Audio SDK, enabling users to have real-time voice and video conversations.
You can download the APK and IPA builds or access the full code on GitHub.
Let’s get started!
Building the Modal Screens
At this point, users can sign up and view chats, but we’re still missing some essential features:
Users can’t edit their profile
There’s no way to start a new chat or create a group
To solve this, we’ll introduce a (modal) group that provides separate screens for these actions.
Updating the Home Layout
First, let’s update the (home)/_layout.tsx file to include the (modal) group:
...
const HomeLayout = () => {
...
return (
<OverlayProvider>
<Chat client={chatClient!}>
<Stack>
<Stack.Screen
name="(modal)"
options={{
presentation: 'modal',
headerShown: false,
}}
/>
...
</Stack>
</Chat>
</OverlayProvider>
);
};
export default HomeLayout;
Creating the Modal Layout
Next, let’s set up the modal layout.
Inside the (home) directory, create a (modal) folder, then add a _layout.tsx file to it with the following code:
import { Feather } from '@expo/vector-icons';
import { Stack, useRouter } from 'expo-router';
import Button from '@/components/Button';
const ModalLayout = () => {
const router = useRouter();
return (
<Stack
screenOptions={{
headerBackground: () => null,
headerTintColor: 'black',
headerBackButtonDisplayMode: 'minimal',
headerTitleAlign: 'center',
}}
>
<Stack.Screen
name="profile"
options={{
title: 'Profile',
headerLeft: () => (
<Button
variant="plain"
onPress={() => router.back()}
className="right-4"
>
<Feather name="chevron-left" size={32} />
</Button>
),
}}
/>
<Stack.Screen
name="new-message"
options={{
title: 'New Message',
headerLeft: () => (
<Button variant="text" onPress={() => router.back()}>
Cancel
</Button>
),
}}
/>
<Stack.Screen
name="new-group"
options={{
title: 'Select Members',
}}
/>
<Stack.Screen
name="find-by-username"
options={{
title: 'Find by Username',
}}
/>
</Stack>
);
};
export default ModalLayout;
Here, we set up a Stack navigator for our modal screens. We also apply a centered, transparent header to all screens, while the profile and new-message screens receive custom header buttons.
Building the Profile Screen
Now that we've added our modal layout, let’s work on the first modal screen: the profile screen.
The profile screen will allow users to:
Upload or remove a profile image
Edit their first and last names
Edit their username
We'll also ensure any profile updates are synced with Clerk and Stream, so the user's details remain consistent across the app.
First, let's create a component to handle the profile image.
In the components directory, create an ImageInput.tsx file and add the following code:
import { Feather } from '@expo/vector-icons';
import * as ImagePicker from 'expo-image-picker';
import { useEffect } from 'react';
import { Alert, Pressable } from 'react-native';
import Avatar from './Avatar';
interface ImageInputProps {
name?: string;
imageUri: string | null;
onChangeImage: (file: ImagePicker.ImagePickerAsset | null) => void;
}
function ImageInput({ name, imageUri, onChangeImage }: ImageInputProps) {
useEffect(() => {
requestPermission();
}, []);
const requestPermission = async () => {
try {
const { granted } =
await ImagePicker.requestMediaLibraryPermissionsAsync();
if (!granted)
alert('You need to enable permission to access the library');
} catch (error) {
console.error('Error requesting media library permissions:', error);
}
};
const handlePress = () => {
if (!imageUri) selectImage();
else
Alert.alert('Delete', 'Are you sure you want to delete this image?', [
{
text: 'Yes',
onPress: () => onChangeImage(null),
},
{ text: 'No' },
]);
return;
};
const selectImage = async () => {
try {
const result = await ImagePicker.launchImageLibraryAsync({
mediaTypes: ['images'],
quality: 1,
});
if (!result.canceled) {
onChangeImage(result.assets[0]);
return;
}
} catch (error) {
console.error('Error selecting image:', error);
}
};
return (
<Pressable onPress={handlePress} className="items-center justify-center">
<Avatar
imageUrl={imageUri!}
size={100}
fontSize={40}
name={name || 'User'}
placeholderType={name ? 'text' : 'icon'}
/>
{imageUri && (
<Feather
color="#ffffff78"
name="edit-2"
size={40}
className="absolute"
/>
)}
</Pressable>
);
}
export default ImageInput;
The ImageInput component uses Expo's ImagePicker to let users select an image for their profile.
Here’s how it works:
When the component mounts, it calls requestPermission to ask the user for access to their media library.
The component returns a Pressable that wraps around an Avatar. If an avatar image exists, an edit icon is displayed over the image to indicate it’s clickable.
When the user taps the image:
- If no image exists, it opens the media library so the user can select one.
- If an image does exist, it shows a confirmation alert asking if the user wants to delete it.
Next, let’s build our profile screen. Create a profile.tsx file in the (modal) folder with the following code:
import { useClerk, useUser } from '@clerk/clerk-expo';
import clsx from 'clsx';
import { ImagePickerAsset } from 'expo-image-picker';
import { useState } from 'react';
import { Text, TextInput, View } from 'react-native';
import { useChatContext } from 'stream-chat-expo';
import Button from '@/components/Button';
import ImageInput from '@/components/ImageInput';
import Screen from '@/components/Screen';
import TextField from '@/components/TextField';
import useUserForm from '@/hooks/useUserForm';
import { getError } from '@/lib/utils';
const ProfileScreen = () => {
const { user } = useUser();
const { client } = useChatContext();
const clerk = useClerk();
const usernameParts = user?.username?.split('_')!;
const initialFormValues = {
firstName: user?.firstName!,
lastName: user?.lastName!,
username: usernameParts[0],
usernameNumber: usernameParts[1],
};
const defaultImage: ImagePickerAsset = {
uri: user?.hasImage ? user?.imageUrl : '',
width: 100,
height: 100,
};
const {
firstName,
lastName,
username,
usernameNumber,
numberError,
onChangeFirstName,
onChangeLastName,
onChangeUsername,
onChangeNumber,
} = useUserForm(initialFormValues);
const [profileImage, setProfileImage] =
useState<ImagePickerAsset>(defaultImage);
const [loading, setLoading] = useState(false);
const submitDisabled =
loading || !username || !usernameNumber || !firstName || !lastName;
const updateProfile = async () => {
try {
setLoading(true);
const finalUsername = `${username}_${usernameNumber}`;
const result = await user?.update({
firstName,
lastName,
username: finalUsername,
});
await client.upsertUser({
id: result?.id!,
name: result?.fullName!,
username: result?.username!,
});
const updateUserImage = async (data: string | null) => {
try {
const imageResult = await clerk.user?.setProfileImage({
file: data,
});
await client.upsertUser({
id: result?.id!,
image: imageResult ? imageResult.publicUrl! : undefined,
});
} catch (error) {
console.error('Error updating user image:', error);
}
};
if (profileImage.uri) {
const response = await fetch(profileImage.uri);
const blob = await response.blob();
// Wrap the callback-based FileReader in a Promise so the conversion can be
// awaited; otherwise the success alert could fire before the upload runs.
const base64data = await new Promise<string>((resolve, reject) => {
const reader = new FileReader();
reader.onloadend = () => resolve(reader.result as string);
reader.onerror = () => reject(reader.error);
reader.readAsDataURL(blob);
});
await updateUserImage(base64data);
} else {
await updateUserImage(null);
}
alert('Profile updated successfully!');
} catch (error) {
getError(error);
console.error(error);
} finally {
setLoading(false);
}
};
return (
<Screen
viewClassName="pt-3 px-4 items-center gap-6"
loadingOverlay={loading}
>
<View className="items-center gap-3">
<ImageInput
name={user?.fullName!}
imageUri={profileImage.uri}
onChangeImage={(asset) =>
setProfileImage(asset ?? { ...defaultImage, uri: '' })
}
/>
<Text className="text-sm text-gray-400">
{username ? `${username}_${usernameNumber}` : 'Choose your username'}
</Text>
</View>
<View className="gap-3">
<TextField
value={firstName}
placeholder="First name"
onChangeText={onChangeFirstName}
/>
<TextField
value={lastName}
placeholder="Last name"
onChangeText={onChangeLastName}
/>
<View className="relative">
<TextField
autoCapitalize="none"
value={username}
placeholder="Username"
onChangeText={onChangeUsername}
className="pr-12"
/>
<View className="absolute right-3 top-3 flex-row gap-2">
<View className="w-0.5 h-5 bg-gray-300" />
<TextInput
keyboardType="number-pad"
maxLength={2}
value={usernameNumber}
onChangeText={onChangeNumber}
className="w-5 h-5 android:w-8 android:h-12 android:bottom-3.5"
/>
</View>
<Text
className={clsx(
'pl-2 pt-2 text-xs',
numberError ? 'text-red-500' : 'text-gray-500'
)}
>
{numberError ||
'Usernames are always paired with a set of numbers.'}
</Text>
</View>
</View>
<Button onPress={updateProfile} disabled={submitDisabled}>
Save
</Button>
</Screen>
);
};
export default ProfileScreen;
The ProfileScreen handles:
Initializing form values from Clerk's current user object.
Displaying the firstName, lastName, and username form fields and managing their states using the useUserForm hook.
Managing image selection using the ImageInput component and converting the chosen image into a base64 string during upload.
Once the user taps “Save,” their profile is updated on both Clerk and Stream.
Building a Hook to Manage Contacts
We need access to a user's contact list to develop features like starting a new chat or group conversation. In our app, contacts refer to people our signed-in user has previously interacted with via DMs.
To simplify how we fetch and filter these contacts, we'll build a reusable hook called useContacts.
In the hooks folder, create a useContacts.tsx file with the following code:
import { useEffect, useRef, useState } from 'react';
import { StreamChat, UserResponse } from 'stream-chat';
const useContacts = (
client: StreamChat,
setUsers?: (contacts: UserResponse[]) => void,
fetchContacts: boolean = true
) => {
const [contacts, setContacts] = useState<UserResponse[]>([]);
const [loadingContacts, setLoadingContacts] = useState(true);
const debounceTimeout = useRef<ReturnType<typeof setTimeout> | null>(null);
const cancelled = useRef(false);
useEffect(() => {
const getAllUsers = async () => {
try {
setLoadingContacts(true);
const userId = client.userID!;
const channels = await client.queryChannels({
type: 'messaging',
member_count: 2,
members: { $in: [userId] },
});
const dmChannels = channels.filter((channel) =>
channel.id?.startsWith('!members')
);
const contacts = dmChannels
.map((channel) => {
const members = Object.values(channel.state.members || {});
return (
members.find((m) => m.user_id !== client.userID)?.user || null
);
})
.filter(Boolean) as UserResponse[];
setContacts(contacts);
if (setUsers) {
setUsers(contacts);
}
} catch (error: any) {
console.error('Error fetching contacts:', error);
} finally {
setLoadingContacts(false);
}
};
if (fetchContacts) getAllUsers();
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
const debounceSearch = (
text: string,
reset: () => void,
filterFn: (query: string) => Promise<void> | void
) => {
const query = text.trimStart();
if (!query) {
if (debounceTimeout.current) clearTimeout(debounceTimeout.current);
cancelled.current = true;
reset();
return;
}
cancelled.current = false;
if (debounceTimeout.current) clearTimeout(debounceTimeout.current);
debounceTimeout.current = setTimeout(async () => {
if (cancelled.current) return;
await filterFn(query);
}, 200);
};
return {
contacts,
loadingContacts,
debounceSearch,
};
};
export default useContacts;
This hook handles three main responsibilities:
Fetching Contacts Automatically: On mount, the hook queries all direct message (DM) channels the current user is a part of and extracts the other members as contact suggestions. It filters for channels with two members (member_count: 2) and checks for channel IDs that follow the !members pattern, which we use for DM channels.
Tracking Loading State: While the contacts are being fetched, loadingContacts is set to true.
Search with Debouncing: The debounceSearch function lets us handle user searches (e.g., filtering contacts) without spamming requests on every keystroke. It introduces a 200ms delay and receives a callback that executes the search logic.
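The debounce-with-cancel pattern inside debounceSearch can be modeled as a standalone helper to see the timing behavior in isolation. This is a simplified sketch (createDebouncer is a hypothetical name, not part of the app code), assuming the same trim, cancel-on-empty-input, and single-pending-timer behavior as the hook:

```typescript
// Sketch of the debounce-with-cancel pattern used by useContacts.
function createDebouncer(delayMs: number) {
  let timer: ReturnType<typeof setTimeout> | null = null;
  let cancelled = false;

  return (text: string, reset: () => void, filterFn: (query: string) => void) => {
    const query = text.trimStart();
    // Every call clears any pending search so only the latest query runs.
    if (timer) clearTimeout(timer);

    if (!query) {
      // Empty input: cancel outstanding work and reset the results list.
      cancelled = true;
      reset();
      return;
    }

    cancelled = false;
    timer = setTimeout(() => {
      if (!cancelled) filterFn(query);
    }, delayMs);
  };
}
```

Because each keystroke restarts the timer, only the final query typed within the delay window ever reaches filterFn.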
Building the New Message Screen
Now that we have our useContacts hook in place, let’s put it to use by building a New Message screen. This screen allows users to:
Start a new conversation with someone they've previously chatted with
Navigate to the route for creating a new group chat
Navigate to the route for searching users by their username
Before we create our screen, we need a component to display a user's avatar and name in a list.
Create a UserCard.tsx file in the components folder and add the following code:
import { Text, View } from 'react-native';
import { UserResponse } from 'stream-chat';
import Avatar from './Avatar';
import Button from './Button';
interface UserCardProps {
onPress?: () => void;
user: UserResponse;
children?: React.ReactNode;
}
const UserCard = ({ children, onPress, user }: UserCardProps) => {
// @ts-expect-error - names
const name = user.name || `${user.first_name} ${user.last_name}`;
return (
<Button
variant="plain"
onPress={onPress}
className="bg-white flex-row items-center gap-2 py-3 px-4 rounded-xl"
>
<View className="h-10 w-10">
<Avatar name={name} imageUrl={user?.image} size={40} />
</View>
<View>
<Text className="text-base leading-5">{name}</Text>
</View>
{children}
</Button>
);
};
export default UserCard;
In the code above:
The UserCard accepts a UserResponse from Stream, along with an optional onPress callback.
The name is inferred from the user.name field or constructed from first_name and last_name.
When clicked, it triggers the passed-in callback, which is used to initiate a chat.
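That fallback can also be written as a small pure function. This is a sketch (displayName is a hypothetical helper, and first_name/last_name are custom fields this app stores on Stream users); it additionally coalesces missing fields, which the inline expression in UserCard does not:

```typescript
// Display-name fallback: prefer user.name, else build a name from the custom
// first_name/last_name fields, tolerating either one being absent.
interface NamedUser {
  name?: string;
  first_name?: string;
  last_name?: string;
}

const displayName = (user: NamedUser): string =>
  user.name || `${user.first_name ?? ''} ${user.last_name ?? ''}`.trim();
```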
Next, let’s create the new message screen. Create a new-message.tsx file in the (modal) folder and add the following code:
import { Entypo, Feather, MaterialIcons } from '@expo/vector-icons';
import { Link, useRouter } from 'expo-router';
import { Text, View } from 'react-native';
import { useChatContext } from 'stream-chat-expo';
import Button from '@/components/Button';
import Screen from '@/components/Screen';
import Spinner from '@/components/Spinner';
import UserCard from '@/components/UserCard';
import useContacts from '@/hooks/useContacts';
const NewMessageScreen = () => {
const router = useRouter();
const { client } = useChatContext();
const { contacts, loadingContacts } = useContacts(client);
const onSelectUser = async (userId: string) => {
const channel = client.getChannelByMembers('messaging', {
members: [client.userID!, userId],
});
router.dismissTo({
pathname: '/chat/[id]',
params: { id: channel.id! },
});
};
return (
<Screen viewClassName="pt-1 px-4">
<View className="w-full">
<Link href="/new-group" asChild>
<Button
variant="plain"
className="bg-white flex-row items-center justify-between rounded-t-lg pt-0.5"
>
<View className="px-4">
<MaterialIcons name="people-outline" size={24} color="black" />
</View>
<View className="flex-row flex-grow items-center justify-between gap-2 border-b border-gray-200">
<Text>New Group</Text>
<View className="p-2">
<Entypo name="chevron-small-right" size={24} color="gray" />
</View>
</View>
</Button>
</Link>
<Link href="/find-by-username" asChild>
<Button
variant="plain"
className="bg-white flex-row items-center justify-between rounded-b-lg pb-0.5"
>
<View className="px-4">
<Feather name="at-sign" size={24} color="black" />
</View>
<View className="flex-row flex-grow items-center justify-between gap-2">
<Text>Find by Username</Text>
<View className="p-2">
<Entypo name="chevron-small-right" size={24} color="gray" />
</View>
</View>
</Button>
</Link>
</View>
{loadingContacts && (
<View className="flex items-center justify-center py-4">
<Spinner />
</View>
)}
{!loadingContacts && contacts.length > 0 && (
<View className="flex flex-col gap-2 mt-4">
{contacts.map((contact) => (
<UserCard
key={contact.id}
user={contact}
onPress={() => onSelectUser(contact.id)}
/>
))}
</View>
)}
</Screen>
);
};
export default NewMessageScreen;
Here’s what happens in this screen:
At the top, we show two buttons:
- New Group takes the user to the group creation flow.
- Find by Username allows searching for users not in your existing contacts.
While the contact list is being fetched, a spinner is shown.
Once loaded, the contact list is shown using the UserCard component. Tapping a user:
- Attempts to retrieve a DM channel with just that user.
- Navigates the user directly into the chat screen using the channel ID.
Find by Username Screen
Sometimes we want to start a conversation with someone new, even if they aren’t in our contact list yet. To achieve this, we’ll build a screen that allows users to look up others by their usernames and initiate a chat.
First, let’s ensure our app can display individual chat views.
In your (home)/_layout.tsx file, add the route for the chat screen:
...
const HomeLayout = () => {
...
return (
<OverlayProvider>
<Chat client={chatClient!}>
<Stack>
...
<Stack.Screen
name="chat/[id]"
options={{
headerShown: false,
}}
/>
</Stack>
</Chat>
</OverlayProvider>
);
};
export default HomeLayout;
Next, create a new chat folder in the (home) directory, then create a [id].tsx file with the following snippet:
import { useLocalSearchParams } from 'expo-router';
import { Text } from 'react-native';
import Screen from '@/components/Screen';
const ChatScreen = () => {
const { id: channelId } = useLocalSearchParams<{ id: string }>();
return (
<Screen className="flex-1 bg-white" viewClassName="pb-safe">
<Text>{channelId}</Text>
</Screen>
);
};
export default ChatScreen;
For now, it simply displays the channel ID passed through the route. We'll come back and improve this later.
Next, let’s build out the screen where users can search by username.
In the (modal) folder, create a find-by-username.tsx file and add the following code:
import { useRouter } from 'expo-router';
import { useState } from 'react';
import { View } from 'react-native';
import { UserResponse } from 'stream-chat';
import { useChatContext } from 'stream-chat-expo';
import Screen from '@/components/Screen';
import Spinner from '@/components/Spinner';
import TextField from '@/components/TextField';
import UserCard from '@/components/UserCard';
import useContacts from '@/hooks/useContacts';
const FindByUsernameScreen = () => {
const { client } = useChatContext();
const router = useRouter();
const [username, setUsername] = useState('');
const [user, setUser] = useState<UserResponse | null>(null);
const [loading, setLoading] = useState(false);
const { debounceSearch } = useContacts(client, undefined, false);
const resetUser = () => {
setUser(null);
};
const search = async (query: string) => {
try {
setLoading(true);
const { users } = await client.queryUsers({
username: { $eq: query },
});
if (users.length > 0) {
setUser(users[0]);
}
} catch (error: any) {
console.error('Error fetching user:', error);
} finally {
setLoading(false);
}
};
const handleUserSearch = async (text: string) => {
setUsername(text);
debounceSearch(text, resetUser, search);
};
const onSelectUser = async (userId: string) => {
const channel = client.getChannelByMembers('messaging', {
members: [client.userID!, userId],
});
if (channel.id) {
router.dismissTo({
pathname: '/chat/[id]',
params: { id: channel.id },
});
} else {
await channel.create();
router.dismissTo({
pathname: '/chat/[id]',
params: { id: channel.data?.id! },
});
}
};
return (
<Screen viewClassName="pt-1 px-4 gap-4">
<TextField
id="username"
placeholder="Username"
value={username}
onChangeText={(value) => handleUserSearch(value)}
autoCapitalize="none"
/>
{loading && (
<View className="flex items-center justify-center py-4">
<Spinner />
</View>
)}
{!loading && user && (
<UserCard user={user} onPress={() => onSelectUser(user.id)} />
)}
</Screen>
);
};
export default FindByUsernameScreen;
In the code above:
We render a TextField to search for usernames.
Using the debounceSearch method from our useContacts hook, we prevent requesting on every keystroke.
On valid input, we call client.queryUsers with the username filter and update the user state accordingly.
If the user taps on a UserCard, we check if a channel already exists with the user. If not, we create one and navigate to it.
New Group Screen
Next, let’s add a New Group screen that allows users to create group chats with multiple members.
To do this, we’ll need two things:
Checkboxes so users can select multiple contacts
A way to generate unique group IDs for each chat
We can achieve this by using both expo-checkbox and expo-crypto. Run this command to install them:
npx expo install expo-checkbox expo-crypto
Next, create a new file named UserCheckbox.tsx in your components folder and add the following code:
import Checkbox from 'expo-checkbox';
import { View } from 'react-native';
import { UserResponse } from 'stream-chat';
import UserCard from './UserCard';
interface UserCheckboxProps {
user: UserResponse;
checked: boolean;
onValueChange: (value: boolean) => void;
}
const UserCheckbox = ({ user, checked, onValueChange }: UserCheckboxProps) => {
return (
<UserCard onPress={() => onValueChange(!checked)} user={user}>
<View className="flex items-center ml-auto">
<Checkbox
id={user.id}
value={checked}
onValueChange={onValueChange}
className="size-4 rounded border-2 border-color-borders-input"
/>
</View>
</UserCard>
);
};
export default UserCheckbox;
This component wraps the UserCard we built earlier and adds a checkbox on the right. When a user taps the card or the checkbox, it toggles a selection.
Next, create a new-group.tsx file in the (modal) folder, and add the following code:
import { getRandomBytesAsync } from 'expo-crypto';
import { useRouter } from 'expo-router';
import { useMemo, useState } from 'react';
import { ActivityIndicator, View } from 'react-native';
import { UserResponse } from 'stream-chat';
import { useChatContext } from 'stream-chat-expo';
import Button from '@/components/Button';
import Screen from '@/components/Screen';
import Spinner from '@/components/Spinner';
import TextField from '@/components/TextField';
import UserCheckbox from '@/components/UserCheckbox';
import useContacts from '@/hooks/useContacts';
const NewGroupScreen = () => {
const { client } = useChatContext();
const router = useRouter();
const [creatingGroup, setCreatingGroup] = useState(false);
const [query, setQuery] = useState('');
const [groupName, setGroupName] = useState('');
const [users, setUsers] = useState<UserResponse[]>([]);
const [selectedUsers, setSelectedUsers] = useState<string[]>([]);
const { contacts, loadingContacts, debounceSearch } = useContacts(
client,
setUsers
);
const resetUsers = () => {
setUsers(contacts);
};
const search = (query: string) => {
const users = contacts.filter((user) => {
// @ts-expect-error - name
const name = user.name || `${user.first_name} ${user.last_name}`;
return (
user.username?.toLowerCase().includes(query.toLowerCase()) ||
name.toLowerCase().includes(query.toLowerCase())
);
});
setUsers(users);
};
const handleUserSearch = (text: string) => {
setQuery(text);
debounceSearch(text, resetUsers, search);
};
const leave = () => {
setCreatingGroup(false);
setGroupName('');
setQuery('');
setSelectedUsers([]);
router.dismissTo('/chats');
};
const createNewGroup = async () => {
if (!groupName) {
alert('Please enter a group name.');
return;
}
if (selectedUsers.length === 0) {
alert('Please select at least one user.');
return;
}
setCreatingGroup(true);
try {
const bytes = await getRandomBytesAsync(7);
const alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890';
const id = Array.from(bytes)
.map((b) => alphabet[b % alphabet.length])
.join('');
const group = client.channel('messaging', id, {
members: [...selectedUsers, client.userID!],
// @ts-expect-error - name
name: groupName,
});
await group.create();
leave();
} catch (error) {
console.error(error);
alert('Error creating group');
} finally {
setCreatingGroup(false);
}
};
const onSelectUser = (userId: string, value: boolean) => {
setSelectedUsers((prevSelectedUsers) => {
if (value) {
return [...prevSelectedUsers, userId];
} else {
return prevSelectedUsers.filter((id) => id !== userId);
}
});
};
const sortedUsers = useMemo(
() =>
// Copy before sorting so we don't mutate the users state array in place.
[...users].sort((a, b) => (a.name ?? '').localeCompare(b.name ?? '')),
[users]
);
return (
<Screen viewClassName="pt-1 px-4 gap-4">
<TextField
id="groupName"
label="Group Name"
placeholder="Group Name"
value={groupName}
onChangeText={(value) => setGroupName(value)}
/>
<TextField
id="users"
label="Add Members"
placeholder="Who would you like to add?"
value={query}
onChangeText={(value) => handleUserSearch(value)}
autoCapitalize="none"
/>
{loadingContacts && (
<View className="flex items-center justify-center py-4">
<Spinner />
</View>
)}
{!loadingContacts && users.length > 0 && (
<View className="flex flex-col gap-2 mt-2">
{sortedUsers.map((user) => (
<UserCheckbox
key={user.id}
user={user}
checked={selectedUsers.includes(user.id)}
onValueChange={(value) => onSelectUser(user.id, value)}
/>
))}
</View>
)}
<Button
className="mt-auto"
onPress={createNewGroup}
disabled={creatingGroup}
>
{!creatingGroup && 'Create group'}
{creatingGroup && <ActivityIndicator />}
</Button>
</Screen>
);
};
export default NewGroupScreen;
Here’s how it works:
The screen renders two input fields using the TextField component:
- Group Name: Users must enter a name for the group. This is required before the group can be created.
- Add Members: This field allows users to search for contacts by username or name. As the user types, a debounced search is performed using the useContacts hook.
The list of users is filtered based on the search input and rendered below the input fields.
Each user is displayed using the UserCheckbox component.
When the user taps on a contact or toggles the checkbox, it updates the selectedUsers array by either adding or removing the user’s ID.
When the “Create group” button is pressed:
- The createNewGroup function runs.
- It checks that a group name is provided and at least one user is selected.
- A random group ID is generated using expo-crypto.
- A new Stream channel of type messaging is created with the selected users and the current user as members.
- The app then navigates back to the chat list using router.dismissTo('/chats').
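The ID-generation step can be pulled out as a pure function, which makes the byte-to-character mapping easy to reason about and test (a sketch; bytesToChannelId is a hypothetical helper name, not part of the tutorial's code). Note that reducing a byte (0–255) modulo 36 slightly favors the first characters of the alphabet, which is fine for non-secret channel IDs:

```typescript
// Maps random bytes (e.g., from expo-crypto's getRandomBytesAsync) to a
// short alphanumeric channel ID, mirroring the logic in createNewGroup.
const ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890';

const bytesToChannelId = (bytes: Uint8Array): string =>
  Array.from(bytes)
    .map((b) => ALPHABET[b % ALPHABET.length])
    .join('');
```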
Building the Chat Screen
With our modal screens in place, we can start working on the chat screen. This is where users will see their messages, type replies, and initiate calls.
First, create a ChannelTitle.tsx file in the components folder and add the following code:
import { Text } from 'react-native';
import { Channel } from 'stream-chat';
import { useChannelPreviewDisplayName } from 'stream-chat-expo';
interface ChannelTitleProps {
channel: Channel;
className?: string;
}
const ChannelTitle = ({
channel,
className = 'text-base font-bold',
}: ChannelTitleProps) => {
const channelName = useChannelPreviewDisplayName(channel);
return <Text className={className}>{channelName}</Text>;
};
export default ChannelTitle;
We’ll use this component to display the channel’s title in the chat screen.
Next, update the chat/[id].tsx file with the following code:
import Feather from '@expo/vector-icons/Feather';
import Ionicons from '@expo/vector-icons/Ionicons';
import { useLocalSearchParams, useRouter } from 'expo-router';
import { useEffect, useState } from 'react';
import { View } from 'react-native';
import { Channel as ChannelType } from 'stream-chat';
import {
Channel,
MessageInput,
MessageList,
useChatContext,
} from 'stream-chat-expo';
import Button from '@/components/Button';
import ChannelTitle from '@/components/ChannelTitle';
import PreviewAvatar from '@/components/PreviewAvatar';
import Screen from '@/components/Screen';
import ScreenLoading from '@/components/ScreenLoading';
const ChatScreen = () => {
const { id: channelId } = useLocalSearchParams<{ id: string }>();
const { client: chatClient } = useChatContext();
const router = useRouter();
const [channel, setChannel] = useState<ChannelType>();
const [loading, setLoading] = useState(true);
useEffect(() => {
const loadChannel = async () => {
const channel = chatClient.channel('messaging', channelId);
await channel.watch();
setChannel(channel);
setLoading(false);
};
if (chatClient && !channel) loadChannel();
}, [channelId, channel, chatClient]);
if (loading) {
return <ScreenLoading />;
}
return (
<Screen className="flex-1 bg-white" viewClassName="pb-safe">
<View className="pl-1 pr-4 pb-1 flex flex-row items-center justify-between w-full h-10">
<View className="flex flex-row items-center gap-4">
<Button variant="plain" onPress={() => router.back()}>
<Ionicons name="chevron-back" size={24} color="black" />
</Button>
<PreviewAvatar channel={channel!} size={28} fontSize={14} />
<ChannelTitle channel={channel!} />
</View>
<View className="flex flex-row items-center gap-6">
<Button variant="plain">
<Feather name="video" size={24} color="black" />
</Button>
<Button variant="plain">
<Feather name="phone" size={22} color="black" />
</Button>
</View>
</View>
<Channel
channel={channel!}
keyboardVerticalOffset={60}
keyboardBehavior="padding"
hasCommands={false}
reactionListPosition="bottom"
>
<MessageList />
<MessageInput />
</Channel>
</Screen>
);
};
export default ChatScreen;
In the code above:
We extract the channel ID from the route using useLocalSearchParams.
We initialize the Stream channel instance with client.channel(...) and call the watch method to fetch messages and events.
While the channel is loading, we show a ScreenLoading component.
Once the channel is loaded, we render a header that:
- Contains a back button to return to the previous screen.
- Shows the chat avatar and title using PreviewAvatar and ChannelTitle.
- Includes two plain buttons to initiate video or audio calls (though they’re just UI here for now).
For our messages, we render the following Stream components:
- MessageList: Displays the list of chat messages.
- MessageInput: Shows the input field for sending messages.
These two components are wrapped inside the Channel provider, which manages the chat context.
Customizing the Channel UI
Now that we have our chat screen set up, let’s enhance its look and feel.
Enabling Audio Recording
We want to enable audio recording only when the input is empty and no images are selected. To do that, we’ll create a CustomMessageInput component.
Create a CustomMessageInput.tsx file in the components directory and add the following code:
import { TextComposerState } from 'stream-chat';
import {
MessageInput,
useAttachmentManagerState,
useMessageComposer,
useStateStore,
} from 'stream-chat-expo';
const textComposerStateSelector = (state: TextComposerState) => ({
text: state.text,
});
const CustomMessageInput = () => {
const { textComposer } = useMessageComposer();
const { text } = useStateStore(textComposer.state, textComposerStateSelector);
const { attachments } = useAttachmentManagerState();
const audioRecordingEnabled = !text && attachments.length === 0;
return <MessageInput audioRecordingEnabled={audioRecordingEnabled} />;
};
export default CustomMessageInput;
Here’s what’s happening:
We use the useMessageComposer hook to access the current text input state.
We also check for any selected attachments using useAttachmentManagerState.
If there’s no text and no attachments, we enable the audio recording option in the MessageInput component.
Now, update the chat screen to use this component instead of the default MessageInput:
...
import {
Channel,
MessageList,
useChatContext,
} from 'stream-chat-expo';
...
import CustomMessageInput from '@/components/CustomMessageInput';
...
const ChatScreen = () => {
...
return (
...
<Channel
...
>
<MessageList />
<CustomMessageInput />
</Channel>
...
);
};
export default ChatScreen;
Adding a Message List Header
Next, we want a custom header to always be visible in the chat, even if the conversation has no messages yet.
Create a MessageListHeader.tsx file in the components folder and add the following code:
import { MaterialIcons } from '@expo/vector-icons';
import { Text, View } from 'react-native';
import {
useChannelContext,
useChannelPreviewDisplayName,
} from 'stream-chat-expo';
import { checkIfDMChannel } from '../lib/utils';
import ChannelTitle from './ChannelTitle';
import PreviewAvatar from './PreviewAvatar';
const MessageListHeader = () => {
const { channel } = useChannelContext();
const channelName = useChannelPreviewDisplayName(channel);
const isDMChannel = checkIfDMChannel(channel);
const text = isDMChannel
? `This conversation is just between ${channelName} and you`
: 'This conversation is just between the members of this channel';
return (
<View className="items-center gap-3 mt-14 mb-8">
<PreviewAvatar channel={channel!} size={80} fontSize={40} />
<ChannelTitle channel={channel} className="text-2xl font-semibold" />
<View className="w-[280px] items-start justify-center inline-flex flex-row px-6 py-4 bg-white rounded-xl border-[2px] border-gray-100 shadow shadow-gray-100">
<MaterialIcons name="people-outline" size={18} color="black" />
<Text className="text-center">{text}</Text>
</View>
</View>
);
};
export default MessageListHeader;
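The MessageListHeader above imports checkIfDMChannel from our utils. In case you're following along without part one's code, here is a hypothetical sketch of what that helper might look like. The actual implementation may differ; this sketch assumes a DM is a messaging channel with exactly two members, and the MinimalChannel type is a stand-in for Stream's Channel object:

```typescript
// Hypothetical sketch of the checkIfDMChannel helper (assumption: a DM
// channel is a "messaging" channel with exactly two members).
type MinimalChannel = {
  type?: string;
  state: { members: Record<string, unknown> };
};

const checkIfDMChannel = (channel: MinimalChannel): boolean =>
  channel.type === 'messaging' &&
  Object.keys(channel.state.members).length === 2;
```
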
To render this header in the chat screen, we’ll pass it to the Channel component’s EmptyStateIndicator prop and the MessageList’s FooterComponent prop.
Update the chat/[id].tsx file with the following code:
...
import MessageListHeader from '@/components/MessageListHeader';
...
const ChatScreen = () => {
...
return (
...
<Channel
...
EmptyStateIndicator={MessageListHeader}
>
<MessageList FooterComponent={MessageListHeader} />
<CustomMessageInput />
</Channel>
...
);
};
export default ChatScreen;
Custom Buttons
Next, let’s override the default attach and send buttons to better match our UI. To achieve this, we'll use the useMessageInputContext hook provided by Stream. This hook allows access to essential messaging actions, such as toggling the attachment picker or sending a message.
Create an AttachButton.tsx file in the components folder with the following code:
import { Feather } from '@expo/vector-icons';
import clsx from 'clsx';
import { AttachButtonProps, useMessageInputContext } from 'stream-chat-expo';
import Button from './Button';
const AttachButton = ({ disabled }: AttachButtonProps) => {
const { toggleAttachmentPicker, selectedPicker } = useMessageInputContext();
const isActive = selectedPicker === 'images';
return (
<Button
variant="plain"
disabled={disabled}
onPress={toggleAttachmentPicker}
className={clsx(
'p-0.5 rotate-[0deg]',
isActive && 'bg-gray-600 rounded-full rotate-[45deg]'
)}
>
<Feather name="plus" size={24} color={isActive ? 'white' : 'black'} />
</Button>
);
};
export default AttachButton;
Here we create a custom attach button that uses the toggleAttachmentPicker function from the useMessageInputContext hook to open and close the attachment selection UI. We also check whether the attachment picker is active using the selectedPicker state from the same hook.
Next, create a SendButton.tsx file in the components folder with the following snippet:
import { Feather } from '@expo/vector-icons';
import clsx from 'clsx';
import {
useAttachmentManagerState,
useMessageComposer,
useMessageInputContext,
useStateStore,
} from 'stream-chat-expo';
import { TextComposerState } from 'stream-chat';
import Button from './Button';
const textComposerStateSelector = (state: TextComposerState) => ({
text: state.text,
});
const SendButton = () => {
const { sendMessage } = useMessageInputContext();
const { textComposer } = useMessageComposer();
const { text } = useStateStore(textComposer.state, textComposerStateSelector);
const { attachments } = useAttachmentManagerState();
if (!text && attachments.length === 0) return null;
return (
<Button
variant="plain"
onPress={sendMessage}
className={clsx('p-0.5 bg-primary rounded-full')}
>
<Feather name="arrow-up" size={24} color="white" />
</Button>
);
};
export default SendButton;
Here, we extract text, attachments, and the sendMessage function from their respective hooks.
We only show the send button when there is text or an attachment ready to send. When the user taps the button, we call the sendMessage function.
Finally, let’s register these new custom buttons within the Channel component to replace Stream’s default buttons:
...
import AttachButton from '@/components/AttachButton';
import SendButton from '@/components/SendButton';
...
const ChatScreen = () => {
...
return (
...
<Channel
...
AttachButton={AttachButton}
SendButton={SendButton}
>
...
</Channel>
...
);
};
export default ChatScreen;
Customizing the Message Avatar
Next, we’ll customize the avatar used in messages to maintain a consistent visual style across our app. To do this, we'll replace Stream’s default avatar component with our own.
Create a new file called MessageAvatar.tsx in the components folder and add this snippet:
import { View } from 'react-native';
import { useMessageContext, useTheme } from 'stream-chat-expo';
import Avatar from './Avatar';
const MessageAvatar = () => {
const { alignment, lastGroupMessage, message, showAvatar } =
useMessageContext();
const {
theme: {
messageSimple: {
avatarWrapper: { container, leftAlign, rightAlign, spacer },
},
},
} = useTheme();
const visible =
typeof showAvatar === 'boolean' ? showAvatar : lastGroupMessage;
if (!visible) return <View style={spacer} />;
return (
<View style={[alignment === 'left' ? leftAlign : rightAlign, container]}>
<Avatar
size={28}
name={message?.user?.name!}
fontSize={14}
imageUrl={message?.user?.image}
placeholderType="text"
/>
</View>
);
};
export default MessageAvatar;
Here’s what the code does:
Stream's useMessageContext hook gives us information about the current message, such as who sent it, whether to show the avatar, and where it's aligned (left or right).
The useTheme hook lets us access style values like spacing and alignment from the current theme.
We check whether the avatar should be visible: either showAvatar is explicitly true, or it's the last message in a group.
If it's visible, we render our custom Avatar component, positioning it based on whether the message is incoming or outgoing.
If the avatar isn't needed, we return an empty spacer to keep the alignment consistent.
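The visibility rule can be isolated as a tiny pure function. This is an illustrative sketch of the logic, not part of the Stream SDK:

```typescript
// Sketch of the avatar-visibility rule: an explicit showAvatar boolean
// wins; otherwise the avatar appears only on the last message of a group.
const shouldShowAvatar = (
  showAvatar: boolean | undefined,
  lastGroupMessage: boolean | undefined
): boolean =>
  typeof showAvatar === 'boolean' ? showAvatar : !!lastGroupMessage;
```
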
Next, update the chat/[id].tsx file to register the new message avatar:
...
import MessageAvatar from '@/components/MessageAvatar';
...
const ChatScreen = () => {
...
return (
...
<Channel
...
MessageAvatar={MessageAvatar}
>
...
</Channel>
...
);
};
export default ChatScreen;
Customizing Stream Components Using Themes
Most Stream components allow you to change their appearance using a built-in theming system. Stream uses a default theme that sets basic styles like colors, spacing, and fonts.
If you want to customize certain elements, you can provide a partial theme that only changes the specified elements. Anything you don't specify will use Stream's default styles.
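Conceptually, the SDK deep-merges your partial theme over its defaults. The sketch below illustrates that merge behavior with a generic helper; it is not Stream's actual implementation, and the defaults object is made up for the example:

```typescript
// Illustrative deep merge: keys present in the partial override the base,
// nested objects are merged recursively, everything else keeps its default.
type Dict = { [k: string]: unknown };

const deepMerge = (base: Dict, partial: Dict): Dict => {
  const out: Dict = { ...base };
  for (const key of Object.keys(partial)) {
    const b = base[key];
    const p = partial[key];
    out[key] =
      b !== null && p !== null &&
      typeof b === 'object' && typeof p === 'object' &&
      !Array.isArray(b) && !Array.isArray(p)
        ? deepMerge(b as Dict, p as Dict)
        : p;
  }
  return out;
};

// Hypothetical defaults: overriding colors.bg leaves colors.text and
// spacing untouched.
const defaults = { colors: { bg: 'white', text: 'black' }, spacing: 8 };
const merged = deepMerge(defaults, { colors: { bg: '#175dee' } });
```
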
For example, to make your messages stand out in the chat, you can create a partial theme that changes only the appearance of your messages, such as background and text colors.
In the chat screen (chat/[id].tsx), create a myMessageTheme object and apply it to the Channel component as shown below:
...
import {
...
DeepPartial,
Theme,
} from 'stream-chat-expo';
...
const myMessageTheme: DeepPartial<Theme> = {
messageSimple: {
content: {
senderMessageBackgroundColor: '#175dee',
markdown: {
text: {
color: 'white',
},
},
},
},
};
const ChatScreen = () => {
...
return (
...
<Channel
...
myMessageTheme={myMessageTheme}
>
...
</Channel>
...
);
};
export default ChatScreen;
In the code above, we define:
senderMessageBackgroundColor: Changes the background color of your sent messages.
markdown.text.color: Changes the text color within your messages.
We can define a partial theme and pass it to the Chat provider for broader styling changes across the app, such as modifying the channel list UI or adjusting message spacing.
In the (home) folder, update the _layout.tsx file with the following code:
...
import {
Chat,
DeepPartial as ChatDeepPartial,
Theme as ChatTheme,
OverlayProvider,
} from 'stream-chat-expo';
...
const chatTheme: ChatDeepPartial<ChatTheme> = {
colors: {
white_snow: 'white',
},
channelPreview: {
container: {
borderBottomWidth: 0,
paddingLeft: 0,
},
title: {
fontWeight: '500',
},
unreadContainer: {
backgroundColor: '#2c6bed',
},
},
messageList: {
contentContainer: {
justifyContent: 'flex-end',
flexGrow: 1,
},
},
inlineDateSeparator: {
container: {
backgroundColor: 'transparent',
},
text: {
color: '#6B7280',
fontSize: 12,
fontWeight: '600',
},
},
messageSimple: {
content: {
receiverMessageBackgroundColor: '#e9e9e9',
textContainer: {
paddingHorizontal: 10,
},
},
},
messageInput: {
container: {
borderTopWidth: 0,
},
inputBoxContainer: {
backgroundColor: '#eeeeef',
borderRadius: 20,
paddingHorizontal: 0,
paddingVertical: 6,
borderColor: '#eeeeef',
},
audioRecordingButton: {
micIcon: {
fill: 'black',
width: 24,
height: 24,
style: {
marginHorizontal: 2,
},
},
},
},
};
const HomeLayout = () => {
...
return (
<OverlayProvider>
<Chat client={chatClient!} style={chatTheme}>
...
</Chat>
</OverlayProvider>
);
};
export default HomeLayout;
Here, we define custom styles for several key components: the channel preview, the message list and its inline date separators, received message bubbles, and the message input.
Adding Video and Audio Calling
Now that we have messaging in place, let’s add support for real-time video and audio calls using the Stream Video SDK.
Installing Stream Video SDK
Run the following command to install all the required packages:
npx expo install @stream-io/video-react-native-sdk \
@stream-io/react-native-webrtc \
@config-plugins/react-native-webrtc \
react-native-incall-manager \
react-native-svg \
@react-native-community/netinfo \
expo-build-properties
Next, update your app.json file to include the required plugins and settings:
{
"expo": {
...
"plugins": [
...
"expo-dev-client",
[
"expo-build-properties",
{
"android": {
"minSdkVersion": 24
}
}
],
[
"@stream-io/video-react-native-sdk",
{
"iOSEnableMultitaskingCameraAccess": true,
"androidPictureInPicture": true
}
],
[
"@config-plugins/react-native-webrtc",
{
"cameraPermission": "$(PRODUCT_NAME) requires camera access in order to capture and transmit video",
"microphonePermission": "$(PRODUCT_NAME) requires microphone access in order to capture and transmit audio"
}
]
],
...
}
}
Setting Up a Development Build
Expo Go doesn't support custom native modules that aren’t included by default. Since Stream Video SDK relies on native code (such as camera and microphone modules), we must create a custom development client. Here’s how to set this up:
1. Install the Expo Dev Client. Run this command to enable your app to include custom native modules:
npx expo install expo-dev-client
2. Install the EAS CLI. This CLI tool is used to build and configure development clients. Install it globally:
npm i -g eas-cli
3. Log in to Expo. Connect your project to your Expo account:
eas login
If you don't have an Expo account yet, you can create one on their website.
4. Initialize EAS in your project. This creates an eas.json file for configuring builds:
eas init
5. Configure native build settings. Set up your app for both development and production builds:
eas build:configure
6. Run the app on your device or simulator. Finally, build and launch your app using the custom development client.
For iOS:
npx expo run:ios
For Android:
npx expo run:android
If you run into any build errors, try clearing your node_modules and reinstalling:
rm -rf node_modules
npm install
npx expo run:ios # or run:android
This usually resolves issues caused by mismatched dependencies or native module linking.
With our development build up and running, we can begin setting up the SDK.
Initializing the Stream Video Client
To make video calls work throughout the app, we must wrap our navigation stack in the StreamVideo provider.
Open the _layout.tsx file in the (home) directory, and update it with the following code:
...
import {
StreamVideo,
StreamVideoClient,
} from '@stream-io/video-react-native-sdk';
...
const HomeLayout = () => {
...
const [videoClient, setVideoClient] = useState<StreamVideoClient>();
useEffect(() => {
...
const setUpStream = async () => {
try {
...
const videoClient = StreamVideoClient.getOrCreateInstance({
apiKey: API_KEY,
user: chatUser,
tokenProvider: customProvider,
});
setVideoClient(videoClient);
} catch (error) {
console.error('Error setting up Stream:', error);
} finally {
setLoading(false);
}
};
if (user) setUpStream();
return () => {
if (!isSignedIn) {
chatClient?.disconnectUser();
videoClient?.disconnectUser();
}
};
}, [user, videoClient, chatClient, isSignedIn, router]);
if (loading) return <ScreenLoading />;
return (
<OverlayProvider>
<Chat client={chatClient!}>
<StreamVideo client={videoClient!}>
<Stack>
...
<Stack.Screen
name="call/[id]"
options={{
headerShown: false,
animation: 'none',
}}
/>
</Stack>
</StreamVideo>
</Chat>
</OverlayProvider>
);
};
export default HomeLayout;
In the code above, we:
Initialize the Stream Video client using the same user and token provider as the chat client.
Store the videoClient in local state and ensure it's only created after the user is loaded.
Wrap our existing navigation stack in the StreamVideo provider, just like we did for Chat.
Add a route for call/[id] inside the navigation stack.
Ensure that both the chat client and video client are disconnected if the user is not signed in.
Adding Call Functionality to the Chat Screen
Next, let’s update the chat screen to allow users to initiate video or audio calls.
In the chat folder, open your [id].tsx file and update it with the following changes:
...
import { useCalls } from '@stream-io/video-react-native-sdk';
...
const ChatScreen = () => {
...
const [activeCall] = useCalls();
useEffect(() => {
...
}, [channelId, channel, chatClient]);
const startAudioCall = async () => {
router.navigate({
pathname: `/call/[id]`,
params: { id: channelId, updateCall: 'true' },
});
};
const startVideoCall = async () => {
router.navigate({
pathname: `/call/[id]`,
params: { id: channelId, updateCall: 'true', video: 'true' },
});
};
const callIsActive = !!activeCall && activeCall?.id !== channelId;
if (loading) {
...
}
return (
<Screen className="flex-1 bg-white" viewClassName="pb-safe">
<View className="pl-1 pr-4 pb-1 flex flex-row items-center justify-between w-full h-10">
...
<View className="flex flex-row items-center gap-6">
<Button
variant="plain"
onPress={startVideoCall}
disabled={callIsActive}
>
<Feather name="video" size={24} color="black" />
</Button>
<Button
variant="plain"
onPress={startAudioCall}
disabled={callIsActive}
>
<Feather name="phone" size={22} color="black" />
</Button>
</View>
</View>
...
</Screen>
);
};
export default ChatScreen;
In this update:
We use the useCalls hook to check for an ongoing call, which helps prevent users from joining or initiating multiple calls simultaneously.
The startAudioCall and startVideoCall functions navigate the user to the /call/[id] route. This is where we'll later build the actual call screen.
The id param represents the call ID. It's used to identify which call corresponds to which chat.
We pass updateCall: 'true' so the user initiating the call can update its state and metadata as needed.
If it's a video call, we also pass video: 'true' to make that intent clear in the call screen setup logic.
The buttons are added to the top-right of the chat header and disabled if another call is already active (and not from this chat).
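The disable logic for the call buttons boils down to a small predicate. This is an illustrative sketch of the callIsActive check, not code lifted from the app:

```typescript
// Sketch of the guard: the buttons are disabled only when there is an
// active call and it belongs to a different chat than the current one.
const isCallButtonDisabled = (
  activeCallId: string | undefined,
  channelId: string
): boolean => !!activeCallId && activeCallId !== channelId;
```
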
Building the Call Layout
Now, let’s define what happens when a call is started or received.
Create a call/[id] folder in the (home) directory, and then add a _layout.tsx file with the following code:
import {
Call,
MemberRequest,
StreamCall,
useStreamVideoClient,
} from '@stream-io/video-react-native-sdk';
import { Stack, useGlobalSearchParams, useRouter } from 'expo-router';
import { useEffect, useState } from 'react';
import { useChatContext } from 'stream-chat-expo';
import ScreenLoading from '@/components/ScreenLoading';
import { checkIfDMChannel } from '@/lib/utils';
const CallLayout = () => {
const { id, updateCall, video } = useGlobalSearchParams();
const router = useRouter();
const [call, setCall] = useState<Call>();
const videoClient = useStreamVideoClient();
const { client: chatClient } = useChatContext();
useEffect(() => {
const startCall = async () => {
const channel = chatClient.channel('messaging', id);
await channel.watch();
const _call = videoClient?.call('default', id as string);
const isDMChannel = checkIfDMChannel(channel);
const members = Object.values(channel?.state.members!).map<MemberRequest>(
(member) => ({
user_id: member.user?.id as string,
name: member.user?.name as string,
role: isDMChannel ? 'admin' : undefined,
})
);
const callConfig = {
custom: {
triggeredBy: chatClient.user?.id,
members,
},
settings_override: {
video: {
enabled: true,
camera_default_on: video === 'true',
target_resolution: {
width: 2560,
height: 1440,
},
},
},
};
await _call?.getOrCreate({
ring: true,
data: {
members,
...callConfig,
},
});
if (updateCall === 'true') {
await _call?.update(callConfig);
}
if (!isDMChannel && updateCall === 'true') {
try {
await _call?.join({ maxJoinRetries: 3 });
} catch {
router.back();
}
}
setCall(_call);
};
startCall();
// eslint-disable-next-line react-hooks/exhaustive-deps
}, []);
if (!call) {
return <ScreenLoading />;
}
return (
<StreamCall call={call}>
<Stack
screenOptions={{
headerShown: false,
}}
/>
</StreamCall>
);
};
export default CallLayout;
In the code above:
We grab the call ID, updateCall, and video flags from the URL params.
We use the useStreamVideoClient hook to access the Stream Video client and useChatContext to retrieve the current chat client instance.
We initialize a video call using videoClient.call(...).
We map over the channel members and convert them to the format expected by Stream Video.
We configure the call with:
- Custom data: triggeredBy and members
- Settings: we enable the video feature and set the default camera state based on whether video === 'true'.
We use the getOrCreate function to start the call and ring the participants.
If updateCall === 'true', it means the user initiated the call, so we update the call's metadata.
For group chats, if the user is initiating the call, we automatically attempt to join it. If joining fails, we navigate back to the previous screen.
Finally, we wrap the screen stack in the StreamCall component to provide the call context.
Configuring Call Permissions
To ensure users can update calls, we need to configure permissions in the Stream dashboard:
Navigate to the "Roles & Permissions" tab under "Video & Audio."
Select the "user" role and the "default" scope.
Click the “Edit” button and select the "Update Call" permission.
Save your changes.
Building the Call Screen
With our call layout and permissions set up, let’s move on to the actual call interface, which handles video and audio calls.
But before we dive into the call screen, we’ll need a small custom component to tweak the default video button styling.
Create a ToggleVideo.tsx file inside your components folder with the following code:
import { ToggleVideoPublishingButton } from '@stream-io/video-react-native-sdk';
import { View } from 'react-native';
const ToggleVideo = () => {
return (
<View className="w-12 h-12 rounded-full items-center justify-center bg-[#373737] p-4">
<ToggleVideoPublishingButton />
</View>
);
};
export default ToggleVideo;
Next, let’s create the actual call screen.
Create an index.tsx file in the call/[id] directory and add the following code:
import { useUser } from '@clerk/clerk-expo';
import {
CallContent,
CallingState,
DeepPartial,
HangUpCallButton,
IncomingCall,
OutgoingCall,
RingingCallContent,
StreamTheme,
Theme,
ToggleCameraFaceButton,
ToggleAudioPublishingButton as ToggleMic,
useCall,
useCallStateHooks,
} from '@stream-io/video-react-native-sdk';
import { useGlobalSearchParams, useRouter } from 'expo-router';
import { useEffect } from 'react';
import { View } from 'react-native';
import ToggleVideo from '@/components/ToggleVideo';
const svgContainerStyle = {
backgroundColor: '#373737',
width: 48,
height: 48,
borderRadius: 24,
};
// @ts-expect-error
const theme: DeepPartial<Theme> = {
colors: {
buttonSecondary: '#373737',
buttonWarning: '#373737',
},
joinCallButton: {
container: svgContainerStyle,
},
acceptCallButton: {
container: svgContainerStyle,
},
rejectCallButton: {
container: svgContainerStyle,
},
toggleAudioPublishingButton: {
svgContainer: svgContainerStyle,
},
toggleCameraFaceButton: {
svgContainer: svgContainerStyle,
},
hangupCallButton: {
svgContainer: {
...svgContainerStyle,
backgroundColor: '#eb5545',
},
},
participantVideoFallback: {
container: {
backgroundColor: '#1c1c1ecf',
},
},
};
const CallScreen = () => {
const { updateCall } = useGlobalSearchParams();
const { user } = useUser();
const call = useCall();
const router = useRouter();
const { useCallCallingState, useCallCustomData } = useCallStateHooks();
const callingState = useCallCallingState();
const customData = useCallCustomData();
const isCallTriggeredByMe =
customData.triggeredBy === user?.id || updateCall === 'true';
useEffect(() => {
if (callingState === CallingState.LEFT) {
router.back();
}
}, [callingState, router, call]);
if (
[CallingState.RINGING, CallingState.JOINING, CallingState.IDLE].includes(
callingState
)
) {
return (
<StreamTheme style={theme}>
<View className="flex-1 bg-black">
<View className="flex-1 bg-white">
{!isCallTriggeredByMe && <IncomingCall />}
{isCallTriggeredByMe && <OutgoingCall />}
</View>
</View>
</StreamTheme>
);
}
return (
<StreamTheme style={theme}>
<View className="flex-1 bg-black">
<View className="flex-1 pt-safe bg-white">
<RingingCallContent
CallContent={(props) => (
<CallContent
{...props}
layout="spotlight"
onHangupCallHandler={async () => {
await call?.endCall();
}}
CallControls={(props) => (
<View className="bg-[#1c1c1e] w-full h-[110px] pt-7 flex-row justify-center gap-4 rounded-t-2xl">
<ToggleCameraFaceButton />
<ToggleVideo />
<ToggleMic />
<HangUpCallButton
onHangupCallHandler={props.onHangupCallHandler}
/>
</View>
)}
/>
)}
/>
</View>
</View>
</StreamTheme>
);
};
export default CallScreen;
Let’s go over what’s happening in this screen:
We create a custom theme to override styles for various Stream UI elements like buttons and fallback views.
We use useCallStateHooks to access the current call state and any custom metadata attached to the call:
- useCallCallingState tells us what stage the call is in (ringing, idle, joining, connected, or left).
- useCallCustomData lets us access any extra data added to the call, like who initiated it.
We check whether the user triggered the call using the triggeredBy and updateCall values. This way, we know whether to render the OutgoingCall or IncomingCall UI.
If the call ends (CallingState.LEFT), we navigate back to the previous screen using router.back().
If the call is still in progress but not yet accepted (ringing or joining), we show either IncomingCall or OutgoingCall, depending on who started it.
Once connected, we render CallContent and pass in a custom control layout with buttons to:
- Toggle the camera face
- Toggle video
- Mute/unmute the mic
- Hang up
Listening For Calls
The final step is to listen for incoming calls and automatically redirect the user to the /call/[id] screen.
Open your tabs/_layout.tsx file and add this logic:
...
import { useCalls } from '@stream-io/video-react-native-sdk';
import { Tabs, useRouter } from 'expo-router';
...
const TabsLayout = () => {
const router = useRouter();
const calls = useCalls().filter((call) => call.ringing);
const ringingCall = calls[0];
const isCallCreatedByMe =
ringingCall?.state?.custom.triggeredBy === ringingCall?.currentUserId;
useEffect(() => {
if (isCallCreatedByMe) return;
if (ringingCall) {
router.navigate({
pathname: `/call/[id]`,
params: {
id: ringingCall.id,
},
});
}
}, [ringingCall, isCallCreatedByMe, router]);
return (
...
);
};
export default TabsLayout;
Here’s what’s going on:
We use Stream's useCalls hook to check for active calls in the app.
We filter out only those in the ringing state.
If the current user didn't trigger the call, we navigate them to the /call/[id] screen so they can answer it.
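The selection logic above can be summarized as a small helper. This is an illustrative sketch; the real code works with Stream's Call objects, and RingingCandidate is a simplified stand-in:

```typescript
// Sketch of the incoming-call selection: take the first ringing call
// that the current user did not start.
type RingingCandidate = { id: string; ringing: boolean; triggeredBy?: string };

const findIncomingCall = (
  calls: RingingCandidate[],
  currentUserId: string
): RingingCandidate | undefined =>
  calls.find((c) => c.ringing && c.triggeredBy !== currentUserId);
```
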
And that’s a wrap!
Conclusion
In this two-part series, we’ve built a complete Signal clone using React Native and Stream's Chat and Video SDKs. We implemented real-time messaging, audio and video calling, and added essential features like profile editing and group chat creation.
While this tutorial provides a strong foundation, there's still room to take things further. For instance, you could:
Add push notifications for incoming messages and calls.
Build custom in-app stories or message reactions.
Build out a complete calls screen to show the user’s recent calls.
Stream’s SDKs are packed with powerful features, so don’t hesitate to dive into their documentation and keep experimenting.
You can also check out the GitHub repo for this project and explore the code.
Happy building!