Shilleh

How to Save and Play Audio in React Native Expo App


Are you interested in adding audio recording and playback functionality to your React Native Expo app? With the rise of audio-based applications and the popularity of podcasts, adding audio capabilities to your app can enhance the user experience and provide new opportunities for engagement. In this tutorial, we will guide you through the process of recording and playing audio in a React Native Expo app, step-by-step. Whether you're building a language learning app, a music player, or a podcast platform, this tutorial will provide you with the skills you need to add audio functionality to your app. So let's get started!

Do not forget to like, comment, and subscribe to the channel before getting into it!

Step 1-) Initialize an Expo App
Make sure you have Node.js and npm installed on your machine. You can download them from the official website: https://nodejs.org/en/download/.
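
You can confirm both are installed by checking their versions:

node --version
npm --version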

Open your terminal or command prompt and run the following command to install the Expo CLI globally:

npm install -g expo-cli

Once the installation is complete, navigate to the directory where you want to create your app and run the following command:

expo init my-new-app

Replace my-new-app with the name of your app. This command will create a new directory with the same name as your app and initialize a new Expo project inside it.

Choose a template for your app from the list of available options. You can select a blank template or choose from one of the preconfigured templates that include common features such as navigation, authentication, and more.

Once you've chosen a template, Expo will install all the necessary dependencies and set up your app. This may take a few minutes, depending on your internet connection.
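
Once the install finishes, you can move into the project folder and start the development server to confirm everything works (using the example app name from above):

cd my-new-app
expo start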

Step 2-) Add the Following Code to your Component:

import { Text, TouchableOpacity, View, StyleSheet } from 'react-native';
import React, { useState, useEffect } from 'react';
import { Audio } from 'expo-av';
import * as FileSystem from 'expo-file-system';
import { FontAwesome } from '@expo/vector-icons';

export default function App() {

  const [recording, setRecording] = useState(null);
  const [recordingStatus, setRecordingStatus] = useState('idle');
  const [audioPermission, setAudioPermission] = useState(null);

  useEffect(() => {

    // Simply get recording permission upon first render
    async function getPermission() {
      try {
        const permission = await Audio.requestPermissionsAsync();
        console.log('Permission Granted: ' + permission.granted);
        setAudioPermission(permission.granted);
      } catch (error) {
        console.log(error);
      }
    }

    // Call function to get permission
    getPermission();
    // Cleanup: stop any in-progress recording when the component unmounts
    return () => {
      if (recording) {
        stopRecording();
      }
    };
  }, []);

  async function startRecording() {
    try {
      // Configure the audio mode (required for recording on iOS)
      if (audioPermission) {
        await Audio.setAudioModeAsync({
          allowsRecordingIOS: true,
          playsInSilentModeIOS: true
        })
      }

      const newRecording = new Audio.Recording();
      console.log('Starting Recording')
      await newRecording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
      await newRecording.startAsync();
      setRecording(newRecording);
      setRecordingStatus('recording');

    } catch (error) {
      console.error('Failed to start recording', error);
    }
  }

  async function stopRecording() {
    try {

      if (recordingStatus === 'recording') {
        console.log('Stopping Recording')
        await recording.stopAndUnloadAsync();
        const recordingUri = recording.getURI();

        // Create a file name for the recording (the iOS high-quality preset records .caf audio)
        const fileName = `recording-${Date.now()}.caf`;

        // Move the recording to the new directory with the new file name
        await FileSystem.makeDirectoryAsync(FileSystem.documentDirectory + 'recordings/', { intermediates: true });
        await FileSystem.moveAsync({
          from: recordingUri,
          to: FileSystem.documentDirectory + 'recordings/' + fileName
        });

        // This is for simply playing the sound back
        const playbackObject = new Audio.Sound();
        await playbackObject.loadAsync({ uri: FileSystem.documentDirectory + 'recordings/' + fileName });
        await playbackObject.playAsync();

        // Reset our states to record again
        setRecording(null);
        setRecordingStatus('stopped');

        // Return the saved file location so handleRecordButtonPress can log it
        return FileSystem.documentDirectory + 'recordings/' + fileName;
      }

    } catch (error) {
      console.error('Failed to stop recording', error);
    }
  }

  async function handleRecordButtonPress() {
    if (recording) {
      const audioUri = await stopRecording();
      if (audioUri) {
        console.log('Saved audio file to', audioUri);
      }
    } else {
      await startRecording();
    }
  }

  return (
    <View style={styles.container}>
      <TouchableOpacity style={styles.button} onPress={handleRecordButtonPress}>
        <FontAwesome name={recording ? 'stop-circle' : 'circle'} size={64} color="white" />
      </TouchableOpacity>
      <Text style={styles.recordingStatusText}>{`Recording status: ${recordingStatus}`}</Text>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'center',
  },
  button: {
    alignItems: 'center',
    justifyContent: 'center',
    width: 128,
    height: 128,
    borderRadius: 64,
    backgroundColor: 'red',
  },
  recordingStatusText: {
    marginTop: 16,
  },
});

In the useEffect hook we request recording permission from the user via the Audio module. The cleanup function returned from the effect stops any recording that is still in progress when the component unmounts.
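
If you want to avoid re-prompting a user who has already answered, expo-av also provides Audio.getPermissionsAsync(), which reads the current permission status without showing a dialog. A minimal sketch of that variation, written as a drop-in replacement for the getPermission function above:

async function getPermission() {
  // Check the existing status first; only prompt if not yet granted
  const existing = await Audio.getPermissionsAsync();
  if (existing.granted) {
    setAudioPermission(true);
    return;
  }
  const permission = await Audio.requestPermissionsAsync();
  setAudioPermission(permission.granted);
}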

startRecording()

We use this function to start capturing audio from the user.

We call setAudioModeAsync() first because iOS will not record unless allowsRecordingIOS is set to true.

We initialize an Audio.Recording object, prepare it, and begin recording, all within this function.
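
If you also want to display elapsed recording time, the Recording object can push status updates to a callback. A minimal sketch, assuming a durationMillis state variable you would add yourself; the subscription goes inside startRecording() after prepareToRecordAsync():

const [durationMillis, setDurationMillis] = useState(0);

// Inside startRecording(), after prepareToRecordAsync():
newRecording.setOnRecordingStatusUpdate((status) => {
  // status.durationMillis is how long the recording has been running
  if (status.isRecording) {
    setDurationMillis(status.durationMillis);
  }
});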

stopRecording()

This function stops the recording, saves it, and plays it back to the user.

We use the FileSystem library to move the temporary recording into the app's document directory, and we initialize a playback object (Audio.Sound) to play the audio itself.
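
Since every recording lands in the recordings/ folder under documentDirectory, you can list and replay them later. A minimal sketch using a hypothetical playLatestRecording helper (it returns early if nothing has been saved yet):

async function playLatestRecording() {
  const dir = FileSystem.documentDirectory + 'recordings/';
  // List every file saved so far
  const files = await FileSystem.readDirectoryAsync(dir);
  if (files.length === 0) return;

  // File names embed Date.now(), so a plain sort puts the newest last
  files.sort();
  const latest = files[files.length - 1];

  // createAsync loads the file and, with shouldPlay, starts playback immediately
  const { sound } = await Audio.Sound.createAsync(
    { uri: dir + latest },
    { shouldPlay: true }
  );
}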

handleRecordButtonPress()

This function simply starts or stops a recording, depending on whether one is currently in progress.

The rest of the App.js file is the JSX markup and styling, which you can copy or replace with your own style!

Note that the Expo tooling can be buggy with the simulator, so sometimes you may need to close and reopen the simulator for audio to work. Make sure you turn up the simulator's volume as well.

Conclusion:
Be sure to follow the channel if you found this content useful. Let me know if you have any questions down below. Thanks!

Top comments (2)

RIAHI Yassine

Hi, thank you for this tutorial... it's very helpful...
Do you have any idea about audio processing in React Native, especially transforming a waveform with an FFT?

Thanks

Shilleh

Just getting into it, buddy; can't say I have done anything advanced with audio yet. I was trying to do some speech translation, but other than that not too much. Good luck!