Using CSCore for VoiceRecorder Application
CSCore is a powerful .NET library for audio processing and recording, offering a wide range of functionality for working with sound. It supports multiple audio formats and devices, making it a go-to choice for developers building audio-related applications. In this article, I'll show you how to use CSCore, taking my simple VoiceRecorder application (available on GitHub and SourceForge) as an example. I will focus on the core audio functionality provided by CSCore and how it is integrated into the application. Note that we will not cover Avalonia UI or the MVVM design pattern.
Introduction to CSCore
CSCore offers a variety of features that make it an ideal choice for developers working on projects that require audio input and output. Some of the key features of CSCore include:
Support for multiple audio formats: CSCore can handle various audio formats.
Audio capture and playback: the library provides robust methods for capturing audio from different devices and playing back audio files (a short playback sketch follows below).
Audio processing: CSCore supports audio filtering, effects, and mixing, making it suitable for complex audio processing tasks.
Device management: the library allows easy enumeration and selection of audio devices.
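To give a feel for the API before looking at VoiceRecorder itself, here is a minimal playback sketch that is not part of the application: it decodes a file with CodecFactory and plays it through WasapiOut. The file name "example.mp3" is just a placeholder.
using System;
using CSCore;
using CSCore.Codecs;
using CSCore.SoundOut;

// Decode an audio file and play it on the default output device
using (IWaveSource source = CodecFactory.Instance.GetCodec("example.mp3"))
using (ISoundOut soundOut = new WasapiOut())
{
    soundOut.Initialize(source);
    soundOut.Play();
    Console.WriteLine("Playing... press any key to stop.");
    Console.ReadKey();
    soundOut.Stop();
}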
Key Features of VoiceRecorder
VoiceRecorder is an easy-to-use application designed to record audio from selected devices. It's good for quick recordings and audio testing. The application simplifies the process of selecting an audio device, starting and stopping recordings, and saving the audio to a file. VoiceRecorder allows you to choose from three available filters to enhance your audio or record without any filters if you prefer.
Core Classes and Functionality
AudioDevice Class
The AudioDevice class is responsible for managing the audio devices available on the system. It uses the MMDeviceEnumerator from CSCore's CoreAudioAPI to list and select audio capture devices.
public sealed class AudioDevice : IDisposable
{
    // Enumerates multimedia devices, allowing us to list and select audio capture devices
    private MMDeviceEnumerator _mmdeviceEnumerator;
    private bool _disposed = false;

    public AudioDevice()
    {
        _mmdeviceEnumerator = new MMDeviceEnumerator();
    }

    // Retrieves a list of available audio capture devices
    public List<string> GetAvailableDevices()
    {
        return _mmdeviceEnumerator.EnumAudioEndpoints(DataFlow.Capture, DeviceState.Active)
            .Select(device => device.FriendlyName) // Gets the friendly name of each device
            .ToList();
    }

    // Selects an audio device by its friendly name
    public MMDevice SelectDevice(string deviceName)
    {
        return _mmdeviceEnumerator.EnumAudioEndpoints(DataFlow.Capture, DeviceState.Active)
            .FirstOrDefault(device => device.FriendlyName == deviceName);
    }

    // Implements the Dispose pattern to release unmanaged resources
    private void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                if (_mmdeviceEnumerator != null)
                {
                    _mmdeviceEnumerator.Dispose();
                    _mmdeviceEnumerator = null;
                }
            }
            _disposed = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
}
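To illustrate how the class is meant to be used, here is a small sketch (not taken from the application) that lists the active capture devices and selects one by name; the device name shown is just a placeholder.
using System;
using CSCore.CoreAudioAPI;

using (var audioDevice = new AudioDevice())
{
    // Print the friendly names of all active capture devices
    foreach (string name in audioDevice.GetAvailableDevices())
    {
        Console.WriteLine(name);
    }

    // Select a device by its friendly name (placeholder name shown here)
    MMDevice device = audioDevice.SelectDevice("Microphone (USB Audio Device)");
    if (device == null)
    {
        Console.WriteLine("Device not found.");
    }
}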
AudioRecorder Class
The AudioRecorder class handles the core recording functionality. It uses WasapiCapture for capturing audio and WaveWriter for writing the captured audio to a WAV file. The class also supports applying audio filters during recording.
public sealed class AudioRecorder : IDisposable
{
    private WasapiCapture _capture; // Captures audio from the selected device
    private WaveWriter _writer; // Writes captured audio to a WAV file
    private bool _disposed = false;
    private SoundInSource _soundInSource; // Source for audio data from the capture device

    public IWaveSource CaptureSource => _soundInSource; // Exposes the current audio source

    // Starts recording audio from the specified device to a WAV file
    public void StartRecording(string outputFilePath, MMDevice device, IAudioFilter filter)
    {
        try
        {
            // Initialize audio capture with the selected device
            _capture = new WasapiCapture();
            _capture.Device = device;
            _capture.Initialize();

            // Create a SoundInSource to handle audio data
            _soundInSource = new SoundInSource(_capture) { FillWithZeros = false };

            // Apply filter if provided, otherwise use the raw source
            IWaveSource filteredSource;
            if (filter != null)
            {
                filteredSource = filter.ApplyFilter(_soundInSource);
            }
            else
            {
                filteredSource = _soundInSource;
            }

            // Initialize WaveWriter to save the audio to a file
            _writer = new WaveWriter(outputFilePath, filteredSource.WaveFormat);

            // Buffer to hold audio data
            byte[] buffer = new byte[filteredSource.WaveFormat.BytesPerSecond / 2];

            // Event handler for when audio data is available
            _capture.DataAvailable += (s, e) =>
            {
                int read;
                while ((read = filteredSource.Read(buffer, 0, buffer.Length)) > 0)
                {
                    _writer.Write(buffer, 0, read);
                }
            };

            // Start audio capture
            _capture.Start();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred: {ex.Message}");
        }
    }

    // Stops the recording process and releases resources
    public void StopRecording()
    {
        try
        {
            _capture.Stop();
            _writer.Dispose();
        }
        catch (Exception ex)
        {
            Console.WriteLine($"An error occurred: {ex.Message}");
        }
    }

    // Updates the audio source, allowing for dynamic changes during recording
    public void UpdateSource(IWaveSource newSource)
    {
        _capture.Stop();
        _soundInSource = newSource as SoundInSource;
        if (_soundInSource != null)
        {
            _capture.Start();
        }
        else
        {
            Console.WriteLine("newSource is not a SoundInSource");
        }
    }

    // Implements the Dispose pattern to release unmanaged resources
    private void Dispose(bool disposing)
    {
        if (!_disposed)
        {
            if (disposing)
            {
                if (_capture != null)
                {
                    _capture.Dispose();
                    _capture = null;
                }
                if (_writer != null)
                {
                    _writer.Dispose();
                    _writer = null;
                }
            }
            _disposed = true;
        }
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }
}
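Putting the two classes together, a minimal recording session could look like the sketch below. This is not code from the application: the output file name and the five-second duration are arbitrary, and passing null means recording without a filter.
using System;
using System.Linq;
using System.Threading;
using CSCore.CoreAudioAPI;

using (var audioDevice = new AudioDevice())
using (var recorder = new AudioRecorder())
{
    // Pick the first available capture device for simplicity
    string deviceName = audioDevice.GetAvailableDevices().First();
    MMDevice device = audioDevice.SelectDevice(deviceName);

    // Record for five seconds without a filter, then stop
    recorder.StartRecording("recording.wav", device, null);
    Thread.Sleep(TimeSpan.FromSeconds(5));
    recorder.StopRecording();
}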
Key Parts of MainWindowViewModel
The MainWindowViewModel class manages the interaction between the user interface and the main recording functionality. It maintains application state, processes user commands, and coordinates the recording process. However, in this article I want to focus only on the StartRecording method:
public class MainWindowViewModel : ViewModelBase
{
    // ...

    public void StartRecording(string deviceName, VoiceFilterViewModel filterViewModel)
    {
        // Generate a unique file path for the recording based on the device name
        string filePath = AudioFilePathHelper.GenerateAudioFilePath(deviceName);

        // Select the audio device that matches the provided device name
        var device = Device.SelectDevice(deviceName);

        // Check if a filter is provided and start recording with or without the filter
        if (filterViewModel != null && filterViewModel.FilterStrategy != null)
        {
            // Start recording with the provided filter strategy
            Recorder.StartRecording(filePath, device, filterViewModel.FilterStrategy);
        }
        else
        {
            // Start recording without any filter
            Recorder.StartRecording(filePath, device, null);
        }

        // Set the recording state to true to indicate that recording has started
        IsRecording = true;

        // If filters are applied and a filter is selected, apply the filter command
        if (IsFilterApplied && SelectedFilterViewModel != null)
        {
            ApplyFilterCommand();
        }
    }
}
The StartRecording method in the MainWindowViewModel class initializes the recording process by selecting the appropriate audio device, determining whether to apply an audio filter, and starting the recording. It also updates the recording state and applies any selected filter if applicable.
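The filter classes themselves are beyond the scope of this article, but based on how AudioRecorder and the view model use them, the strategy interface presumably looks like the sketch below. The GainFilter class is my own illustrative example, not one of VoiceRecorder's three filters; it assumes CSCore's VolumeSource and the ToSampleSource/ToWaveSource extensions, and the 0.5 gain value is arbitrary.
using CSCore;
using CSCore.Streams;

// Shape of the filter strategy inferred from its usage in AudioRecorder
public interface IAudioFilter
{
    IWaveSource ApplyFilter(IWaveSource source);
}

// A minimal example filter that simply lowers the volume
public class GainFilter : IAudioFilter
{
    public IWaveSource ApplyFilter(IWaveSource source)
    {
        // Convert to samples, scale the volume, and convert back to a wave source
        var volumeSource = new VolumeSource(source.ToSampleSource())
        {
            Volume = 0.5f // arbitrary gain for demonstration
        };
        return volumeSource.ToWaveSource();
    }
}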
Conclusion
In this article I did not touch on many topics, such as the application's filter implementations and the View classes. However, the purpose of this article was to show a clear example of using the CSCore library to capture and process audio. With CSCore's robust features, we can create a simple yet powerful audio recording application. The focus on core classes and functionality keeps the application efficient and easy to understand, and makes it a suitable starting point for more complex audio processing projects. Happy coding!