Custom audio source
The default audio module of Video SDK meets the needs of basic audio functions in your app. To add advanced audio features, Video SDK supports the use of custom audio sources and custom audio rendering modules.
Video SDK uses the basic audio module on the device your app runs on by default. However, there are certain scenarios where you want to integrate a custom audio source into your app, such as:
- Your app has its own audio module.
- You need to process the captured audio with a pre-processing library for audio enhancement.
- You need flexible device resource allocation to avoid conflicts with other services.
This page shows you how to capture and render audio from custom sources.
Understand the tech
To set an external audio source, you configure the Agora Engine before joining a channel. To manage the capture and processing of audio frames, you use methods from outside the Video SDK that are specific to your custom source. Video SDK enables you to push processed audio data to the subscribers in a channel.
Custom audio capture
The following figure illustrates the process of custom audio capture.
- You implement the capture module using methods from outside the SDK.
- You call pushExternalAudioFrame to send the captured audio frames to the SDK.
Custom audio rendering
The following figure illustrates the process of custom audio rendering.
- You implement the rendering module using methods from outside the SDK.
- You call pullPlaybackAudioFrame to retrieve the audio data sent by remote users.
Prerequisites
Ensure that you have implemented the SDK quickstart in your project.
Implementation
This section shows you how to implement custom audio capture and render audio from a custom source.
Custom audio capture
Refer to the following call sequence diagram to implement custom audio capture in your app:
Follow these steps to implement custom audio capture in your project:
1. Before calling joinChannel, use setExternalAudioSource to enable and configure your custom audio source.
2. Implement audio capture and processing using methods from outside the SDK.
3. Call pushExternalAudioFrame to send audio frames to the SDK.
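The steps above can be sketched as follows. The AudioEngine interface here is a hypothetical stand-in for the SDK engine so that the data flow is self-contained; the method names come from the SDK, but their exact signatures, and the frame format that pushExternalAudioFrame expects, vary by SDK version and platform, so treat this as a sketch rather than a drop-in implementation.

```java
public class CustomCaptureSketch {
    // Hypothetical stand-in for the SDK engine; real signatures vary by SDK version.
    interface AudioEngine {
        void setExternalAudioSource(boolean enabled, int sampleRate, int channels);
        void pushExternalAudioFrame(byte[] pcm, long timestampMs);
    }

    static final int SAMPLE_RATE = 48000;    // assumed capture sample rate
    static final int CHANNELS = 1;           // mono
    static final int BYTES_PER_SAMPLE = 2;   // 16-bit PCM
    static final int FRAME_MS = 10;          // push one 10 ms frame at a time

    // Size in bytes of one 10 ms PCM frame: 48000 / 100 * 1 * 2 = 960.
    static int frameSizeBytes() {
        return SAMPLE_RATE / (1000 / FRAME_MS) * CHANNELS * BYTES_PER_SAMPLE;
    }

    static void captureOnce(AudioEngine engine) {
        // 1. Enable the external audio source before joining the channel.
        engine.setExternalAudioSource(true, SAMPLE_RATE, CHANNELS);
        // 2. Capture and process audio with your own module
        //    (a silent frame stands in for real captured data here).
        byte[] frame = new byte[frameSizeBytes()];
        // 3. Push each processed frame to the SDK.
        engine.pushExternalAudioFrame(frame, System.currentTimeMillis());
    }
}
```

In a real app, step 2 runs on a dedicated capture thread that pushes one frame every FRAME_MS milliseconds; the frame-size arithmetic above is the part most worth getting right, since a mismatched buffer size causes audible glitches.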
Custom audio rendering
This section shows you how to implement custom audio rendering. Refer to the following call sequence diagram to implement custom audio rendering in your app:
To implement custom audio rendering, use the following methods:
1. Before calling joinChannel, use setExternalAudioSink to enable and configure custom audio rendering.
2. After joining the channel, call pullPlaybackAudioFrame to retrieve the audio data sent by remote users. Use your own audio renderer to process the audio data, and then play the rendered data.
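A minimal sketch of this pull-mode flow follows. As before, AudioEngine is a hypothetical stand-in for the SDK engine, and the sample rate, channel count, and pull signature are assumptions; consult your SDK version's API reference for the exact parameters.

```java
public class CustomRenderSketch {
    // Hypothetical stand-in for the SDK engine; real signatures vary by SDK version.
    interface AudioEngine {
        void setExternalAudioSink(boolean enabled, int sampleRate, int channels);
        int pullPlaybackAudioFrame(byte[] buffer, int lengthBytes);
    }

    static final int SAMPLE_RATE = 44100;    // assumed playback sample rate
    static final int CHANNELS = 2;           // stereo
    static final int BYTES_PER_SAMPLE = 2;   // 16-bit PCM

    // Bytes to pull per 10 ms tick: 44100 / 100 * 2 * 2 = 1764.
    static int bytesPer10Ms() {
        return SAMPLE_RATE / 100 * CHANNELS * BYTES_PER_SAMPLE;
    }

    static void renderOnce(AudioEngine engine) {
        // 1. Enable the external sink before joining the channel.
        engine.setExternalAudioSink(true, SAMPLE_RATE, CHANNELS);
        // 2. After joining, pull remote audio on a steady timer and hand each
        //    buffer to your own renderer for processing and playback (omitted).
        byte[] buffer = new byte[bytesPer10Ms()];
        engine.pullPlaybackAudioFrame(buffer, buffer.length);
    }
}
```

The pull call is typically driven by the renderer's own clock (for example, an audio device callback), which is what makes pull mode useful when your playback pipeline cannot accept audio pushed on the SDK's schedule.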
Using raw audio data callback
This section explains how to use the raw audio data callbacks as an alternative way to implement custom audio rendering.
To retrieve audio data for playback, implement collection and processing of raw audio data. Refer to Raw audio processing.
Follow these steps to call the raw audio data API in your project for custom audio rendering:
1. Retrieve the audio data for playback using the onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, or onPlaybackAudioFrameBeforeMixing callback.
2. Independently render and play the audio data.
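The callback approach can be sketched as follows. PlaybackCollector below mirrors the general shape of a playback-frame callback so the hand-off to an independent renderer can be shown self-contained; the real observer interface, its registration call, and the exact callback parameter list depend on your SDK version.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class RawAudioRenderSketch {
    // Hypothetical mirror of a playback-frame callback; real observers are
    // registered with the SDK and have version-specific signatures.
    static class PlaybackCollector {
        final Queue<byte[]> pending = new ArrayDeque<>();

        // Called with each playback frame; returns true to indicate the
        // frame was consumed.
        boolean onPlaybackAudioFrame(byte[] samples, int samplesPerChannel,
                                     int bytesPerSample, int channels, int sampleRate) {
            // Copy the frame out before queuing it for the renderer: the
            // caller may reuse the buffer after the callback returns.
            int frameBytes = samplesPerChannel * bytesPerSample * channels;
            pending.add(Arrays.copyOf(samples, frameBytes));
            return true;
        }
    }

    public static void main(String[] args) {
        PlaybackCollector collector = new PlaybackCollector();
        // Simulate one 10 ms mono frame at 16 kHz: 160 samples * 2 bytes = 320 bytes.
        collector.onPlaybackAudioFrame(new byte[320], 160, 2, 1, 16000);
        System.out.println(collector.pending.peek().length); // → 320
    }
}
```

Your rendering thread would then drain the queue and play each frame through your own audio output, fully decoupled from the SDK's callback thread.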
Reference
This section contains additional information and reference material for implementing custom audio capture and rendering in your app.
Sample projects
Agora provides the following open-source sample projects for custom audio capture and custom audio rendering for your reference: