Custom audio source
The default audio module of Video SDK meets the needs of basic audio functionality in your app. To add advanced audio features, Video SDK supports custom audio sources and custom audio rendering modules.
Video SDK uses the basic audio module on the device your app runs on by default. However, there are certain scenarios where you want to integrate a custom audio source into your app, such as:
- Your app has its own audio module.
- You need to process the captured audio with a pre-processing library for audio enhancement.
- You need flexible device resource allocation to avoid conflicts with other services.
This page shows you how to capture and render audio from custom sources.
Understand the tech
To set an external audio source, you configure the Agora Engine before joining a channel. To manage the capture and processing of audio frames, you use methods from outside the Video SDK that are specific to your custom source. Video SDK enables you to push processed audio data to the subscribers in a channel.
Custom audio capture
The following figure illustrates the process of custom audio capture.
- You implement the capture module using methods outside the SDK.
- You call `pushExternalAudioFrame` to send the captured audio frames to the SDK.
Custom audio rendering
The following figure illustrates the process of custom audio rendering.
- You implement the rendering module using methods outside the SDK.
- You call `pullPlaybackAudioFrame` to retrieve the audio data sent by remote users.
Prerequisites
Ensure that you have implemented the SDK quickstart in your project.
Implementation
This section shows you how to implement custom audio capture and render audio from a custom source.
Custom audio capture
Refer to the following call sequence diagram, then follow these steps to implement custom audio capture in your project:
1. After initializing `RtcEngine`, call `createCustomAudioTrack` to create a custom audio track and obtain the audio track ID.
2. Call `joinChannel` to join the channel. In `ChannelMediaOptions`, set `publishCustomAudioTrackId` to the audio track ID obtained in step 1, and set `publishCustomAudioTrack` to `true` to publish the custom audio track.
   > **Info:** To use `enableCustomAudioLocalPlayback` for local playback of an external audio source, or to adjust the volume of a custom audio track with `adjustCustomAudioPlayoutVolume`, set `enableAudioRecordingOrPlayout` to `true` in `ChannelMediaOptions`.
3. Implement your audio capture module. Agora provides the AudioFileReader.java sample to demonstrate how to read and publish PCM-format audio data from a local file. In a production environment, create a custom audio acquisition module based on your business needs.
4. Call `pushExternalAudioFrame` to send the captured audio frames to the SDK through the custom audio track. Ensure that `trackId` matches the audio track ID you obtained by calling `createCustomAudioTrack`. Set `sampleRate`, `channels`, and `bytesPerSample` to define the sampling rate, number of channels, and bytes per sample of the external audio frame.
   > **Info:** For audio and video synchronization, Agora recommends calling `getCurrentMonotonicTimeInMs` to get the system's current monotonic time and setting the `timestamp` parameter accordingly.
5. To stop publishing custom audio, call `destroyCustomAudioTrack` to destroy the custom audio track.
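The capture-and-push loop described above can be sketched as follows. This is a minimal illustration, not production code: the Agora calls (`createCustomAudioTrack`, `pushExternalAudioFrame`, `getCurrentMonotonicTimeInMs`, `destroyCustomAudioTrack`) are shown only as comments because they require a live `RtcEngine` and a joined channel, and their exact signatures should be checked against the API reference; the buffer sizing and pacing are plain Java. `PcmPushLoop` and `frameSizeBytes` are illustrative names, not SDK API.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of a custom audio capture loop: fill a 16-bit PCM buffer and push it
// to the SDK at a fixed interval. Agora calls are commented out because they
// need an initialized RtcEngine; signatures are indicative only.
public class PcmPushLoop {
    static final int FRAME_MS = 10; // push interval in milliseconds

    // Bytes needed for one push interval of interleaved PCM.
    static int frameSizeBytes(int sampleRate, int channels, int bytesPerSample) {
        return sampleRate * FRAME_MS / 1000 * channels * bytesPerSample;
    }

    public static void main(String[] args) throws InterruptedException {
        int sampleRate = 48000, channels = 2, bytesPerSample = 2;
        // int trackId = engine.createCustomAudioTrack(...); // step 1: obtain track ID
        ByteBuffer frame = ByteBuffer
                .allocateDirect(frameSizeBytes(sampleRate, channels, bytesPerSample))
                .order(ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i < 3; i++) { // production code loops until capture stops
            // ... fill `frame` from your capture module (file, mic, effects chain) ...
            // long ts = engine.getCurrentMonotonicTimeInMs(); // recommended timestamp
            // engine.pushExternalAudioFrame(frame, ts, sampleRate, channels, ..., trackId);
            Thread.sleep(FRAME_MS); // pace pushes at the frame interval
        }
        // engine.destroyCustomAudioTrack(trackId); // stop publishing
        System.out.println(frame.capacity()); // prints 1920 (10 ms of 48 kHz stereo 16-bit)
    }
}
```

The sizing arithmetic is the part most worth getting right: 10 ms of 48 kHz stereo 16-bit PCM is 48000 × 0.01 × 2 × 2 = 1920 bytes per push.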
Custom audio rendering
Refer to the following call sequence diagram to implement custom audio rendering in your app:
To implement custom audio rendering, use the following methods:
1. Before calling `joinChannel`, call `setExternalAudioSink` to enable and configure custom audio rendering.
2. After joining the channel, call `pullPlaybackAudioFrame` to retrieve audio data sent by remote users. Use your own audio renderer to process the audio data, then play the rendered data.
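The pull-mode steps above can be sketched as follows. As with the capture sketch, the SDK call is shown only as a comment (it needs a joined channel and `setExternalAudioSink` configured beforehand); the buffer sizing and pull cadence are plain Java, and `PullRenderLoop`/`pullSizeBytes` are illustrative names, not SDK API.

```java
import java.nio.ByteBuffer;

// Sketch of pull-mode custom rendering: allocate a buffer sized for one pull
// interval of 16-bit PCM, then pull and render at a steady cadence.
public class PullRenderLoop {
    // Bytes of 16-bit interleaved PCM covering `intervalMs` milliseconds.
    static int pullSizeBytes(int sampleRate, int channels, int intervalMs) {
        return sampleRate * intervalMs / 1000 * channels * 2;
    }

    public static void main(String[] args) throws InterruptedException {
        int sampleRate = 44100, channels = 1, intervalMs = 10;
        ByteBuffer buffer = ByteBuffer.allocateDirect(
                pullSizeBytes(sampleRate, channels, intervalMs));
        for (int i = 0; i < 3; i++) { // production code loops while rendering
            // engine.pullPlaybackAudioFrame(buffer, buffer.capacity());
            // ... hand `buffer` to your own renderer (e.g. an AudioTrack) ...
            Thread.sleep(intervalMs); // pull at the interval the buffer was sized for
        }
        System.out.println(buffer.capacity()); // prints 882 (10 ms of 44.1 kHz mono 16-bit)
    }
}
```

Keeping the pull interval and the buffer size consistent matters: pulling 10 ms worth of samples every 10 ms keeps playback smooth without accumulating latency.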
Using raw audio data callback
As an alternative to pull mode, you can retrieve audio data for playback by collecting and processing raw audio data. For details, see Raw audio processing.
Follow these steps to call the raw audio data API in your project for custom audio rendering:
1. Retrieve the audio data for playback through the `onRecordAudioFrame`, `onPlaybackAudioFrame`, `onMixedAudioFrame`, or `onPlaybackAudioFrameBeforeMixing` callback.
2. Independently render and play the audio data.
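A handler mirroring the shape of a playback callback such as `onPlaybackAudioFrame` can be sketched as below. Because the SDK's observer interface is not available here, the sketch is a plain method that receives interleaved 16-bit PCM, as the callbacks do; `PlaybackFrameHandler` and `handlePlaybackFrame` are illustrative names, not SDK API, and the peak computation stands in for whatever processing your renderer performs.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch of a playback-frame handler: receives interleaved 16-bit little-endian
// PCM (as the raw-data callbacks deliver it) and processes it before the app
// renders the frame through its own playback path.
public class PlaybackFrameHandler {
    // Returns the peak sample magnitude (e.g. to drive a volume meter), then a
    // real app would forward the buffer to its renderer.
    static int handlePlaybackFrame(ByteBuffer pcm, int samples) {
        pcm.order(ByteOrder.LITTLE_ENDIAN);
        int peak = 0;
        for (int i = 0; i < samples; i++) {
            peak = Math.max(peak, Math.abs((int) pcm.getShort(i * 2)));
        }
        // renderer.write(pcm); // hand the frame to your own playback path
        return peak;
    }

    public static void main(String[] args) {
        ByteBuffer pcm = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        pcm.putShort((short) 100).putShort((short) -2000)
           .putShort((short) 500).putShort((short) 0);
        System.out.println(handlePlaybackFrame(pcm, 4)); // prints 2000
    }
}
```

Whatever processing you do in the callback, keep it fast: the callbacks fire on the SDK's audio thread at the frame interval, so heavy work should be handed off to your own rendering thread.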
Reference
This section contains additional information and sample projects related to custom audio capture and rendering.
Sample projects
Agora provides the following open-source sample projects for custom audio capture and custom audio rendering for your reference: