
Raw audio processing

In some scenarios, raw audio captured through the microphone must be processed to achieve the desired functionality or to enhance the user experience. Video SDK enables you to pre-process and post-process the captured audio to implement custom playback effects.

This article shows you how to pre-process and post-process collected raw audio data.

Understand the tech

For scenarios that require self-processing of audio data, Agora Video SDK provides raw data processing functionality. You can pre-process the captured audio signal before the data is sent to the encoder, or post-process the received audio signal after the data is returned by the decoder.
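For example, a pre-processing step might attenuate the captured signal before it reaches the encoder. The following minimal sketch is illustrative only, not an SDK API: it assumes 16-bit little-endian PCM (bytesPerSample == 2) and halves the amplitude of one frame in place, where samples is the byte[] received in the callback.

// Illustrative pre-processing only: halve the volume of a 16-bit little-endian PCM frame in place.
// Assumes bytesPerSample == 2; 'samples' is the byte[] received in the callback.
// Requires: import java.nio.ByteBuffer; import java.nio.ByteOrder;
ByteBuffer buffer = ByteBuffer.wrap(samples).order(ByteOrder.LITTLE_ENDIAN);
for (int i = 0; i + 1 < samples.length; i += 2) {
    short sample = buffer.getShort(i);
    buffer.putShort(i, (short) (sample / 2));
}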

To implement processing of raw audio data in your app, take the following steps.

  • Register an instance of the audio frame observer before joining a channel.
  • Set the format of the audio frames captured by each callback.
  • Implement callbacks in the frame observer to process the raw audio data.
  • Unregister the frame observer before you leave the channel.

The following figure shows the basic processing of raw audio data:

Figure: Raw audio processing

Prerequisites

Ensure that you have implemented the SDK quickstart in your project.

Implement raw audio processing

Follow these steps to implement raw audio data processing functionality in your app:

  1. Before joining a channel, create an IAudioFrameObserver instance and call the registerAudioFrameObserver method to register the audio observer.
  2. Call setRecordingAudioFrameParameters, setPlaybackAudioFrameParameters, and setMixedAudioFrameParameters to configure the audio frame format.
  3. Implement the onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, and onMixedAudioFrame callbacks. These callbacks receive and process audio frames. If a callback returns false, the processed audio frame is treated as invalid.

Refer to the following sample code to implement this logic:


// Call registerAudioFrameObserver to register an audio observer and pass in an IAudioFrameObserver instance
engine.registerAudioFrameObserver(new IAudioFrameObserver() {
    // Implement the onRecordAudioFrame callback
    @Override
    public boolean onRecordAudioFrame(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec) {
        // isEnableLoopBack and mAudioPlayer are defined elsewhere in the app
        if (isEnableLoopBack) {
            mAudioPlayer.play(samples, 0, numOfSamples * bytesPerSample);
        }
        return false;
    }

    // Implement the onPlaybackAudioFrame callback
    @Override
    public boolean onPlaybackAudioFrame(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec) {
        return false;
    }

    // Implement the onPlaybackAudioFrameBeforeMixing callback
    @Override
    public boolean onPlaybackAudioFrameBeforeMixing(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec, int uid) {
        return false;
    }

    // Implement the onMixedAudioFrame callback
    @Override
    public boolean onMixedAudioFrame(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec) {
        return false;
    }
});

// Call methods with the 'set' prefix to configure the audio frames captured by each callback.
// SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, and SAMPLES_PER_CALL are app-defined constants.
engine.setRecordingAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
engine.setMixedAudioFrameParameters(SAMPLE_RATE, SAMPLES_PER_CALL);
engine.setPlaybackAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
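
Finally, unregister the observer before you leave the channel. In the Java SDK, passing null to registerAudioFrameObserver cancels the registration:

// Unregister the audio frame observer before leaving the channel
engine.registerAudioFrameObserver(null);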

Precautions

Video SDK uses a synchronous callback mechanism for processing raw audio data. When you save or rewrite data using the callbacks, consider the following best practices:

  • To ensure continuity of the audio stream, do not block the SDK thread by processing data directly in the callback. Instead, make a deep copy of the received audio data and transfer the copy to another thread for processing, as shown in the sketch after this list.

  • If you choose to process the audio data synchronously within the callback, strictly control the processing time. For example, if the callback is triggered every 10 milliseconds, processing within the callback must take less than 10 milliseconds to prevent delays or interruptions in the audio stream.
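The following minimal sketch illustrates the deep-copy approach for onRecordAudioFrame. The single-thread executor and the processAudio method are assumptions made for this example, not SDK APIs:

// Requires: import java.util.Arrays; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors;
// A worker thread owned by your app (an assumption made for this sketch)
private final ExecutorService audioWorker = Executors.newSingleThreadExecutor();

@Override
public boolean onRecordAudioFrame(byte[] samples, int numOfSamples, int bytesPerSample, int channels, int samplesPerSec) {
    // Deep-copy the frame so the SDK buffer can be reused as soon as the callback returns
    final byte[] copy = Arrays.copyOf(samples, samples.length);
    // Hand the copy to the worker thread; keep the callback itself fast
    audioWorker.execute(() -> processAudio(copy, channels, samplesPerSec)); // processAudio is app-defined
    return false;
}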

Reference

This section contains additional information that completes this page, or points you to documentation that explains other aspects of this product.

Video Calling