Raw Audio Data

Introduction

During the audio transmission process, you can pre- and post-process the captured audio data to achieve the desired playback effect.

Agora provides the raw data function for you to process the audio data according to your scenarios. This function enables you to pre-process the captured audio signal before sending it to the encoder, or to post-process the decoded audio signal.

Sample project

Agora provides an open-source sample project on GitHub that implements raw audio data processing using Java APIs. You can try the demo and view the source code.

Implementation

Before using the raw data functions, ensure that you have implemented the basic real-time audio functions in your project.

Process raw audio data using Java APIs

To call Java APIs in your project to implement the raw audio data functions, do the following:

  1. Before joining a channel, create an IAudioFrameObserver object and call registerAudioFrameObserver to register the audio frame observer, as shown in the sketch after this list.
  2. After you successfully register the audio frame observer, the SDK triggers the getRecordAudioParams, getPlaybackAudioParams, or getMixedAudioParams callbacks. You can set the desired audio data format in the return values of these callbacks.
  3. After you join the channel, the SDK triggers the getObservedAudioFramePosition and isMultipleChannelFrameWanted callbacks when capturing each audio frame. In the return values of these callbacks, you set the audio observation positions and whether to receive raw audio data from multiple channels.
  4. According to the return values of getObservedAudioFramePosition and isMultipleChannelFrameWanted, the SDK triggers the onRecordFrame, onPlaybackFrame, onPlaybackFrameBeforeMixing/onPlaybackFrameBeforeMixingEx, or onMixedFrame callbacks to send you the captured raw audio data.
  5. Process the captured audio data according to your scenario, and return the processed data to the SDK via the same onRecordFrame, onPlaybackFrame, onPlaybackFrameBeforeMixing/onPlaybackFrameBeforeMixingEx, or onMixedFrame callbacks.
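
The following is a minimal sketch of this call order, not taken from the sample project. It assumes engine is an initialized RtcEngine instance, audioFrameObserver is the IAudioFrameObserver implemented in the sample code below, and token and channelName are placeholder values:

// Register the observer before joining; the SDK then drives steps 2-5 via callbacks.
engine.registerAudioFrameObserver(audioFrameObserver);
engine.joinChannel(token, channelName, "", 0);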

API call sequence

The following diagram shows how to implement the raw audio data function in your project:


Sample code


// Define the readBuffer method to read the audio buffer of the local audio file.
private byte[] readBuffer() {
    int byteSize = SAMPLES_PER_CALL * BIT_PER_SAMPLE / 8;
    byte[] buffer = new byte[byteSize];
    try {
        if (inputStream.read(buffer) < 0) {
            inputStream.reset();
            return readBuffer();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return buffer;
}

// Define the audioAggregate method to mix the audio data from the onRecordFrame
// callback with the audio buffer of the local audio file.
// Note: this simplified mix operates on individual bytes; production code should
// combine 16-bit PCM samples instead to avoid distortion across byte boundaries.
private byte[] audioAggregate(byte[] origin, byte[] buffer) {
    byte[] output = new byte[origin.length];
    for (int i = 0; i < origin.length; i++) {
        output[i] = (byte) ((int) origin[i] + (int) buffer[i] / 2);
    }
    return output;
}

// Implement an IAudioFrameObserver class.
private final IAudioFrameObserver audioFrameObserver = new IAudioFrameObserver() {

    // Implement the getObservedAudioFramePosition callback. Set the audio observation
    // position as POSITION_RECORD in the return value of this callback, which enables
    // the SDK to trigger the onRecordFrame callback.
    @Override
    public int getObservedAudioFramePosition() {
        return IAudioFrameObserver.POSITION_RECORD;
    }

    // Implement the getRecordAudioParams callback. Set the audio recording format in
    // the return value of this callback for the onRecordFrame callback.
    @Override
    public AudioParams getRecordAudioParams() {
        return new AudioParams(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
    }

    // Implement the onRecordFrame callback, get audio data from the callback, and
    // send the data back to the SDK after mixing it with the local audio file.
    @Override
    public boolean onRecordFrame(AudioFrame audioFrame) {
        Log.i(TAG, "onRecordAudioFrame " + isWriteBackAudio);
        if (isWriteBackAudio) {
            ByteBuffer byteBuffer = audioFrame.samples;
            byte[] buffer = readBuffer();
            byte[] origin = new byte[byteBuffer.remaining()];
            byteBuffer.get(origin);
            byteBuffer.flip();
            byteBuffer.put(audioAggregate(origin, buffer), 0, byteBuffer.remaining());
        }
        return true;
    }
};

// Pass the IAudioFrameObserver and register the audio observer.
engine.registerAudioFrameObserver(audioFrameObserver);
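
If you only need to transform the captured audio rather than mix in a file, you can modify the samples buffer in place. The following is a minimal sketch, not taken from the sample project, that halves the volume of each captured frame; it assumes 16-bit little-endian PCM and uses java.nio.ByteOrder and java.nio.ShortBuffer:

// Hypothetical alternative onRecordFrame body: attenuate the captured audio by half.
@Override
public boolean onRecordFrame(AudioFrame audioFrame) {
    // View the raw bytes as 16-bit samples; index-based writes go straight
    // back into the direct buffer that the SDK reads from.
    ShortBuffer pcm = audioFrame.samples.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    for (int i = 0; i < pcm.limit(); i++) {
        pcm.put(i, (short) (pcm.get(i) / 2));
    }
    return true; // true tells the SDK to use this (modified) frame
}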

API reference

Process raw audio data using JNI and C++ APIs

Before using the raw data function, ensure that you have implemented the basic real-time audio function in your project.

The Agora C++ SDK provides the IAudioFrameObserver class to capture and modify raw audio data. Because the Agora SDK for Java encapsulates the C++ SDK, you can include the .h files from the SDK and call the C++ methods directly from Java via the JNI (Java Native Interface).

Follow these steps to implement the raw audio data function in your project:

  1. Use the JNI and C++ interface files to generate a shared library in the project, and use Java to call the raw audio data interface of the Agora C++ SDK.
  2. Before joining a channel, call the registerAudioFrameObserver method to register an audio observer, and implement an IAudioFrameObserver class in this method.
  3. After you successfully register the observer, the SDK sends the captured raw audio data via the onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, or onMixedAudioFrame callbacks.
  4. Process the captured raw audio data according to your needs. Then, you can either play it yourself directly or send it to the SDK via the callbacks mentioned in step 3 per your requirements.

Call the Agora C++ API in a Java project

The following diagram shows the basic flow of calling the Agora C++ API in a Java project:


  • The Java project loads the .so library built from the C++ interface file (.cpp file) via the Java interface file.
  • The Java interface file generates a .h file with the javac -h command (javah -jni on JDK 9 or earlier). The C++ interface file includes this file.
  • The C++ interface file calls the C++ methods of the .so library in the Agora Android SDK by including the header files from the Agora Android SDK.

API call sequence

The following diagram shows how to implement the raw audio data function in your project:

The registerAudioFrameObserver, onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, and onPlaybackAudioFrameBeforeMixing are all C++ methods and callbacks.


Sample code

Create a JNI interface

Create a Java interface file and a C++ interface file for the JNI layer, and build the C++ interface file into a .so library.

  1. Create a Java interface file to call the C++ API. The interface file should declare the relevant Java methods for calling C++. Refer to the MediaPreProcessing.java file in the sample project for the implementation.

// The Java interface file declares the relevant Java methods for calling C++.
package io.agora.advancedvideo.rawdata;

import java.nio.ByteBuffer;

public class MediaPreProcessing {
    static {
        // Load the C++ .so library. Build the C++ interface file to generate the .so library.
        // The name of the .so library depends on the library name generated by building the C++ interface file.
        System.loadLibrary("apm-plugin-raw-data");
    }

    // Define the callback interface that receives audio frames from the native layer
    public interface ProgressCallback {

        ...

        // Get the recorded audio frame
        void onRecordAudioFrame(int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength);
        // Get the playback audio frame
        void onPlaybackAudioFrame(int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength);
        // Get the playback audio frame before mixing
        void onPlaybackAudioFrameBeforeMixing(int uid, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength);
        // Get the mixed audio frame
        void onMixedAudioFrame(int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength);
    }

    public static native void setCallback(ProgressCallback callback);

    public static native void setAudioRecordByteBuffer(ByteBuffer byteBuffer);

    public static native void setAudioPlayByteBuffer(ByteBuffer byteBuffer);

    public static native void setBeforeAudioMixByteBuffer(ByteBuffer byteBuffer);

    public static native void setAudioMixByteBuffer(ByteBuffer byteBuffer);

    public static native void releasePoint();

}

  2. Run the following command to generate a .h file from the Java interface file:

# JDK 10 or later
javac -h . MediaPreProcessing.java

# JDK 9 or earlier
javac MediaPreProcessing.java
javah -jni MediaPreProcessing.class
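
By JNI convention, the generated header is named after the package and class, in this case io_agora_advancedvideo_rawdata_MediaPreProcessing.h; this is the file the C++ interface file includes.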

  3. Create a C++ interface file. The C++ interface file exports the corresponding methods from the C++ SDK based on the generated .h file. Refer to the io_agora_advancedvideo_rawdata_MediaPreProcessing.cpp file in the sample project for the implementation.
     The JNI defines the C++ data structures that map to Java types; see the JNI specification for details.

// Includes needed by this excerpt; the header names follow the sample project and
// the Agora Android SDK. AttachThreadScoped is a helper class defined elsewhere in
// the sample project.
#include <jni.h>
#include <cstring>
#include <map>
#include <android/log.h>
#include "io_agora_advancedvideo_rawdata_MediaPreProcessing.h"
#include "IAgoraRtcEngine.h"
#include "IAgoraMediaEngine.h"

using std::map;

// Global variables

jobject gCallBack = nullptr;
jclass gCallbackClass = nullptr;
// Method IDs at the Java level
jmethodID recordAudioMethodId = nullptr;
jmethodID playbackAudioMethodId = nullptr;
jmethodID playBeforeMixAudioMethodId = nullptr;
jmethodID mixAudioMethodId = nullptr;
// ByteBuffer for audio frames from onRecordAudioFrame
void *_javaDirectPlayBufferRecordAudio = nullptr;
// ByteBuffer for audio frames from onPlaybackAudioFrame
void *_javaDirectPlayBufferPlayAudio = nullptr;
// ByteBuffer for audio frames from onPlaybackAudioFrameBeforeMixing
void *_javaDirectPlayBufferBeforeMixAudio = nullptr;
// ByteBuffer for audio frames from onMixedAudioFrame
void *_javaDirectPlayBufferMixAudio = nullptr;
map<int, void *> decodeBufferMap;

static JavaVM *gJVM = nullptr;

// Implement the IAudioFrameObserver class and related callbacks
class AgoraAudioFrameObserver : public agora::media::IAudioFrameObserver
{

public:
    AgoraAudioFrameObserver()
    {
        gCallBack = nullptr;
    }

    ~AgoraAudioFrameObserver()
    {
    }

    // Get audio frames from the AudioFrame object, copy them to the ByteBuffer,
    // and call the Java method by method ID
    void getAudioFrame(AudioFrame &audioFrame, _jmethodID *jmethodID, void *_byteBufferObject,
                       unsigned int uid)
    {
        if (_byteBufferObject == nullptr)
        {
            return;
        }

        // AttachThreadScoped attaches the SDK callback thread to the JVM
        // so that a JNIEnv is available on this thread.
        AttachThreadScoped ats(gJVM);
        JNIEnv *env = ats.env();
        if (env == nullptr)
        {
            return;
        }
        int len = audioFrame.samples * audioFrame.bytesPerSample;
        memcpy(_byteBufferObject, audioFrame.buffer, (size_t) len); // * sizeof(int16_t)

        if (uid == 0)
        {
            env->CallVoidMethod(gCallBack, jmethodID, audioFrame.type, audioFrame.samples,
                                audioFrame.bytesPerSample,
                                audioFrame.channels, audioFrame.samplesPerSec,
                                audioFrame.renderTimeMs, len);
        } else
        {
            env->CallVoidMethod(gCallBack, jmethodID, uid, audioFrame.type, audioFrame.samples,
                                audioFrame.bytesPerSample,
                                audioFrame.channels, audioFrame.samplesPerSec,
                                audioFrame.renderTimeMs, len);
        }
    }

    // Copy the audio frames from the ByteBuffer back to the AudioFrame object
    void writebackAudioFrame(AudioFrame &audioFrame, void *byteBuffer)
    {
        if (byteBuffer == nullptr)
        {
            return;
        }

        int len = audioFrame.samples * audioFrame.bytesPerSample;
        memcpy(audioFrame.buffer, byteBuffer, (size_t) len);
    }

public:
    // Implement the onRecordAudioFrame callback
    virtual bool onRecordAudioFrame(AudioFrame &audioFrame) override
    {
        // Get the recorded audio frames
        getAudioFrame(audioFrame, recordAudioMethodId, _javaDirectPlayBufferRecordAudio, 0);
        // Send the audio frames back to the SDK
        writebackAudioFrame(audioFrame, _javaDirectPlayBufferRecordAudio);
        return true;
    }

    // Implement the onPlaybackAudioFrame callback
    virtual bool onPlaybackAudioFrame(AudioFrame &audioFrame) override
    {
        // Get the playback audio frames
        getAudioFrame(audioFrame, playbackAudioMethodId, _javaDirectPlayBufferPlayAudio, 0);
        // Send the audio frames back to the SDK
        writebackAudioFrame(audioFrame, _javaDirectPlayBufferPlayAudio);
        return true;
    }

    // Implement the onPlaybackAudioFrameBeforeMixing callback
    virtual bool onPlaybackAudioFrameBeforeMixing(unsigned int uid, AudioFrame &audioFrame) override
    {
        // Get the playback audio frames before mixing
        getAudioFrame(audioFrame, playBeforeMixAudioMethodId, _javaDirectPlayBufferBeforeMixAudio,
                      uid);
        // Send the audio frames back to the SDK
        writebackAudioFrame(audioFrame, _javaDirectPlayBufferBeforeMixAudio);
        return true;
    }

    // Implement the onMixedAudioFrame callback
    virtual bool onMixedAudioFrame(AudioFrame &audioFrame) override
    {
        // Get the mixed audio frames
        getAudioFrame(audioFrame, mixAudioMethodId, _javaDirectPlayBufferMixAudio, 0);
        // Send the audio frames back to the SDK
        writebackAudioFrame(audioFrame, _javaDirectPlayBufferMixAudio);
        return true;
    }
};


...

// AgoraAudioFrameObserver object
static AgoraAudioFrameObserver s_audioFrameObserver;
// IRtcEngine object
static agora::rtc::IRtcEngine *rtcEngine = nullptr;

// Set up the C++ interface
#ifdef __cplusplus
extern "C" {
#endif

int __attribute__((visibility("default")))
loadAgoraRtcEnginePlugin(agora::rtc::IRtcEngine *engine)
{
    __android_log_print(ANDROID_LOG_DEBUG, "agora-raw-data-plugin", "loadAgoraRtcEnginePlugin");
    rtcEngine = engine;
    return 0;
}

void __attribute__((visibility("default")))
unloadAgoraRtcEnginePlugin(agora::rtc::IRtcEngine *engine)
{
    __android_log_print(ANDROID_LOG_DEBUG, "agora-raw-data-plugin", "unloadAgoraRtcEnginePlugin");

    rtcEngine = nullptr;
}


...

// For the Java interface file, use the JNI to export the corresponding C++ methods.
// The Java_io_agora_advancedvideo_rawdata_MediaPreProcessing_setCallback method
// corresponds to the setCallback method in the Java interface file.
JNIEXPORT void JNICALL Java_io_agora_advancedvideo_rawdata_MediaPreProcessing_setCallback
        (JNIEnv *env, jclass, jobject callback)
{
    if (!rtcEngine) return;

    env->GetJavaVM(&gJVM);
    // Create an AutoPtr instance that uses the IMediaEngine class as the template
    agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
    // The AutoPtr instance calls the queryInterface method to get a pointer to the
    // IMediaEngine instance from the IID, then accesses the pointer via the arrow
    // operator to call registerAudioFrameObserver.
    mediaEngine.queryInterface(rtcEngine, agora::INTERFACE_ID_TYPE::AGORA_IID_MEDIA_ENGINE);
    if (mediaEngine)
    {

        ...

        // Register the audio frame observer
        int ret = mediaEngine->registerAudioFrameObserver(&s_audioFrameObserver);

    }

    if (gCallBack == nullptr)
    {
        gCallBack = env->NewGlobalRef(callback);
        gCallbackClass = env->GetObjectClass(gCallBack);

        // Get the method ID of each callback function through the callback object
        recordAudioMethodId = env->GetMethodID(gCallbackClass, "onRecordAudioFrame", "(IIIIIJI)V");
        playbackAudioMethodId = env->GetMethodID(gCallbackClass, "onPlaybackAudioFrame",
                                                 "(IIIIIJI)V");
        playBeforeMixAudioMethodId = env->GetMethodID(gCallbackClass,
                                                      "onPlaybackAudioFrameBeforeMixing",
                                                      "(IIIIIIJI)V");
        mixAudioMethodId = env->GetMethodID(gCallbackClass, "onMixedAudioFrame", "(IIIIIJI)V");

        ...

        __android_log_print(ANDROID_LOG_DEBUG, "setCallback", "setCallback done successfully");
    }

}

...

// C++ implementation of setAudioRecordByteBuffer in the Java interface file
JNIEXPORT void JNICALL
Java_io_agora_advancedvideo_rawdata_MediaPreProcessing_setAudioRecordByteBuffer
        (JNIEnv *env, jclass, jobject bytebuffer)
{
    _javaDirectPlayBufferRecordAudio = env->GetDirectBufferAddress(bytebuffer);
}
// C++ implementation of setAudioPlayByteBuffer in the Java interface file
JNIEXPORT void JNICALL Java_io_agora_advancedvideo_rawdata_MediaPreProcessing_setAudioPlayByteBuffer
        (JNIEnv *env, jclass, jobject bytebuffer)
{
    _javaDirectPlayBufferPlayAudio = env->GetDirectBufferAddress(bytebuffer);
}
// C++ implementation of setBeforeAudioMixByteBuffer in the Java interface file
JNIEXPORT void JNICALL
Java_io_agora_advancedvideo_rawdata_MediaPreProcessing_setBeforeAudioMixByteBuffer
        (JNIEnv *env, jclass, jobject bytebuffer)
{
    _javaDirectPlayBufferBeforeMixAudio = env->GetDirectBufferAddress(bytebuffer);
}
// C++ implementation of setAudioMixByteBuffer in the Java interface file
JNIEXPORT void JNICALL Java_io_agora_advancedvideo_rawdata_MediaPreProcessing_setAudioMixByteBuffer
        (JNIEnv *env, jclass, jobject bytebuffer)
{
    _javaDirectPlayBufferMixAudio = env->GetDirectBufferAddress(bytebuffer);
}

...

#ifdef __cplusplus
}
#endif
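
The signature strings passed to GetMethodID encode the Java parameter lists of the ProgressCallback methods: in "(IIIIIJI)V", each I is an int, J is the long renderTimeMs parameter, and V is the void return type, matching onRecordAudioFrame; onPlaybackAudioFrameBeforeMixing takes an extra leading int for the uid, hence "(IIIIIIJI)V".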

  4. Build the C++ interface file via the NDK to generate a .so library. Use the System.loadLibrary() method to load the generated .so library in the Java interface file. See the following CMake file.

cmake_minimum_required(VERSION 3.4.1)

add_library( # Sets the name of the library.
             apm-plugin-raw-data

             # Sets the library as a shared library.
             SHARED

             # Provides a relative path to your source file(s).
             src/main/cpp/io_agora_advancedvideo_rawdata_MediaPreProcessing.cpp)


find_library( # Sets the name of the path variable.
              log-lib

              # Specifies the name of the NDK library that
              # you want CMake to locate.
              log)

target_link_libraries( # Specifies the target library.
                       apm-plugin-raw-data

                       # Links the target library to the log library
                       # included in the NDK.
                       ${log-lib})
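
In a typical Android Studio project, you point Gradle at this CMake file through the module's externalNativeBuild settings. Note that the library name given to add_library must match the name passed to System.loadLibrary("apm-plugin-raw-data") in the Java interface file.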

Implement the raw audio data function in a Java project
  1. Implement the ProgressCallback interface declared in the Java interface file, which maps to the C++ callbacks.

// Implement the ProgressCallback interface in Java
public class MediaDataObserverPlugin implements MediaPreProcessing.ProgressCallback {


    ...

    // Get the recorded audio frame
    @Override
    public void onRecordAudioFrame(int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
        byte[] buf = new byte[bufferLength];
        byteBufferAudioRecord.limit(bufferLength);
        byteBufferAudioRecord.get(buf);
        byteBufferAudioRecord.flip();

        for (MediaDataAudioObserver observer : audioObserverList) {
            observer.onRecordAudioFrame(buf, audioFrameType, samples, bytesPerSample, channels, samplesPerSec, renderTimeMs, bufferLength);
        }

        byteBufferAudioRecord.put(buf);
        byteBufferAudioRecord.flip();
    }

    // Get the playback audio frame
    @Override
    public void onPlaybackAudioFrame(int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
        byte[] buf = new byte[bufferLength];
        byteBufferAudioPlay.limit(bufferLength);
        byteBufferAudioPlay.get(buf);
        byteBufferAudioPlay.flip();

        for (MediaDataAudioObserver observer : audioObserverList) {
            observer.onPlaybackAudioFrame(buf, audioFrameType, samples, bytesPerSample, channels, samplesPerSec, renderTimeMs, bufferLength);
        }

        byteBufferAudioPlay.put(buf);
        byteBufferAudioPlay.flip();
    }

    // Get the playback audio frame before mixing
    @Override
    public void onPlaybackAudioFrameBeforeMixing(int uid, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
        byte[] buf = new byte[bufferLength];
        byteBufferBeforeAudioMix.limit(bufferLength);
        byteBufferBeforeAudioMix.get(buf);
        byteBufferBeforeAudioMix.flip();

        for (MediaDataAudioObserver observer : audioObserverList) {
            observer.onPlaybackAudioFrameBeforeMixing(uid, buf, audioFrameType, samples, bytesPerSample, channels, samplesPerSec, renderTimeMs, bufferLength);
        }

        byteBufferBeforeAudioMix.put(buf);
        byteBufferBeforeAudioMix.flip();
    }

    // Get the mixed audio frame
    @Override
    public void onMixedAudioFrame(int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
        byte[] buf = new byte[bufferLength];
        byteBufferAudioMix.limit(bufferLength);
        byteBufferAudioMix.get(buf);
        byteBufferAudioMix.flip();

        for (MediaDataAudioObserver observer : audioObserverList) {
            observer.onMixedAudioFrame(buf, audioFrameType, samples, bytesPerSample, channels, samplesPerSec, renderTimeMs, bufferLength);
        }

        byteBufferAudioMix.put(buf);
        byteBufferAudioMix.flip();
    }
}
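
The native setters store GetDirectBufferAddress pointers, so the buffers shared with C++ must be allocated with ByteBuffer.allocateDirect and handed to the native layer before any frames arrive. A minimal sketch of this wiring, assuming the field names used above; the capacity is illustrative, not a required value:

// Hypothetical initialization; the sample project performs the equivalent
// inside MediaDataObserverPlugin.
ByteBuffer byteBufferAudioRecord = ByteBuffer.allocateDirect(8192);
ByteBuffer byteBufferAudioPlay = ByteBuffer.allocateDirect(8192);
MediaPreProcessing.setAudioRecordByteBuffer(byteBufferAudioRecord);
MediaPreProcessing.setAudioPlayByteBuffer(byteBufferAudioPlay);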

  2. Call the setCallback method. The setCallback method calls the registerAudioFrameObserver C++ method via JNI to register an audio frame observer.

@Override
public void onActivityCreated(@Nullable Bundle savedInstanceState) {
    super.onActivityCreated(savedInstanceState);
    mediaDataObserverPlugin = MediaDataObserverPlugin.the();
    // Register the audio frame observer
    MediaPreProcessing.setCallback(mediaDataObserverPlugin);

    ...

}
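
When the observer is no longer needed, for example when the activity is destroyed, release the native references; the Java interface file shown earlier declares a native releasePoint() method, which the sample project uses for this cleanup.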

  3. Implement the onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, and onMixedAudioFrame callbacks. Get the audio frames from the callbacks, and process the audio frames.

// Get the recorded audio frame
@Override
public void onRecordAudioFrame(byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {

}

// Get the playback audio frame
@Override
public void onPlaybackAudioFrame(byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {

}

// Get the playback audio frame before mixing
@Override
public void onPlaybackAudioFrameBeforeMixing(int uid, byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {

}

// Get the mixed audio frame
@Override
public void onMixedAudioFrame(byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {

}
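
Because MediaDataObserverPlugin copies buf back into the shared direct buffer after the observers run (see the implementation above), changes you make to the data array propagate to the SDK. As an illustration, not taken from the sample project, the following onRecordAudioFrame body mutes the captured audio by zeroing the buffer:

// Hypothetical example: mute the outgoing stream by zeroing the recorded frame.
@Override
public void onRecordAudioFrame(byte[] data, int audioFrameType, int samples, int bytesPerSample, int channels, int samplesPerSec, long renderTimeMs, int bufferLength) {
    // The plugin writes these bytes back into the buffer shared with C++,
    // so the SDK sends silence for this frame.
    java.util.Arrays.fill(data, (byte) 0);
}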

API reference
