
Custom audio source

The default audio module of Video SDK covers the basic audio requirements of most apps. To add advanced audio features, Video SDK supports custom audio capture and custom audio rendering modules.

Video SDK uses the basic audio module on the device your app runs on by default. However, there are certain scenarios where you want to integrate a custom audio source into your app, such as:

  • Your app has its own audio module.
  • You need to process the captured audio with a pre-processing library for audio enhancement.
  • You need flexible device resource allocation to avoid conflicts with other services.

This page shows you how to capture and render audio from custom sources.

Understand the tech

To set an external audio source, you configure the Agora Engine before joining a channel. To manage the capture and processing of audio frames, you use methods from outside the Video SDK that are specific to your custom source. Video SDK enables you to push processed audio data to the subscribers in a channel.

Custom audio capture

The following figure illustrates the process of custom audio capture.

Audio data transmission

  • You implement the capture module using methods from outside the SDK.
  • You call pushExternalAudioFrame to send the captured audio frames to the SDK.

Custom audio rendering

The following figure illustrates the process of custom audio rendering.

Audio data transmission

  • You implement the rendering module using methods from outside the SDK.
  • You call pullPlaybackAudioFrame to retrieve the audio data sent by remote users.

Prerequisites

Ensure that you have implemented the SDK quickstart in your project.

Implementation

This section shows you how to implement custom audio capture and custom audio rendering.

Custom audio capture

Refer to the following call sequence diagram to implement custom audio capture in your app:

Custom audio capture

Follow these steps to implement custom audio capture in your project:

  1. Before calling joinChannel, use setExternalAudioSource to enable and configure your custom audio source.


    ```java
    // Specify the custom audio source
    engine.setExternalAudioSource(true, DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_COUNT, 1, true, true);
    // Join the channel for the local user
    ChannelMediaOptions option = new ChannelMediaOptions();
    option.autoSubscribeAudio = true;
    option.autoSubscribeVideo = true;
    int res = engine.joinChannel(accessToken, channelId, 0, option);
    ```

  2. Implement audio capture and processing using methods from outside the SDK.

  3. Call pushExternalAudioFrame to send audio frames to the SDK.


    ```java
    engine.pushExternalAudioFrame(ByteBuffer.wrap(buffer), 0, 0);
    ```
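The push call is typically driven by a dedicated capture thread that delivers one 10 ms PCM frame per iteration. The sketch below (plain Java, no SDK dependency) illustrates that loop structure; `FramePusher` is a hypothetical stand-in for `engine.pushExternalAudioFrame`, and the sample rate and channel count are assumptions that must match the values passed to `setExternalAudioSource`.

```java
import java.nio.ByteBuffer;

class CaptureLoop {
    // Hypothetical stand-in for engine.pushExternalAudioFrame
    interface FramePusher {
        int push(ByteBuffer frame, long timestampMs);
    }

    // Assumed values; must match the setExternalAudioSource configuration
    static final int SAMPLE_RATE = 48000;
    static final int CHANNELS = 1;
    static final int BYTES_PER_SAMPLE = 2;  // 16-bit PCM
    static final int FRAME_MS = 10;

    // Bytes in one 10 ms frame: 48000/1000 samples/ms * 10 ms * 1 channel * 2 bytes
    static int frameSizeBytes() {
        return SAMPLE_RATE / 1000 * FRAME_MS * CHANNELS * BYTES_PER_SAMPLE;
    }

    // Push `totalMs` of captured audio in 10 ms chunks; returns frames pushed.
    static int run(FramePusher pusher, byte[] pcm, int totalMs) {
        int size = frameSizeBytes();
        int frames = 0;
        for (int ms = 0, off = 0; ms < totalMs && off + size <= pcm.length;
             ms += FRAME_MS, off += size) {
            ByteBuffer frame = ByteBuffer.allocateDirect(size);
            frame.put(pcm, off, size);
            frame.flip();               // prepare the buffer for reading
            pusher.push(frame, ms);
            frames++;
        }
        return frames;
    }
}
```

In a real app each frame would come from your capture module (for example, a pre-processed microphone buffer), and the loop would pace itself against the audio clock rather than a byte offset.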

Custom audio rendering

Refer to the following call sequence diagram to implement custom audio rendering in your app:

Custom Audio Rendering Workflow

To implement custom audio rendering, use the following methods:

  1. Before calling joinChannel, use setExternalAudioSink to enable and configure custom audio rendering.


    ```java
    rtcEngine.setExternalAudioSink(
        true,   // Enable custom audio rendering
        44100,  // Sampling rate (Hz). Set this value to 16000, 32000, 44100, or 48000
        1       // Number of channels for the custom audio source. Set this value to 1 or 2
    );
    ```

  2. After joining the channel, call pullPlaybackAudioFrame to get audio data sent by remote users. Use your own audio renderer to process the audio data and then play the rendered data.


    ```java
    private class FileThread implements Runnable {

        @Override
        public void run() {
            while (mPull) {
                // One 10 ms frame of 48 kHz, 16-bit, mono PCM
                int lengthInByte = 48000 / 1000 * 2 * 1 * 10;
                ByteBuffer frame = ByteBuffer.allocateDirect(lengthInByte);
                int ret = engine.pullPlaybackAudioFrame(frame, lengthInByte);
                byte[] data = new byte[frame.remaining()];
                frame.get(data, 0, data.length);
                // Write to a local file or render using a player
                FileIOUtils.writeFileFromBytesByChannel("/sdcard/agora/pull_48k.pcm", data, true, true);
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
    ```

Using raw audio data callback

As an alternative to pulling audio frames, you can retrieve the audio data for playback through the raw audio data callbacks. For details on collecting and processing raw audio data, refer to Raw audio processing.

Follow these steps to call the raw audio data API in your project for custom audio rendering:

  1. Retrieve audio data for playback using the onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, or onPlaybackAudioFrameBeforeMixing callback.

  2. Independently render and play the audio data.
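These callbacks are delivered through the SDK's audio frame observer, which you register with `registerAudioFrameObserver`. The sketch below is illustrative only: `PlaybackFrameHandler` is a simplified, hypothetical stand-in for the real `IAudioFrameObserver` interface (which carries more parameters), showing one common pattern of queueing each playback frame for an external renderer.

```java
import java.util.ArrayDeque;
import java.util.Queue;

class PlaybackRenderer {
    // Hypothetical, simplified mirror of the SDK's onPlaybackAudioFrame callback
    interface PlaybackFrameHandler {
        boolean onPlaybackAudioFrame(byte[] pcm, int sampleRate, int channels);
    }

    // Queues each playback frame for an external renderer to consume.
    static class QueueingHandler implements PlaybackFrameHandler {
        final Queue<byte[]> pending = new ArrayDeque<>();

        @Override
        public boolean onPlaybackAudioFrame(byte[] pcm, int sampleRate, int channels) {
            pending.add(pcm.clone());  // copy: the SDK may reuse the buffer after return
            return true;               // true indicates the frame was processed
        }
    }
}
```

Copying the buffer before queueing matters because the SDK may reuse the underlying memory once the callback returns.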

Reference

See Audio effects and mixing for how to implement different sound effects and audio mixing in your app.
