Custom audio source

The default audio module of Video SDK meets the needs of basic audio use cases in your app. To add advanced audio functionality, Video SDK supports custom audio sources and custom audio rendering modules.

Video SDK uses the basic audio module on the device your app runs on by default. However, there are certain scenarios where you want to integrate a custom audio source into your app, such as:

  • Your app has its own audio module.
  • You need to process the captured audio with a pre-processing library for audio enhancement.
  • You need flexible device resource allocation to avoid conflicts with other services.

This page shows you how to capture and render audio from custom sources.

Understand the tech

To set an external audio source, you configure the Agora Engine before joining a channel. To manage the capture and processing of audio frames, you use methods from outside the Video SDK that are specific to your custom source. Video SDK enables you to push processed audio data to the subscribers in a channel.

Custom audio capture

The following figure illustrates the process of custom audio capture.

Audio data transmission

  • You implement the capture module yourself, using methods outside the SDK.
  • You call pushExternalAudioFrame to send the captured audio frames to the SDK.

Custom audio rendering

The following figure illustrates the process of custom audio rendering.

Audio Data Transmission

  • You implement the rendering module yourself, using methods outside the SDK.
  • You call pullPlaybackAudioFrame to retrieve the audio data sent by remote users.

Prerequisites

Ensure that you have implemented the SDK quickstart in your project.

Implementation

This section shows you how to implement custom audio capture and custom audio rendering in your app.

Custom audio capture

Refer to the following call sequence diagram to implement custom audio capture in your app:

Custom audio capture

Follow these steps to implement custom audio capture in your project:

  1. After initializing RtcEngine, call createCustomAudioTrack to create a custom audio track and obtain the audio track ID.


    AudioTrackConfig config = new AudioTrackConfig();
    // Do not play the custom audio track locally
    config.enableLocalPlayback = false;
    customAudioTrack = engine.createCustomAudioTrack(Constants.AudioTrackType.AUDIO_TRACK_MIXABLE, config);

  2. Call joinChannel to join the channel. In ChannelMediaOptions, set publishCustomAudioTrackId to the audio track ID obtained in step 1, and set publishCustomAudioTrack to true to publish the custom audio track.

    Information

    To use enableCustomAudioLocalPlayback for local playback of an external audio source, or to adjust the volume of a custom audio track with adjustCustomAudioPlayoutVolume, set enableAudioRecordingOrPlayout to true in ChannelMediaOptions.


    ChannelMediaOptions option = new ChannelMediaOptions();
    option.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
    option.autoSubscribeAudio = true;
    option.autoSubscribeVideo = true;
    // In the custom audio capture scenario, do not publish the microphone-captured audio
    option.publishMicrophoneTrack = false;
    // Publish the custom audio track
    option.publishCustomAudioTrack = true;
    // Set the custom audio track ID
    option.publishCustomAudioTrackId = customAudioTrack;
    // Join the channel
    int res = engine.joinChannel(accessToken, channelId, 0, option);
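
    To hear the custom audio locally, as the Information note above describes, enable playout and call the local playback APIs. A minimal sketch follows; the exact signatures of enableCustomAudioLocalPlayback and adjustCustomAudioPlayoutVolume vary between SDK versions, so treat them as assumptions and check your SDK reference.

    // Hedged sketch: set this in ChannelMediaOptions before calling joinChannel
    option.enableAudioRecordingOrPlayout = true;
    // After joining: play the custom audio track locally (assumed signature: trackId, enabled)
    engine.enableCustomAudioLocalPlayback(customAudioTrack, true);
    // Adjust the local playout volume of the custom track (100 is the original volume)
    engine.adjustCustomAudioPlayoutVolume(customAudioTrack, 100);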

  3. Agora provides the AudioFileReader.java sample to demonstrate how to read and publish PCM-format audio data from a local file. In a production environment, you create a custom audio acquisition module based on your business needs.
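
    If you don't have the sample project, the following is a minimal sketch of such an acquisition module, modeled on AudioFileReader. It reads 16-bit PCM from an input stream on a worker thread and delivers one 10 ms frame at a time through a callback. The class name, constants, and callback shape are illustrative assumptions, not SDK requirements.

    // Hypothetical minimal PCM acquisition module (not part of the SDK)
    public class PcmFileReader {
        public interface OnAudioReadListener {
            void onAudioRead(byte[] buffer, long timestamp);
        }

        public static final int SAMPLE_RATE = 44100;       // assumed sampling rate (Hz)
        public static final int SAMPLE_NUM_OF_CHANNEL = 1; // assumed mono
        private static final int BYTES_PER_SAMPLE = 2;     // 16-bit PCM
        private static final int INTERVAL_MS = 10;         // deliver one frame every 10 ms
        // Bytes per 10 ms frame: samples per ms * bytes per sample * channels * 10 ms
        private static final int BUFFER_SIZE =
                SAMPLE_RATE / 1000 * BYTES_PER_SAMPLE * SAMPLE_NUM_OF_CHANNEL * INTERVAL_MS;

        private final java.io.InputStream input;
        private final OnAudioReadListener listener;
        private volatile boolean running;

        public PcmFileReader(java.io.InputStream input, OnAudioReadListener listener) {
            this.input = input;
            this.listener = listener;
        }

        public void start() {
            running = true;
            new Thread(() -> {
                byte[] buffer = new byte[BUFFER_SIZE];
                try {
                    // Read one frame per iteration and hand it to the listener.
                    // For A/V sync, prefer engine.getCurrentMonotonicTimeInMs() over
                    // System.currentTimeMillis(), as noted in step 4.
                    while (running && input.read(buffer) == BUFFER_SIZE) {
                        listener.onAudioRead(buffer.clone(), System.currentTimeMillis());
                        Thread.sleep(INTERVAL_MS); // pace delivery at roughly real time
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }

        public void stop() {
            running = false;
        }
    }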

  4. Call pushExternalAudioFrame to send the captured audio frame to the SDK through the custom audio track. Ensure that the trackId matches the audio track ID you obtained by calling createCustomAudioTrack. Set sampleRate, channels, and bytesPerSample to define the sampling rate, number of channels, and bytes per sample of the external audio frame.

    Information

    For audio and video synchronization, Agora recommends calling getCurrentMonotonicTimeInMs to get the system’s current monotonic time and setting the timestamp accordingly.
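
    For example, if you generate frames yourself rather than using the sample reader's timestamps, a one-line sketch:

    // Use the SDK's monotonic clock for the frame timestamp
    long timestamp = engine.getCurrentMonotonicTimeInMs();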


    audioPushingHelper = new AudioFileReader(requireContext(), (buffer, timestamp) -> {
        if (joined && engine != null && customAudioTrack != -1) {
            // Push the external audio frame to the SDK
            int ret = engine.pushExternalAudioFrame(buffer, timestamp,
                    AudioFileReader.SAMPLE_RATE,
                    AudioFileReader.SAMPLE_NUM_OF_CHANNEL,
                    Constants.BytesPerSample.TWO_BYTES_PER_SAMPLE,
                    customAudioTrack);
            Log.i(TAG, "pushExternalAudioFrame times:" + (++pushTimes) + ", ret=" + ret);
        }
    });

  5. To stop publishing custom audio, call destroyCustomAudioTrack to destroy the custom audio track.


    // Destroy the custom audio track
    engine.destroyCustomAudioTrack(customAudioTrack);

Custom audio rendering

Refer to the following call sequence diagram to implement custom audio rendering in your app:

Custom Audio Rendering Workflow

To implement custom audio rendering, use the following methods:

  1. Before calling joinChannel, use setExternalAudioSink to enable and configure custom audio rendering.


    rtcEngine.setExternalAudioSink(
            true,  // Enable custom audio rendering
            44100, // Sampling rate (Hz). Set this value to 16000, 32000, 44100, or 48000
            1      // Number of channels for the custom audio source. Set this value to 1 or 2
    );

  2. After joining the channel, call pullPlaybackAudioFrame to get audio data sent by remote users. Use your own audio renderer to process the audio data and then play the rendered data.


    private class FileThread implements Runnable {

        @Override
        public void run() {
            while (mPull) {
                // Bytes per 10 ms frame: 48 kHz * 2 bytes per sample * 1 channel * 10 ms
                int lengthInByte = 48000 / 1000 * 2 * 1 * 10;
                ByteBuffer frame = ByteBuffer.allocateDirect(lengthInByte);
                int ret = engine.pullPlaybackAudioFrame(frame, lengthInByte);
                byte[] data = new byte[frame.remaining()];
                frame.get(data, 0, data.length);
                // Write to a local file or render using a player
                FileIOUtils.writeFileFromBytesByChannel("/sdcard/agora/pull_48k.pcm", data, true, true);
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
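
    Note that the frame size you pull must match the sample rate and channel count you passed to setExternalAudioSink; the thread above assumes 48 kHz mono, whereas the example in step 1 configures 44100 Hz. A minimal sketch of driving this thread around the channel lifecycle, assuming mPull is a volatile boolean field:

    // Hedged sketch: start pulling after joining the channel
    mPull = true;
    new Thread(new FileThread()).start();

    // ... and stop pulling before leaving the channel
    mPull = false;
    engine.leaveChannel();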

Using raw audio data callback

As an alternative to pulling audio frames, you can implement custom audio rendering with the raw audio data callbacks.

To retrieve audio data for playback, implement collection and processing of raw audio data in your app. For details, see Raw audio processing.

Follow these steps to call the raw audio data API in your project for custom audio rendering:

  1. Retrieve audio data for playback using the onRecordAudioFrame, onPlaybackAudioFrame, onMixedAudioFrame, or onPlaybackAudioFrameBeforeMixing callback.

  2. Independently render and play the audio data.
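
A minimal sketch of wiring up the playback callback is shown below. It assumes the IAudioFrameObserver interface and registerAudioFrameObserver method of recent 4.x SDK versions; the callback parameter list and return-value semantics differ between SDK versions, so treat the signature as illustrative and check your SDK reference. Depending on the SDK version, you may also need setPlaybackAudioFrameParameters to set the desired frame format.

    // Hedged sketch: parameters and return semantics are illustrative
    engine.registerAudioFrameObserver(new IAudioFrameObserver() {
        @Override
        public boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel,
                int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer,
                long renderTimeMs, int avSyncType) {
            // Copy the PCM data out of the direct buffer and hand it to your own renderer
            byte[] pcm = new byte[buffer.remaining()];
            buffer.get(pcm);
            myRenderer.play(pcm); // myRenderer is your own playback component (assumed)
            return false; // check your SDK reference for the exact return-value semantics
        }
        // Implement the remaining IAudioFrameObserver callbacks as no-ops
    });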

Reference

This section contains additional information that completes the custom audio implementation described on this page.
