Custom audio source

By default, Voice SDK uses the standard audio module on the device your app runs on. However, there are scenarios where you may want to integrate a custom audio source into your app, for example:

  • Your app has its own audio module.

  • You want to use a non-microphone source, such as recorded audio data.

  • You need to process the captured audio with a pre-processing library for audio enhancement.

  • You need flexible device resource allocation to avoid conflicts with other services.

Understand the tech

To set an external audio source, you configure the Agora Engine before joining a channel. To manage the capture and processing of audio frames, you use methods from outside the Voice SDK that are specific to your custom source. Voice SDK enables you to push processed audio data to subscribers in a channel.

The following figure shows the workflow you need to implement to stream a custom audio source in your app.

Figure: Process custom audio
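
In outline, the Voice SDK calls you make are the following. This is a sketch only, assuming an initialized RtcEngine instance named agoraEngine; each step is covered in detail in the sections below.

// Outline of the custom audio workflow (sketch only; assumes an
// initialized RtcEngine instance named agoraEngine)

// 1. Create a custom, mixable audio track
val trackConfig = AudioTrackConfig()
trackConfig.enableLocalPlayback = true
val trackId = agoraEngine.createCustomAudioTrack(
    Constants.AudioTrackType.AUDIO_TRACK_MIXABLE, trackConfig
)

// 2. Publish the custom track instead of the microphone track
val mediaOptions = ChannelMediaOptions()
mediaOptions.publishCustomAudioTrack = true
mediaOptions.publishCustomAudioTrackId = trackId
mediaOptions.publishMicrophoneTrack = false
agoraEngine.updateChannelMediaOptions(mediaOptions)

// 3. Push raw PCM frames to the track from your own capture or processing code
// agoraEngine.pushExternalAudioFrame(pcmBuffer, timestampMs, sampleRate,
//     channels, Constants.BytesPerSample.TWO_BYTES_PER_SAMPLE, trackId)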

Prerequisites

To test the code used on this page, you need:

  • An Agora account and project.

  • A computer with Internet access.

    Ensure that no firewall is blocking your network communication.

Integrate custom audio or video

To stream from a custom source, you convert the data stream into a suitable format and push this data to the channel using Voice SDK.
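
The data you push must be raw PCM that matches the sample rate, channel count, and sample size you declare when pushing frames; the example on this page uses 16-bit samples read from a WAV file. If your source produces floating-point samples instead, a conversion step along the lines of the following sketch may be needed (floatsToPcm16 is a hypothetical helper, not part of the sample code):

// Hypothetical helper: convert floating-point samples in the range -1.0..1.0
// to 16-bit little-endian PCM, the sample layout used by the WAV file in this example
fun floatsToPcm16(samples: FloatArray): ByteArray {
    val out = ByteArray(samples.size * 2)
    for (i in samples.indices) {
        val s = (samples[i].coerceIn(-1f, 1f) * Short.MAX_VALUE).toInt()
        out[2 * i] = (s and 0xFF).toByte()          // low byte first
        out[2 * i + 1] = ((s shr 8) and 0xFF).toByte()
    }
    return out
}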

Implement a custom audio source

To push audio from a custom source to a channel, take the following steps:

Add the required imports


import android.os.Process
import io.agora.rtc2.ChannelMediaOptions
import io.agora.rtc2.Constants
import io.agora.rtc2.audio.AudioTrackConfig
import java.io.IOException
import java.io.InputStream

Add the required variables


// Custom audio parameters
private var customAudioTrackId = -1
private val audioFile = "applause.wav" // raw audio file in the assets folder
private val sampleRate = 44100 // samples per second
private val numberOfChannels = 2 // stereo
private val bitsPerSample = 16 // 16-bit PCM
private val samples = 441 // samples per push
private val bufferSize = samples * bitsPerSample / 8 * numberOfChannels // bytes per push
private val pushInterval = samples * 1000 / sampleRate // milliseconds between pushes
private var inputStream: InputStream? = null
private var pushingTask: Thread? = null
var pushingAudio = false
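
With these values, each push carries 441 samples of 16-bit stereo audio, which at 44100 Hz corresponds to 10 ms of audio per push and a buffer of 441 × 2 bytes × 2 channels = 1764 bytes. As a quick check using the same expressions:

// Sanity check of the derived values (same formulas as the declarations above)
check(bufferSize == 441 * 16 / 8 * 2)        // 1764 bytes per push
check(pushInterval == 441 * 1000 / 44100)    // 10 ms between pushes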

Enable custom audio track publishing

To enable custom audio track publishing, you set ChannelMediaOptions to disable the microphone audio track and enable the custom audio track. You also enable custom audio local playback and set the external audio source.


fun playCustomAudio() {
    // Create a custom audio track
    val audioTrackConfig = AudioTrackConfig()
    audioTrackConfig.enableLocalPlayback = true

    customAudioTrackId = agoraEngine!!.createCustomAudioTrack(
        Constants.AudioTrackType.AUDIO_TRACK_MIXABLE,
        audioTrackConfig
    )

    // Set custom audio publishing options
    val options = ChannelMediaOptions()
    options.publishCustomAudioTrack = true // Enable publishing custom audio
    options.publishCustomAudioTrackId = customAudioTrackId
    options.publishMicrophoneTrack = false // Disable publishing microphone-captured audio
    agoraEngine!!.updateChannelMediaOptions(options)

    // Open the audio file
    openAudioFile()

    // Start the pushing task
    pushingTask = Thread(PushingTask(this))
    pushingAudio = true
    pushingTask?.start()
}

private fun openAudioFile() {
    // Open the audio file from the assets folder
    try {
        inputStream = mContext.resources.assets.open(audioFile)
    } catch (e: IOException) {
        e.printStackTrace()
    }
}

fun stopCustomAudio() {
    pushingAudio = false
    pushingTask?.interrupt()
}
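
You call these methods around the channel lifecycle. The following usage sketch is not part of the sample; token, channelName, and uid are placeholders for your app's own values.

// Hypothetical usage around the channel lifecycle
fun startCustomAudioSession() {
    agoraEngine?.joinChannel(token, channelName, uid, ChannelMediaOptions())
    playCustomAudio() // create the track, update publishing options, start pushing
}

fun endCustomAudioSession() {
    stopCustomAudio() // stop the pushing thread
    agoraEngine?.leaveChannel()
}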

Read the input stream into a buffer

You read data from the input stream into a buffer.


private fun readBuffer(): ByteArray? {
    // Read the next chunk of the audio file into a buffer
    val byteSize = bufferSize
    val buffer = ByteArray(byteSize)
    try {
        if (inputStream!!.read(buffer) < 0) {
            // End of file: rewind and read from the beginning again
            inputStream!!.reset()
            return readBuffer()
        }
    } catch (e: IOException) {
        e.printStackTrace()
    }
    return buffer
}

Push the audio frames

You push the data in the buffer as an audio frame on a separate thread.


internal class PushingTask(private val manager: CustomVideoAudioManager) : Runnable {
    override fun run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO)
        while (manager.pushingAudio) {
            val before = System.currentTimeMillis()
            manager.agoraEngine?.pushExternalAudioFrame(
                manager.readBuffer(),
                System.currentTimeMillis(),
                manager.sampleRate,
                manager.numberOfChannels,
                Constants.BytesPerSample.TWO_BYTES_PER_SAMPLE,
                manager.customAudioTrackId
            )
            val now = System.currentTimeMillis()
            val consuming = now - before
            if (consuming < manager.pushInterval) {
                try {
                    Thread.sleep(manager.pushInterval - consuming)
                } catch (e: InterruptedException) {
                    e.printStackTrace()
                }
            }
        }
    }
}

Test custom streams

To ensure that you have implemented streaming from a custom source into your app:

  1. Load the web demo

    1. Generate a temporary token in Agora Console

    2. In your browser, navigate to the Agora web demo and update App ID, Channel, and Token with the values for your temporary token, then click Join.

  2. Clone the documentation reference app

  3. Configure the project

    1. Open the file <samples-root>/agora-manager/res/raw/config.json

    2. Set appId to the AppID of your project.

    3. Choose one of the following authentication methods:

      • Temporary token
        1. Generate an RTC token using your uid and channelName and set rtcToken to this value in config.json.
        2. Set channelName to the name of the channel you used to create the rtcToken.
      • Authentication server
        1. Set up an authentication server
        2. In config.json, set:
          • channelName to the name of a channel you want to join.
          • token and rtcToken to empty strings.
          • serverUrl to the base URL for your token server. For example: https://agora-token-service-production-yay.up.railway.app.
  4. Run the reference app

    1. In Android Studio, connect a physical Android device to your development machine.
    2. Click Run to launch the app.
    3. A moment later you see the project installed on your device.

    If this is the first time you run the project, grant microphone access to the app.

  5. Choose this sample in the reference app

    From the main screen of the app, choose Voice Calling from the dropdown and then select Custom video and audio.

  6. Test the custom audio source

    Press Join. You hear the audio file streamed to the web demo app.

    To use this code to stream data from your own custom audio source, modify the readBuffer() method to read the audio data from your source instead of a raw audio file, as in the sketch that follows these steps.
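
For example, if your source is live audio captured through Android's AudioRecord rather than a file, the read might look roughly like the following; audioRecord is a hypothetical, already-initialized AudioRecord instance and is not part of the reference app.

// Hypothetical variant of readBuffer() that reads raw PCM from an
// already-initialized android.media.AudioRecord instance instead of a file
private fun readBufferFromRecorder(): ByteArray? {
    val buffer = ByteArray(bufferSize)
    val read = audioRecord.read(buffer, 0, bufferSize) // blocking read
    return if (read > 0) buffer else null
}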

Reference

This section contains content that completes the information on this page, or points you to documentation that explains other aspects of this product.

Voice Calling