Custom audio source
By default, Voice SDK uses the standard audio module on the device your app runs on. However, there are certain scenarios where you want to integrate a custom audio source into your app, such as:
- Your app has its own audio module.
- You want to use a non-microphone source, such as recorded audio data.
- You need to process the captured audio with a pre-processing library for audio enhancement.
- You need flexible device resource allocation to avoid conflicts with other services.
Understand the tech
To set an external audio source, you configure the Agora Engine before joining a channel. To manage the capture and processing of audio frames, you use methods from outside the Voice SDK that are specific to your custom source. Voice SDK enables you to push processed audio data to subscribers in a channel.
The following figure shows the workflow you need to implement to stream a custom audio source in your app.
Prerequisites
To test the code used on this page, you need to have:
- Implemented either of the following:
- A computer with Internet access. Ensure that no firewall is blocking your network communication.
Integrate custom audio
To stream from a custom source, you convert the data stream into a suitable format and push this data using Voice SDK.
Implement a custom audio source
To push audio from a custom source to a channel, take the following steps:
Add the required imports
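In a Kotlin implementation built on the Agora Android SDK 4.x, the imports look roughly like the following. Treat this as a sketch and adjust it to the classes your own code actually uses:

```kotlin
import java.io.IOException
import java.io.InputStream

import io.agora.rtc2.ChannelMediaOptions
import io.agora.rtc2.RtcEngine
```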
Add the required variables
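For example, assuming a 16-bit mono PCM source, you might declare variables along these lines. The names and values here are illustrative and not part of the SDK:

```kotlin
// Audio format of the custom source (illustrative values for 16-bit mono PCM)
private val sampleRate = 16000                // samples per second
private val numberOfChannels = 1              // mono audio
private val bytesPerSample = 2                // 16-bit samples
private val samplesPerFrame = 1024            // samples pushed per call
private val bufferSize = samplesPerFrame * bytesPerSample * numberOfChannels

private var agoraEngine: RtcEngine? = null    // Agora engine, initialized elsewhere
private var inputStream: InputStream? = null  // your custom audio source
@Volatile private var pushingAudio = false    // controls the push loop
```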
Enable custom audio track publishing
To enable custom audio track publishing, you set ChannelMediaOptions to disable the microphone audio track and enable the custom audio track. You also enable custom audio local playback and set the external audio source.
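A minimal sketch in Kotlin, assuming an already-initialized RtcEngine named agoraEngine and the variables declared above. The setExternalAudioSource overload shown here reflects the 4.0/4.1 Android SDK and may differ in other versions, so verify the parameter list against the API reference for the SDK you use:

```kotlin
fun joinChannelWithCustomAudio(channelName: String, token: String?, localUid: Int) {
    // Publish the custom audio track instead of the microphone track
    val options = ChannelMediaOptions().apply {
        publishMicrophoneTrack = false   // disable the microphone audio track
        publishCustomAudioTrack = true   // enable the custom audio track
    }

    // Enable the external (custom) audio source before joining the channel.
    // Arguments: enabled, sample rate, channels, number of sources,
    // local playback of the custom audio, publish to remote users.
    agoraEngine?.setExternalAudioSource(
        true, sampleRate, numberOfChannels, 1, true, true
    )

    agoraEngine?.joinChannel(token, channelName, localUid, options)
}
```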
Read the input stream into a buffer
You read data from the input stream into a buffer.
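For instance, a readBuffer() helper along the following lines fills a byte array from the stream each time it is called. This is a sketch; the reference app's actual implementation may differ:

```kotlin
// Read one frame of audio data from the input stream into a byte buffer.
// In the reference app the stream comes from a raw audio file; point
// inputStream at your own source to stream other data.
private fun readBuffer(): ByteArray {
    val buffer = ByteArray(bufferSize)
    try {
        val bytesRead = inputStream?.read(buffer) ?: -1
        if (bytesRead < 0) {
            // End of stream: stop the push loop (or rewind the source instead)
            pushingAudio = false
        }
    } catch (e: IOException) {
        pushingAudio = false
    }
    return buffer
}
```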
Push the audio frames
You push the data in the buffer as an audio frame using a separate process.
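The sketch below pushes frames from a worker thread. pushExternalAudioFrame is part of the Android SDK, but its overloads vary between versions; the simple (data, timestamp) form used here is an assumption to check against your SDK's API reference:

```kotlin
// Push audio frames on a separate thread so reading and pushing the
// custom source never blocks the UI thread.
private fun startPushingAudio() {
    pushingAudio = true
    Thread {
        val frameDurationMs = samplesPerFrame * 1000L / sampleRate
        while (pushingAudio) {
            val frame = readBuffer()
            // Send the buffer to the channel as an external audio frame
            agoraEngine?.pushExternalAudioFrame(frame, System.currentTimeMillis())
            // Pace the loop to roughly one frame's duration
            Thread.sleep(frameDurationMs)
        }
    }.start()
}
```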
Test custom streams
To ensure that you have implemented streaming from a custom source into your app:
- Load the web demo
  - Generate a temporary token in Agora Console.
  - In your browser, navigate to the Agora web demo and update App ID, Channel, and Token with the values for your temporary token, then click Join.
- Clone the documentation reference app
- Configure the project
  - Open the file <samples-root>/agora-manager/res/raw/config.json.
  - Set appId to the App ID of your project.
  - Choose one of the following authentication methods:
    - Temporary token
      - Generate an RTC token using your uid and channelName, and set rtcToken to this value in config.json.
      - Set channelName to the name of the channel you used to create the rtcToken.
    - Authentication server
      - Set up an authentication server.
      - In config.json, set:
        - channelName to the name of a channel you want to join.
        - token and rtcToken to empty strings.
        - serverUrl to the base URL for your token server. For example: https://agora-token-service-production-yay.up.railway.app.
- Run the reference app
  - In Android Studio, connect a physical Android device to your development machine.
  - Click Run to launch the app. A moment later you see the project installed on your device.
    If this is the first time you run the project, grant microphone access to the app.
- Choose this sample in the reference app
  From the main screen of the app, choose Voice Calling from the dropdown and then select Custom video and audio.
- Test the custom audio source
  Press Join. You hear the audio file streamed to the web demo app.
To use this code for streaming data from your particular custom audio source, modify the readBuffer() method to read the audio data from your source instead of from a raw audio file.
Reference
This section contains content that completes the information on this page, or points you to documentation that explains other aspects of this product.