Custom video source

Custom video capture refers to the collection of a video stream from a custom source. Unlike the default video capture method, custom video capture enables you to control the capture source and precisely adjust video attributes. You can dynamically adjust parameters such as video quality, resolution, and frame rate to adapt to various application scenarios. For example, you can capture video from high-definition cameras or drone cameras.

Agora recommends default video capture for its stability, reliability, and ease of integration. Custom video capture offers flexibility and customization for specific video capture scenarios where default video capture does not fulfill your requirements.

Understand the tech

Video SDK provides custom video track methods for video self-capture. You create one or more custom video tracks, join channels, and publish the created video tracks in each channel. Your self-capture module drives the capture device and sends the captured video frames to the SDK through the video tracks.

The following figure shows the video data transmission process when custom video capture is implemented in a single channel or multiple channels:

  • Publish to a single channel

  • Publish to multiple channels

Applicable scenarios

Use custom video capture in the following industries and scenarios:

Specialized video processing and enhancement

In specific gaming or virtual reality scenarios, real-time effects processing, filter handling, or other enhancement effects necessitate direct access to the original video stream. Custom video capture facilitates this, enabling seamless real-time processing and enhancing the overall gaming or virtual reality experience for a more realistic outcome.

High-precision video capture

In video surveillance applications, detailed observation and analysis of scene details is necessary. Custom video capture enables higher image quality and finer control over capture to meet the requirements of video monitoring.

Capture from specific video sources

Industries such as IoT and live streaming often require the use of specific cameras, monitoring devices, or non-camera video sources, such as video capture cards or screen recording data. In such situations, default Video SDK capture may not meet your requirements, necessitating use of custom video capture.

Seamless integration with specific devices or third-party applications

In smart home or IoT applications, transmitting video from devices to users' smartphones or computers for monitoring and control may require the use of specific devices or applications for video capture. Custom video capture facilitates seamless integration of specific devices or applications with the Video SDK.

Specific video encoding formats

In certain live streaming scenarios, specific video encoding formats may be needed to meet business requirements. In such cases, Video SDK default capture might not suffice, and custom video capture is required to capture and encode videos in specific formats.

Advantages

Using custom video capture offers the following advantages:

More types of video streams

Custom video capture allows the use of higher quality and a greater variety of capture devices and cameras, resulting in clearer and smoother video streams. This enhances the user viewing experience and makes the product more competitive.

More flexible video effects

Custom video capture enables you to implement richer and more personalized video effects and filters, enhancing the user experience. You can implement effects such as beautification filters and dynamic stickers.

Adaptation to diverse scenario requirements

Custom video capture helps applications better adapt to the requirements of various scenarios, such as live streaming, video conferencing, and online education. You can customize different video capture solutions based on the scenario requirements to provide a more robust application.

Prerequisites

Ensure that you have implemented the SDK quickstart in your project.

Implement the logic

Custom video capture

The following figure shows the workflow you implement to capture and stream a custom video source in your app.

API call sequence

Take the following steps to implement this workflow:

  1. Create a custom video track

    To create a custom video track and obtain the video track ID, call createCustomVideoTrack after initializing an instance of RtcEngine. To create multiple custom video tracks, call the method multiple times.


    int videoTrackId = RtcEngine.createCustomVideoTrack();
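
    For example, a brief sketch of creating one track per video source by calling the method repeatedly (the variable names are illustrative):

    // Illustrative example: create one track per video source
    int frontCameraTrackId = RtcEngine.createCustomVideoTrack();
    int screenFeedTrackId = RtcEngine.createCustomVideoTrack();
    // Each ID is later bound to a channel through ChannelMediaOptions.customVideoTrackId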

  2. Join a channel and publish the custom video track

    Call joinChannel [2/2] to join a channel or joinChannelEx to join multiple channels. In the ChannelMediaOptions for each channel, set the customVideoTrackId to the video track ID you obtained in the previous step. Set publishCustomVideoTrack to true to publish the specified custom video track.

    • Join a single channel


      ChannelMediaOptions option = new ChannelMediaOptions();
      option.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
      option.autoSubscribeAudio = true;
      option.autoSubscribeVideo = true;
      // Publish the self-captured video stream
      option.publishCustomVideoTrack = true;
      // Set the custom video track ID
      option.customVideoTrackId = videoTrackId;
      // Join a single channel
      int res = engine.joinChannel(accessToken, channelId, 0, option);

    • Publish custom video to multiple channels


      // Set ChannelMediaOptions and call joinChannelEx once per channel
      ChannelMediaOptions option = new ChannelMediaOptions();
      option.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
      option.autoSubscribeAudio = true;
      option.autoSubscribeVideo = true;
      // Publish the self-captured video stream
      option.publishCustomVideoTrack = true;
      // Set the custom video track ID
      option.customVideoTrackId = videoTrackId;
      // Join multiple channels; connection is an RtcConnection specifying the channel name and local uid
      int res = engine.joinChannelEx(accessToken, connection, option, new IRtcEngineEventHandler() {});

  3. Implement your self-capture module

    Agora provides the VideoFileReader demo project that shows you how to read YUV format video data from a local file. In a production environment, create a custom video capture module for your device using Video SDK, based on your business requirements.
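
    The following is a minimal sketch of such a reader, assuming raw I420 frames of a known, fixed resolution stored back-to-back in a local file. The class name and constructor parameters are illustrative, not part of the SDK:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Reads fixed-size I420 frames from a raw YUV file
    public class RawYuvFileReader implements java.io.Closeable {
        private final DataInputStream input;
        private final int frameSize;

        public RawYuvFileReader(String path, int width, int height) throws IOException {
            this.input = new DataInputStream(new FileInputStream(path));
            // An I420 frame occupies width * height * 3 / 2 bytes
            this.frameSize = width * height * 3 / 2;
        }

        // Returns the next frame, or null at end of file
        public byte[] readFrame() throws IOException {
            byte[] frame = new byte[frameSize];
            try {
                input.readFully(frame);
                return frame;
            } catch (EOFException eof) {
                return null;
            }
        }

        @Override
        public void close() throws IOException {
            input.close();
        }
    }

    Each byte[] returned by readFrame() can be passed to the I420 push example in the next step.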

  4. Push video data to the SDK

    Before sending captured video frames to Video SDK, wrap your video data in a VideoFrame object. To ensure audio-video synchronization, best practice is to obtain the current Monotonic Time from Video SDK and pass it as the timestamp parameter of the VideoFrame.

    Information

    To ensure audio-video synchronization, set the timestamp parameter of VideoFrame to the system's Monotonic Time. Call getCurrentMonotonicTimeInMs to obtain the current Monotonic Time.

    Call pushExternalVideoFrameById [2/2] to push the captured video frames through the video track to Video SDK. Ensure that the videoTrackId matches the track ID you specified when joining the channel. Customize parameters like pixel format, data type, and timestamp in the VideoFrame.

    The following code samples demonstrate pushing I420, NV21, NV12, and Texture format video data:

    I420

    private void pushVideoFrameByI420(int trackId, byte[] yuv, int width, int height) {
        // Create an i420Buffer object and store the original YUV data in the buffer
        JavaI420Buffer i420Buffer = JavaI420Buffer.allocate(width, height);
        i420Buffer.getDataY().put(yuv, 0, i420Buffer.getDataY().limit());
        i420Buffer.getDataU().put(yuv, i420Buffer.getDataY().limit(), i420Buffer.getDataU().limit());
        i420Buffer.getDataV().put(yuv, i420Buffer.getDataY().limit() + i420Buffer.getDataU().limit(), i420Buffer.getDataV().limit());
        // Get the current monotonic time from the SDK
        long currentMonotonicTimeInMs = engine.getCurrentMonotonicTimeInMs();
        // Create a VideoFrame object, passing the I420 video frame to be pushed and its monotonic time (in nanoseconds)
        VideoFrame videoFrame = new VideoFrame(i420Buffer, 0, currentMonotonicTimeInMs * 1000000);

        // Push the video frame to the SDK through the video track
        int ret = engine.pushExternalVideoFrameById(videoFrame, trackId);
        // Release the memory resources occupied by the i420Buffer object
        i420Buffer.release();

        if (ret != Constants.ERR_OK) {
            Log.w(TAG, "pushExternalVideoFrame error");
        }
    }

    NV21

    private void pushVideoFrameByNV21(int trackId, byte[] nv21, int width, int height) {
        // Create a frameBuffer object and store the original YUV data in an NV21 format buffer
        VideoFrame.Buffer frameBuffer = new NV21Buffer(nv21, width, height, null);

        // Get the current monotonic time from the SDK
        long currentMonotonicTimeInMs = engine.getCurrentMonotonicTimeInMs();
        // Create a VideoFrame object, passing the NV21 video frame to be pushed and its monotonic time (in nanoseconds)
        VideoFrame videoFrame = new VideoFrame(frameBuffer, 0, currentMonotonicTimeInMs * 1000000);

        // Push the video frame to the SDK through the video track
        int ret = engine.pushExternalVideoFrameById(videoFrame, trackId);

        if (ret != Constants.ERR_OK) {
            Log.w(TAG, "pushExternalVideoFrame error");
        }
    }

    NV12

    private void pushVideoFrameByNV12(int trackId, ByteBuffer nv12, int width, int height) {
        // Create a frameBuffer object and store the original YUV data in an NV12 format buffer
        VideoFrame.Buffer frameBuffer = new NV12Buffer(width, height, width, height, nv12, null);

        // Get the current monotonic time from the SDK
        long currentMonotonicTimeInMs = engine.getCurrentMonotonicTimeInMs();
        // Create a VideoFrame object, passing the NV12 video frame to be pushed and its monotonic time (in nanoseconds)
        VideoFrame videoFrame = new VideoFrame(frameBuffer, 0, currentMonotonicTimeInMs * 1000000);

        // Push the video frame to the SDK through the video track
        int ret = engine.pushExternalVideoFrameById(videoFrame, trackId);

        if (ret != Constants.ERR_OK) {
            Log.w(TAG, "pushExternalVideoFrame error");
        }
    }

    Texture

    private void pushVideoFrameByTexture(int trackId, int textureId, VideoFrame.TextureBuffer.Type textureType, int width, int height) {
        // Create a frameBuffer object to store the texture format video frame
        VideoFrame.Buffer frameBuffer = new TextureBuffer(
                EglBaseProvider.getCurrentEglContext(),
                width,
                height,
                textureType,
                textureId,
                new Matrix(),
                null,
                null,
                null
        );
        // Get the current monotonic time from the SDK
        long currentMonotonicTimeInMs = engine.getCurrentMonotonicTimeInMs();
        // Create a VideoFrame object, passing the texture video frame to be pushed and its monotonic time (in nanoseconds)
        VideoFrame videoFrame = new VideoFrame(frameBuffer, 0, currentMonotonicTimeInMs * 1000000);

        // Push the video frame to the SDK through the video track
        int ret = engine.pushExternalVideoFrameById(videoFrame, trackId);

        if (ret != Constants.ERR_OK) {
            Log.w(TAG, "pushExternalVideoFrame error");
        }
    }

    Information

    If the captured custom video format is Texture and remote users experience flickering or distortion in the captured video, it is recommended to first duplicate the video data and then send both the original and duplicated video data back to the Video SDK. This helps eliminate anomalies during internal data encoding processes.
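
    One reading of this recommendation is to push an in-memory copy of the texture data. A minimal sketch, assuming your SDK version exposes VideoFrame.Buffer.toI420(); the helper method name is illustrative:

    private void pushDuplicatedTextureFrame(int trackId, VideoFrame textureFrame) {
        // Convert the texture buffer to an in-memory I420 copy
        VideoFrame.I420Buffer i420Copy = textureFrame.getBuffer().toI420();
        // Wrap the copy in a new VideoFrame, reusing the original rotation and timestamp
        VideoFrame copyFrame = new VideoFrame(i420Copy, textureFrame.getRotation(), textureFrame.getTimestampNs());
        // Push the duplicated frame to the SDK through the video track
        int ret = engine.pushExternalVideoFrameById(copyFrame, trackId);
        // Release the copy after the push
        i420Copy.release();
        if (ret != Constants.ERR_OK) {
            Log.w(TAG, "pushExternalVideoFrame error");
        }
    }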

  5. Destroy custom video tracks

    To stop custom video capture and destroy the video track, call destroyCustomVideoTrack. To destroy multiple video tracks, call the method for each track.


    // Destroy the custom video track
    engine.destroyCustomVideoTrack(videoTrackId);
    // Leave the channel
    engine.leaveChannelEx(connection);

Custom video rendering

To implement custom video rendering in your app, refer to the following steps:

  1. Set up the onCaptureVideoFrame or onRenderVideoFrame callback to obtain the video data to be played, as shown in the sketch after this list.
  2. Implement video rendering and playback yourself.
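
A minimal sketch of registering these callbacks, assuming the 4.x Android IVideoFrameObserver interface; the exact signatures and constant names may differ in your SDK version, so treat this as illustrative:

  IVideoFrameObserver observer = new IVideoFrameObserver() {
      @Override
      public boolean onCaptureVideoFrame(int sourceType, VideoFrame videoFrame) {
          // Hand the locally captured frame to your own renderer here
          return true; // true: let the SDK continue processing the frame
      }

      @Override
      public boolean onRenderVideoFrame(String channelId, int uid, VideoFrame videoFrame) {
          // Hand the remote frame to your own renderer here
          return true;
      }

      @Override
      public boolean onPreEncodeVideoFrame(int sourceType, VideoFrame videoFrame) {
          return true;
      }

      @Override
      public boolean onMediaPlayerVideoFrame(VideoFrame videoFrame, int mediaPlayerId) {
          return true;
      }

      @Override
      public int getVideoFrameProcessMode() {
          // Read-only: this observer does not modify the frame data
          return IVideoFrameObserver.PROCESS_MODE_READ_ONLY;
      }

      @Override
      public int getVideoFormatPreference() {
          return IVideoFrameObserver.VIDEO_PIXEL_DEFAULT;
      }

      @Override
      public boolean getRotationApplied() {
          return false;
      }

      @Override
      public boolean getMirrorApplied() {
          return false;
      }

      @Override
      public int getObservedFramePosition() {
          // Observe frames after capture and before rendering
          return IVideoFrameObserver.POSITION_POST_CAPTURER | IVideoFrameObserver.POSITION_PRE_RENDERER;
      }
  };
  engine.registerVideoFrameObserver(observer);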

Reference

This section contains additional information that completes this page, or points you to documentation that explains other aspects of this product.

Sample projects

Agora provides the following open-source sample projects for your reference. Download the project or view the source code for a more detailed example.

API reference

Video Calling