Alpha transparency effect

Portrait segmentation separates the broadcaster from their background, making it possible to change backgrounds and apply effects dynamically in video streams. In real-time audio and video interaction scenarios, applying portrait segmentation makes interactions more engaging, enhances immersion, and improves the overall experience.

Consider the following sample scenarios:

  • Broadcaster background replacement: The audience sees the broadcaster's background in the video replaced with a virtual scene, such as a gaming environment, a conference room, or a tourist attraction.

  • Animated virtual gifts: Display dynamic animations with a transparent background to avoid obscuring live content when multiple video streams are merged.

  • Chroma keying during live game streaming: The audience sees the broadcaster's image cropped and positioned within the local game screen, making it appear as though the broadcaster is part of the game.


Prerequisites

Ensure that you have implemented the SDK quickstart in your project.

Implement Alpha transparency

Choose one of the following methods to implement the Alpha transparency effect based on your specific business scenario.

Custom video capture scenario

The implementation process for this scenario is illustrated in the figure below:

Custom Video Capture Process 1

Take the following steps to implement this logic:

  1. Process the captured video frames and generate Alpha data. You can choose from the following methods:

    • Method 1: Call the pushExternalVideoFrame[2/2] method and set the alphaBuffer parameter to specify Alpha channel data for the video frames. This data matches the size of the video frames, with each pixel value ranging from 0 to 255, where 0 represents the background and 255 represents the foreground. A fuller sketch of generating this buffer appears at the end of this section.


      // Wrap the raw I420 planes in a buffer the SDK can consume
      JavaI420Buffer javaI420Buffer = JavaI420Buffer.wrap(width, height, dataY, width, dataU, strideUV, dataV, strideUV, null);
      VideoFrame frame = new VideoFrame(javaI420Buffer, 0, timestamp);
      // Allocate one Alpha byte per pixel (0 = background, 255 = foreground)
      ByteBuffer alphaBuffer = ByteBuffer.allocateDirect(width * height);
      // Fill alphaBuffer with your segmentation mask, then attach it to the frame
      frame.fillAlphaData(alphaBuffer);
      rtcEngine.pushExternalVideoFrame(frame);

    • Method 2: Call the pushExternalVideoFrame method and use the setAlphaStitchMode method in the VideoFrame class to set the Alpha stitching mode. Construct a VideoFrame with the stitched Alpha data.


      JavaI420Buffer javaI420Buffer = JavaI420Buffer.wrap(width, height, dataY, width, dataU, strideUV, dataV, strideUV, null);
      VideoFrame frame = new VideoFrame(javaI420Buffer, 0, timestamp);
      // Set the Alpha stitching mode; in this example, the Alpha data is stitched below the video image
      frame.setAlphaStitchMode(Constants.VIDEO_ALPHA_STITCH_BELOW);
      rtcEngine.pushExternalVideoFrame(frame);

  2. Render the view and implement the Alpha transparency effect.

    • Call the setupLocalVideo method to set up the local view and set the enableAlphaMask parameter to true to enable Alpha mask rendering.


      // Use a TextureView as the local view
      TextureView localView = new TextureView(context);
      // Enable transparent mode so the view background and content can contain transparent portions
      localView.setOpaque(false);
      VideoCanvas localCanvas = new VideoCanvas(localView, renderMode, uid);
      // Enable Alpha mask rendering
      localCanvas.enableAlphaMask = true;
      rtcEngine.setupLocalVideo(localCanvas);

    • Call the setupRemoteVideo method to set the view for displaying the remote video stream locally, and set the enableAlphaMask parameter to true to enable Alpha mask rendering.


      // The sender inputs Alpha data and enables Alpha transmission
      VideoEncoderConfiguration videoEncoderConfiguration = new VideoEncoderConfiguration(...);
      videoEncoderConfiguration.advanceOptions = new VideoEncoderConfiguration.AdvanceOptions(...);
      // Enable Alpha transmission when setting encoding parameters
      videoEncoderConfiguration.advanceOptions.encodeAlpha = true;
      rtcEngine.setVideoEncoderConfiguration(videoEncoderConfiguration);

      // Enable transparent mode and Alpha mask rendering at the receiving end
      TextureView remoteView = new TextureView(context);
      remoteView.setOpaque(false);
      VideoCanvas remoteCanvas = new VideoCanvas(remoteView, renderMode, uid);
      remoteCanvas.enableAlphaMask = true;
      rtcEngine.setupRemoteVideo(remoteCanvas);
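
To make Method 1 concrete, the following sketch shows one way the pieces fit together on the capture side. It is a minimal example under stated assumptions, not a definitive implementation: width, height, the I420 planes (dataY, dataU, dataV, strideUV), timestamp, and rtcEngine come from your capture pipeline; maskData is a hypothetical byte[] foreground mask produced by your own segmentation step; the import package names assume the 4.x Android SDK.

    import java.nio.ByteBuffer;
    import io.agora.base.JavaI420Buffer;
    import io.agora.base.VideoFrame;

    // Wrap the captured I420 planes
    JavaI420Buffer javaI420Buffer = JavaI420Buffer.wrap(
            width, height, dataY, width, dataU, strideUV, dataV, strideUV, null);
    VideoFrame frame = new VideoFrame(javaI420Buffer, 0, timestamp);

    // One Alpha byte per pixel: 0 = background (transparent), 255 = foreground
    ByteBuffer alphaBuffer = ByteBuffer.allocateDirect(width * height);
    for (int i = 0; i < width * height; i++) {
        // maskData is a hypothetical per-pixel mask from your segmentation step
        alphaBuffer.put(maskData[i] != 0 ? (byte) 255 : (byte) 0);
    }
    alphaBuffer.rewind();

    // Attach the Alpha data and push the frame to the SDK
    frame.fillAlphaData(alphaBuffer);
    rtcEngine.pushExternalVideoFrame(frame);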

SDK capture scenario

The implementation process for this scenario is illustrated in the following figure:

Custom Video Capture Process 2

Take the following steps to implement this logic:

  1. On the broadcasting end, call the enableVirtualBackground[2/2] method to enable the background segmentation algorithm and obtain the Alpha data for the portrait area. Set the parameters as follows:

    • enabled: Set to true to enable the virtual background.
    • backgroundSourceType: Set to BACKGROUND_NONE(0) to segment the portrait from the background and process the background as Alpha data.

    VirtualBackgroundSource virtualBackgroundSource = new VirtualBackgroundSource(...);
    // Only generate Alpha data, do not replace the background
    virtualBackgroundSource.backgroundSourceType = VirtualBackgroundSource.BACKGROUND_NONE;
    SegmentationProperty segmentationProperty = new SegmentationProperty(...);
    rtcEngine.enableVirtualBackground(true, virtualBackgroundSource, segmentationProperty, sourceType);

  2. Render the view and implement the Alpha transparency effect. For details, see the steps in the custom video capture scenario above; a condensed broadcaster-side sketch follows below.
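
Putting this together, the sketch below condenses the broadcaster side of the SDK capture scenario: segmentation runs with BACKGROUND_NONE so the SDK only generates Alpha data, Alpha transmission is enabled in the encoder configuration, and the local preview renders with an Alpha mask. The constructor arguments and the SEG_MODEL_AI and PRIMARY_CAMERA_SOURCE values are assumptions based on the 4.x Android API; verify them against the API reference for your SDK version.

    // Enable segmentation and generate Alpha data only (no background replacement).
    // Constructor arguments here are assumptions; verify them for your SDK version.
    VirtualBackgroundSource virtualBackgroundSource =
            new VirtualBackgroundSource(VirtualBackgroundSource.BACKGROUND_NONE, 0, null, 0);
    SegmentationProperty segmentationProperty =
            new SegmentationProperty(SegmentationProperty.SEG_MODEL_AI, 0.5f);
    rtcEngine.enableVirtualBackground(true, virtualBackgroundSource, segmentationProperty,
            Constants.MediaSourceType.PRIMARY_CAMERA_SOURCE);

    // Enable Alpha transmission in the encoder configuration
    VideoEncoderConfiguration config = new VideoEncoderConfiguration();
    config.advanceOptions = new VideoEncoderConfiguration.AdvanceOptions(); // no-arg constructor assumed
    config.advanceOptions.encodeAlpha = true;
    rtcEngine.setVideoEncoderConfiguration(config);

    // Render the local preview with a transparent TextureView and an Alpha mask
    TextureView localView = new TextureView(context);
    localView.setOpaque(false);
    VideoCanvas localCanvas = new VideoCanvas(localView, renderMode, uid);
    localCanvas.enableAlphaMask = true;
    rtcEngine.setupLocalVideo(localCanvas);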

Raw video data scenario

The implementation process for this scenario is illustrated in the following figure:

Custom Video Capture Process 3

Take the following steps to implement this logic:

  1. Call the registerVideoFrameObserver method to register a raw video frame observer and implement the corresponding callbacks as required.


    // Implement the raw video frame observer
    public class MyVideoFrameObserver implements IVideoFrameObserver {
        @Override
        public boolean onRenderVideoFrame(String channelId, int uid, VideoFrame videoFrame) {
            // ...
            return false;
        }

        @Override
        public boolean onCaptureVideoFrame(int type, VideoFrame videoFrame) {
            // ...
            return false;
        }

        @Override
        public boolean onPreEncodeVideoFrame(int type, VideoFrame videoFrame) {
            // ...
            return false;
        }
    }

    // Register the observer
    MyVideoFrameObserver observer = new MyVideoFrameObserver();
    rtcEngine.registerVideoFrameObserver(observer);

  2. Use the onCaptureVideoFrame callback to obtain the captured video data and pre-process it as needed. You can modify the Alpha data or add Alpha data directly; a worked sketch follows these steps.


    @Override
    public boolean onCaptureVideoFrame(int type, VideoFrame videoFrame) {
        // Modify the Alpha data, or add Alpha data directly
        ByteBuffer alphaBuffer = videoFrame.getAlphaBuffer();
        // ...
        videoFrame.fillAlphaData(alphaBuffer);
        return false;
    }

  3. Use the onPreEncodeVideoFrame callback to obtain the local video data before encoding, and modify or directly add Alpha data as needed.


    @Override
    public boolean onPreEncodeVideoFrame(int type, VideoFrame videoFrame) {
        // Modify the Alpha data, or add Alpha data directly
        ByteBuffer alphaBuffer = videoFrame.getAlphaBuffer();
        // ...
        videoFrame.fillAlphaData(alphaBuffer);
        return false;
    }

  4. Use the onRenderVideoFrame callback to obtain the remote video data before rendering it locally. Modify the Alpha data, add Alpha data directly, or render the video image yourself based on the obtained Alpha data.


    @Override
    public boolean onRenderVideoFrame(String channelId, int uid, VideoFrame videoFrame) {
        // Modify the Alpha data, add Alpha data directly, or render the video image yourself based on the obtained Alpha data
        ByteBuffer alphaBuffer = videoFrame.getAlphaBuffer();
        // ...
        videoFrame.fillAlphaData(alphaBuffer);
        return false;
    }
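
As an example of the kind of processing these callbacks allow, the sketch below binarizes the Alpha channel inside onCaptureVideoFrame, snapping every pixel to fully transparent or fully opaque. The threshold value is arbitrary, and the snippet assumes the frame already carries Alpha data (for example, generated by enableVirtualBackground in the SDK capture scenario).

    @Override
    public boolean onCaptureVideoFrame(int type, VideoFrame videoFrame) {
        ByteBuffer alphaBuffer = videoFrame.getAlphaBuffer();
        if (alphaBuffer != null) {
            final int threshold = 128; // arbitrary cutoff; tune for your content
            for (int i = 0; i < alphaBuffer.capacity(); i++) {
                int alpha = alphaBuffer.get(i) & 0xFF;
                // Snap each pixel to fully transparent (0) or fully opaque (255)
                alphaBuffer.put(i, alpha < threshold ? (byte) 0 : (byte) 255);
            }
            videoFrame.fillAlphaData(alphaBuffer);
        }
        return false;
    }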

Reference

This section contains content that completes the information on this page, or points you to documentation that explains other aspects of this product.

Interactive Live Streaming