Alpha transparency effect
Portrait segmentation involves separating the broadcaster from their background to facilitate dynamic background changes and effects in video streams. In various real-time audio and video interaction scenarios, applying portrait segmentation makes interactions more engaging, enhances immersion, and improves the overall experience.
Consider the following sample scenarios:
- Broadcaster background replacement: The audience sees the broadcaster's background in the video replaced with a virtual scene, such as a gaming environment, a conference, or a tourist attraction.
- Animated virtual gifts: Display dynamic animations with a transparent background to avoid obscuring live content when multiple video streams are merged.
- Chroma keying during live game streaming: The audience sees the broadcaster's image cropped and positioned within the local game screen, making it appear as though the broadcaster is part of the game.
Prerequisites
Ensure that you have implemented the SDK quickstart in your project.
Implement Alpha transparency
Choose one of the following methods to implement the Alpha transparency effect based on your specific business scenario.
Custom video capture scenario
The implementation process for this scenario is illustrated in the figure below:
Take the following steps to implement this logic:
- Process the captured video frames and generate Alpha data. You can choose from the following methods:
  - Method 1: Call the pushExternalVideoFrame [2/2] method and set the alphaBuffer parameter to specify Alpha channel data for the video frames. This data matches the size of the video frames, with each pixel value ranging from 0 to 255, where 0 represents the background and 255 represents the foreground.
  - Method 2: Call the pushExternalVideoFrame method and use the setAlphaStitchMode method in the VideoFrame class to set the Alpha stitching mode, then construct a VideoFrame with the stitched Alpha data.
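For Method 1, the Alpha data is simply one byte per pixel. The helper below is a minimal, SDK-free sketch of building such a buffer from a segmentation result; the fromMask name and the boolean-mask input are illustrative assumptions, not part of the SDK:

```java
import java.nio.ByteBuffer;

// Build the per-pixel Alpha data expected by the alphaBuffer parameter:
// one byte per pixel, 0 = background, 255 = foreground. The boolean mask
// here stands in for the output of whatever segmentation step you use.
public class AlphaMask {
    public static ByteBuffer fromMask(boolean[] foreground, int width, int height) {
        ByteBuffer alpha = ByteBuffer.allocateDirect(width * height);
        for (int i = 0; i < width * height; i++) {
            alpha.put(i, foreground[i] ? (byte) 0xFF : (byte) 0x00);
        }
        return alpha;
    }

    public static void main(String[] args) {
        // A 2x2 frame whose top-left and bottom-right pixels are foreground.
        boolean[] mask = {true, false, false, true};
        ByteBuffer alpha = fromMask(mask, 2, 2);
        System.out.println((alpha.get(0) & 0xFF) + "," + (alpha.get(1) & 0xFF) + "," + (alpha.get(3) & 0xFF));
    }
}
```

A buffer like this, sized exactly width × height, is what you would attach to the frame you push to the SDK.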
- Render the view and implement the Alpha transparency effect:
  - Call the setupLocalVideo method to set up the local view, and set the enableAlphaMask parameter to true to enable Alpha mask rendering.
  - Call the setupRemoteVideo method to set the view for displaying the remote video stream locally, and set the enableAlphaMask parameter to true to enable Alpha mask rendering.
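Conceptually, Alpha mask rendering composites each foreground pixel over whatever sits behind the view, weighted by the pixel's Alpha value. The standard per-channel blend is out = (fg × a + bg × (255 − a)) / 255; a runnable sketch of that math (not SDK code):

```java
public class AlphaBlend {
    // Blend one 8-bit color channel of a foreground pixel over a background
    // pixel. alpha = 255 keeps the foreground; alpha = 0 shows the background.
    static int blend(int fg, int bg, int alpha) {
        return (fg * alpha + bg * (255 - alpha)) / 255;
    }

    public static void main(String[] args) {
        System.out.println(blend(200, 40, 255)); // fully opaque foreground
        System.out.println(blend(200, 40, 0));   // fully transparent: background shows through
        System.out.println(blend(200, 40, 128)); // partial blend, e.g. at a mask edge
    }
}
```

This is why 0-valued Alpha pixels (the segmented background) disappear entirely when the stream is merged with other content.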
SDK capture scenario
The implementation process for this scenario is illustrated in the following figure:
Take the following steps to implement this logic:
- On the broadcasting end, call the enableVirtualBackground [2/2] method to enable the background segmentation algorithm and obtain the Alpha data for the portrait area. Set the parameters as follows:
  - enabled: Set to true to enable the virtual background.
  - backgroundSourceType: Set to BACKGROUND_NONE (0) to segment the portrait and background, and process the background as Alpha data.
- Render the view and implement the Alpha transparency effect. See the steps in the custom video capture scenario for details.
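The parameter setup above might look like the following configuration fragment. This is a sketch only: it assumes the Android SDK's VirtualBackgroundSource and SegmentationProperty classes and an rtcEngine instance from the quickstart; verify the exact overload and constant names against your SDK version.

```java
// Configuration sketch: enable background segmentation so the SDK emits
// the segmented background as Alpha data rather than replacing it.
VirtualBackgroundSource source = new VirtualBackgroundSource();
// BACKGROUND_NONE (0): segment portrait and background, keep background as Alpha data.
source.backgroundSourceType = VirtualBackgroundSource.BACKGROUND_NONE;

SegmentationProperty property = new SegmentationProperty();

// enabled = true turns the virtual background (and segmentation) on.
rtcEngine.enableVirtualBackground(true, source, property);
```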
Raw video data scenario
The implementation process for this scenario is illustrated in the following figure:
Take the following steps to implement this logic:
- Call the registerVideoFrameObserver method to register a raw video frame observer and implement the corresponding callbacks as required.
- Use the onCaptureVideoFrame callback to obtain the captured video data and pre-process it as needed. You can modify the Alpha data or directly add Alpha data.
- Use the onPreEncodeVideoFrame callback to obtain the local video data before encoding, and modify or directly add Alpha data as needed.
- Use the onRenderVideoFrame callback to obtain the remote video data before it is rendered locally. You can modify the Alpha data, add Alpha data directly, or render the video image yourself based on the obtained Alpha data.
Reference
This section contains content that completes the information on this page, or points you to documentation that explains other aspects of this product.