MetaKit

The MetaKit extension is an innovative product designed to enhance interactive video experiences. By integrating multiple advanced AI technologies, it provides users with creative and personalized video enhancement functions.

MetaKit adds rich interactive video effects that you can choose flexibly, according to your specific requirements:

  • Social entertainment: Enhance social entertainment and live broadcasts with features like Animoji and portrait edge flames, providing more creativity and personalization for hosts.
  • Online education: Create a more vivid and engaging teaching environment with 360-degree backgrounds to enhance students' interest in learning.
  • Online conferences: Use 3D lighting to create a presentation environment comparable to professional effects, enhancing the visual impact of your presentations.

Understand the tech

The MetaKit extension includes the following key functions:

| Function | Description |
|---|---|
| Virtual human | Easily generate virtual characters and create unique virtual images with custom options like face pinching and dress-up. Capture user expressions in real time and render them back on the virtual image to enhance interaction. |
| Animoji | Apply various Animoji effects to portraits in real time using AR and face capture technology. Show real-time changes in head dynamics and expressions to display a unique personality. |
| Lighting | Provide users with precise and efficient light and shadow effects, including 3D light (one light with a customizable motion trajectory), atmosphere light (simulating multiple real light effects with a fixed motion trajectory), advertising light, and other modes. Intelligent light and shadow control lets users experience more realistic effects in a virtual environment. |
| Atmospheric effects | Create an artistic atmosphere using lighting effects, including portrait edge flames, aurora, ripples, and other modes. |
| 360 Background | Provide users with customized panoramic virtual background effects. |
Info

The MetaKit extension offers an open art ecosystem, supporting one-click import of Animoji and avatar images created according to Agora art standards. This provides users with more flexible creation and integration options.

To use this feature, contact technical support.


This page explains how to integrate the MetaKit extension into your project to use the virtual human, Animoji, lighting effects, and 360 background functions.

Prerequisites

To follow this procedure, you must have:

  • Integrated v4.2.x or v4.3.x of the Video SDK and implemented basic real-time audio and video functions in your app. See the SDK quickstart.
    Info
    • When integrating through Maven Central, specify io.agora.rtc:full-sdk:x.y.z and replace x.y.z with the specific SDK version number.
    • The MetaKit extension uses the Face Capture extension (libagora_face_capture_extension.so) and the Virtual Background extension (libagora_segmentation_extension.so). You can delete unnecessary extensions as needed to reduce the size of the app.
  • Android Studio v4.2 or above.

  • An Android device model produced in 2019 or later, to ensure that the front camera and microphone are functioning properly.

  • A computer that can access the Internet. If your network environment has a firewall deployed, refer to Firewall requirements to use the Agora services normally.

Project setup

To implement MetaKit effects in your app, open the SDK quickstart for Video Calling project and take the steps described below.

Integrate the extension

To integrate the MetaKit extension, take the following steps:

  1. Download and unzip the MetaKit Android extension.

  2. Open the folder and copy the files under /sdk to the corresponding project paths:

    | Library | Function | Integration path |
    |---|---|---|
    | AgoraMetaKit.aar | Rendering runtime layer | /app/libs |
    | metakit.jar | Wrapper layer Java package | /app/libs |
    | libagora_metakit_extension.so | Wrapper layer | /app/src/main/jniLibs/arm64-v8a or /app/src/main/jniLibs/armeabi-v7a |
  3. In the project's /app directory, add dependencies for all .jar and .aar files located under the libs path in the dependencies section of the build.gradle file.


```groovy
implementation fileTree(dir: 'libs', include: ['*.jar', '*.aar'])
```

Configure MetaKit

To configure the extension, take the following steps:

  1. Open the folder of the MetaKit extension for Android. The /assets/DefaultPackage path contains the Bundle file resources required for different scenes and functions. The table below lists the resource name, purpose, and size:

    | Name | Required/Optional | Usage | Size |
    |---|---|---|---|
    | Base | Required | Basic scene resources. Each functional module is built on this scene resource, which includes related resources that support the hot update function. | 2.38 MB |
    | Avatar | Function-specific | Virtual human model subpackage resources, including virtual human images such as girl and huamulan. Supports face capture, face pinching, and dress-up capabilities. | girl: 14.8 MB<br>huamulan: 3.2 MB (does not support face pinching and dress-up) |
    | AvatarAnimoji | Function-specific | Animoji model subpackage resources, including Animoji images such as dog, girlhead, and arkit. Supports face capture. | dog: 1.4 MB<br>girlhead: 954 KB<br>arkit: 44 KB |
    | AREffect | Function-specific | Lighting effects and 360 background subpackage resources, including 3D lighting, atmosphere lighting, advertising lighting, screen ripples, aurora effects, portrait edge flames, and other effects. | 3.97 MB |
  2. Combine the basic resources (Base) and the subpackage resources (Avatar, AvatarAnimoji, and AREffect) of specific functional modules into a complete resource package to experience the corresponding functional module. The functional modules and their corresponding resource package combinations are shown in the following table:

    | Functional module | Resource package combination |
    |---|---|
    | Virtual human | Base + Avatar |
    | Animoji | Base + AvatarAnimoji |
    | Lighting effects | Base + AREffect |
    | 360 Background | Base + AREffect |
  3. To experience the virtual human and 360 background features, combine the Base, Avatar, and AREffect resources into a single directory, as shown below. After preparing the resource directory, place it in the SD card directory of the mobile device, such as /sdcard/metaAssets/15. When loading scene resources, set the absolute path of the resource directory to MetaKit.

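    For illustration, a merged resource directory might look like the following. This layout is an assumption based on the scenePath note later in this guide (the bundles live under a DefaultPackage folder); the actual folder contents come from the downloaded resource package:

```
/sdcard/metaAssets/15
└── DefaultPackage
    ├── (Base resources)
    ├── (Avatar subpackage: girl, huamulan)
    └── (AREffect subpackage: lighting effects and 360 background)
```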

Handle Android permissions

To request the required permissions, take the following steps:

  1. Navigate to the project's /app/src/main directory and add the following permissions to the AndroidManifest.xml file:


```xml
<!-- Required permissions -->
<uses-permission android:name="android.permission.INTERNET"/>

<!-- Optional permissions -->
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.BLUETOOTH"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>

<!-- For Android 12.0 and above, also add the following permissions -->
<uses-permission android:name="android.permission.READ_PHONE_STATE"/>
<uses-permission android:name="android.permission.BLUETOOTH_SCAN"/>
```

    The MetaKit extension primarily uses the following Android system permissions:

    | Permission | Function | Description |
    |---|---|---|
    | CAMERA | Access the phone's camera. | Functions such as expression driving and background segmentation require camera access for AI inference. |
    | INTERNET | Access the network. | Authorizes the AI module when the extension is enabled. |
    | READ_EXTERNAL_STORAGE | Read external storage. | Reads the Bundle resource files from the SD card. |
    | WRITE_EXTERNAL_STORAGE | Write to external storage. | Records SDK-related log files. |

  2. Android 6.0 and later versions enforce stricter permission management. Besides declaring permissions statically in AndroidManifest.xml, certain permissions must also be requested dynamically within the application's business logic. Here's an example of how this can be implemented:


```java
// Obtain the permissions required for real-time audio and video interaction
private String[] getRequiredPermissions() {
    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.S) {
        // Permissions required for Android 12 (S) and above
        return new String[]{
                Manifest.permission.RECORD_AUDIO,           // Audio recording permission
                Manifest.permission.CAMERA,                 // Camera permission
                Manifest.permission.READ_PHONE_STATE,       // Read phone state permission
                Manifest.permission.READ_EXTERNAL_STORAGE,  // Read external storage permission
                Manifest.permission.WRITE_EXTERNAL_STORAGE  // Write external storage permission
        };
    } else {
        // Permissions required for Android 11 (R) and below
        return new String[]{
                Manifest.permission.RECORD_AUDIO,
                Manifest.permission.CAMERA,
                Manifest.permission.READ_EXTERNAL_STORAGE,
                Manifest.permission.WRITE_EXTERNAL_STORAGE
        };
    }
}

// Check whether the app has been granted all required permissions
private boolean checkPermissions() {
    for (String permission : getRequiredPermissions()) {
        int permissionCheck = ContextCompat.checkSelfPermission(this, permission);
        if (permissionCheck != PackageManager.PERMISSION_GRANTED) {
            return false;
        }
    }
    return true;
}
```
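    If any permission has not been granted, request it at runtime before enabling the extensions. The following is a minimal sketch using ActivityCompat; the request code PERMISSION_REQ_CODE is an arbitrary value chosen for this example:

```java
private static final int PERMISSION_REQ_CODE = 1001; // Arbitrary request code for this example

// Request any permissions that are not yet granted
private void requestPermissionsIfNeeded() {
    if (!checkPermissions()) {
        ActivityCompat.requestPermissions(this, getRequiredPermissions(), PERMISSION_REQ_CODE);
    }
}

// Handle the user's response to the permission dialog
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions,
                                       @NonNull int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == PERMISSION_REQ_CODE && checkPermissions()) {
        // All permissions granted; it is now safe to initialize the engine and extensions
    }
}
```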

Select architecture

The MetaKit extension currently supports the arm64-v8a and armeabi-v7a architectures. To optimize the app size, it's advisable to select only the necessary architecture during integration. Here's an example of how this can be implemented:


```groovy
ndk {
    abiFilters "arm64-v8a"
}
```

Implement the logic

Once the project configuration is complete, follow these steps to explore the various functional modules of the MetaKit extension:

Listen to extension events

When calling create to initialize RtcEngine, perform the following configuration in RtcEngineConfig:

  1. Call addExtension to add the Face Capture extension (agora_face_capture_extension) and the MetaKit extension (agora_metakit_extension). Then implement the event callback interface IMediaExtensionObserver for extensions and register it to receive extension event callbacks such as onEvent.


```java
// Configure RtcEngineConfig
RtcEngineConfig config = new RtcEngineConfig();
config.mContext = getBaseContext();
config.mAppId = appId;
config.mEventHandler = mRtcEventHandler;

// Add the Face Capture extension
config.addExtension("agora_face_capture_extension");

// Add the MetaKit extension
config.addExtension("agora_metakit_extension");

// Create the event callback interface class for extensions and register callbacks for extension events such as onEvent
config.mExtensionObserver = new IMediaExtensionObserver() {
    @Override
    public void onEvent(String provider, String extension, String key, String value) {
        // Implementation of the onEvent callback
    }

    @Override
    public void onStarted(String provider, String extension) {
        // Implementation of the onStarted callback
    }

    @Override
    public void onStopped(String provider, String extension) {
        // Implementation of the onStopped callback
    }

    @Override
    public void onError(String provider, String extension, int error, String message) {
        // Implementation of the onError callback
    }
};

// Create and initialize RtcEngine
mRtcEngine = RtcEngine.create(config);
```

  2. In the callback, specify provider as agora_video_filters_metakit and extension as metakit to filter events from the MetaKit extension. The onEvent event transmits engine status events transparently, such as unityLoadFinish (Unity environment loading completed) and loadSceneResp (scene resource loading completed).


```java
public void onEvent(String provider, String ext, String key, String msg) {
    // Filter events from the MetaKit extension
    if (!provider.equals("agora_video_filters_metakit") || !ext.equals("metakit")) return;

    // Log event details
    Log.i(TAG, "metakitx onEvent: " + key + ", msg: " + msg);

    // Handle the different event keys
    switch (key) {
        case "initializeFinish":
            runningState = IMetaRunningState.initialized;
            break;
        // Unity environment loaded
        case "unityLoadFinish":
            runningState = IMetaRunningState.unityLoaded;
            Log.d(TAG, "metakitx to enter scene");
            enterScene();
            break;
        // Scene resources loaded
        case "loadSceneResp":
            Log.d(TAG, "metakitx receive loadSceneResp");
            runningState = IMetaRunningState.sceneLoaded;
            setMetaFeatureMode(curFeatureType);
            break;
        case "addSceneViewResp":
            runningState = IMetaRunningState.sceneViewLoaded;
            // If special effects are set, configure the background and effects
            if (setSpecialEffect) {
                setMetaBGMode(BackgroundType.BGTypePano);
                configMetaBackgroundEffectMode(curSpecialEffectType, true);
            }
            break;
        case "unloadSceneResp":
            runningState = IMetaRunningState.sceneUnloaded;
            // Perform scene cleanup if necessary
            // destroyScene();
            break;
    }
    isSyncing = false;
}

public void onError(String provider, String ext, int key, String msg) {
    // Filter errors from the MetaKit extension
    if (!provider.equals("agora_video_filters_metakit") || !ext.equals("metakit")) return;

    // Log error details
    Log.i("[MetaKit]", "onError: " + key + ", msg: " + msg);
}

public void onStarted(String provider, String ext) {
    // Filter start events from the MetaKit extension
    if (!provider.equals("agora_video_filters_metakit") || !ext.equals("metakit")) return;

    // Log the start event
    Log.i("[MetaKit]", "onStarted");
}

public void onStopped(String provider, String ext) {
    // Filter stop events from the MetaKit extension
    if (!provider.equals("agora_video_filters_metakit") || !ext.equals("metakit")) return;

    // Log the stop event
    Log.i("[MetaKit]", "onStopped");
}
```

Enable extensions

Before enabling the MetaKit extension, ensure that both the Face Capture extension and the Virtual Background extension are enabled.

Enable the Face Capture extension

To enable the Face Capture extension, follow these steps:

  1. Call registerExtension and enableExtension with the provider name agora_video_filters_face_capture and the extension name face_capture.


```java
// Register the Face Capture extension
mRtcEngine.registerExtension("agora_video_filters_face_capture", "face_capture",
        Constants.MediaSourceType.PRIMARY_CAMERA_SOURCE);

// Enable the Face Capture extension
mRtcEngine.enableExtension("agora_video_filters_face_capture", "face_capture", true);
```

  2. Call setExtensionProperty to authenticate and authorize the extension. Use authentication_information as the key, and a value containing the company name (company_id) and the face capture certificate (license).


```java
mRtcEngine.setExtensionProperty("agora_video_filters_face_capture", "face_capture",
        "authentication_information",
        "{\"company_id\":\"agoraDemo\"," +
        "\"license\":\"" +
        "xxxxxxxxxx" + "\"}",
        Constants.MediaSourceType.PRIMARY_CAMERA_SOURCE);
```

    Info

    Contact Agora to obtain the company name and certificate.

Enable the Virtual Background extension

To enable the Virtual Background extension, take the following steps:

  1. Call setParameters to set "rtc.video.seg_before_exts" to true:


```java
mRtcEngine.setParameters("{\"rtc.video.seg_before_exts\":true}");
```

  2. Call enableVirtualBackground with the following configurations:

    • Set backgroundSourceType to 0 to process the background into alpha information, separating the portrait from the background.
    • Set modelType to 1 to select background processing suitable for all scenes.

```java
VirtualBackgroundSource source = new VirtualBackgroundSource();
// Set backgroundSourceType to 0 to process the background into alpha information and separate the portrait from the background
source.backgroundSourceType = 0;
source.color = 0xFFFFFF;
source.source = "";
source.blurDegree = 1;

SegmentationProperty param = new SegmentationProperty();
// Set modelType to 1 to select background processing suitable for all scenes
param.modelType = 1;
param.greenCapacity = 0.5f;

// Enable the Virtual Background extension
mRtcEngine.enableVirtualBackground(true, source, param, Constants.MediaSourceType.PRIMARY_CAMERA_SOURCE);
```

Enable the MetaKit extension

To enable the MetaKit extension, follow these steps:

  1. Call registerExtension with the service provider name agora_video_filters_metakit and the extension name metakit.


```java
mRtcEngine.registerExtension("agora_video_filters_metakit", "metakit", Constants.MediaSourceType.PRIMARY_CAMERA_SOURCE);
```

  2. Call enableExtension with the same service provider name and extension name.


```java
mRtcEngine.enableExtension("agora_video_filters_metakit", "metakit", true);
```

Initialize MetaKit

  1. To set the Android activity context for starting the rendering engine, call setExtensionProperty with the following parameters:

    • key: setActivityContext
    • value: The activity context address

```java
Activity mActivity;

JSONObject valueObj = new JSONObject();
try {
    long address = getContextHandler(mActivity);
    valueObj.put("activityContext", String.valueOf(address));
} catch (JSONException e) {
    e.printStackTrace();
}

mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "setActivityContext", valueObj.toString());
```

  2. To initialize the MetaKit extension, call setExtensionProperty with the following parameters:

    • key: initialize
    • value: an empty string

```java
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "initialize", "{}");
```

Load scene resources

  1. When the onEvent callback captures the unityLoadFinish event, it indicates that the environment has been loaded. At this point, you can call setExtensionProperty to load the MetaKit scene resources. Use the following parameters:

    • key: loadScene
    • value: A string containing relevant information about the scene resources

```java
JSONObject valueObj = new JSONObject();
try {
    JSONObject sceneObj = new JSONObject();
    // Set the path of the scene resources on the phone
    // If the resources are stored at /first/second/DefaultPackage/, specify only /first/second in scenePath
    sceneObj.put("scenePath", "/sdcard/metaAssets/15");

    JSONObject customObj = new JSONObject();
    // Set the scene index to 0
    customObj.put("sceneIndex", 0);

    valueObj.put("sceneInfo", sceneObj);
    valueObj.put("assetManifest", "");
    valueObj.put("userId", "123456");
    valueObj.put("extraCustomInfo", customObj.toString());
} catch (JSONException e) {
    e.printStackTrace();
}

// Load the scene resources based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "loadScene", valueObj.toString());
```

  2. When the onEvent callback captures the loadSceneResp event, it indicates that the scene resources have been loaded. You can then follow the steps below to experience the virtual human, Animoji, lighting effects, and 360 background modules.



Use the avatar effect

  1. Call setExtensionProperty to request a texture and render the virtual human scene. Set key to requestTexture and value to the scene configuration information. To experience the virtual human feature, set avatarMode to 0 for the virtual human scene mode and set avatar to your desired virtual human image, such as girl or huamulan.

    Note

    In addition to the default avatars, girl and huamulan, the Agora MetaKit extension offers an open artistic ecosystem. It supports one-click import of virtual human models created according to Agora's art standards, providing users with more flexible creation and integration options. Contact Agora technical support to use this feature.


```java
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("index", 0);     // Texture index, currently only 0 is supported
    valueObj.put("enable", true); // Enable the texture request

    JSONObject configObj = new JSONObject();
    configObj.put("width", 640);
    configObj.put("height", 480);

    JSONObject extraObj = new JSONObject();
    extraObj.put("sceneIndex", 0);      // Scene index, currently only 0 is supported
    extraObj.put("avatarMode", 0);      // Set the scene mode to 0, which is virtual human mode
    extraObj.put("avatar", "huamulan"); // Set the virtual human image to "huamulan"
    extraObj.put("userId", "123");
    configObj.put("extraInfo", extraObj.toString());

    valueObj.put("config", configObj);
} catch (JSONException e) {
    e.printStackTrace();
}

// Render the virtual human scene based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "requestTexture", valueObj.toString());
```

After the scene rendering is complete, a blendshape-driven virtual human is displayed. It captures your facial expressions in real time, mirrors them on its face, and follows your head movements.

  2. Call setExtensionProperty to perform face-pinching operations on the virtual human. Set key to updateFace; the value supports passing multiple sets of resource IDs for face-pinching parts and their corresponding adjustment ranges. See Face-pinching resources for details.


```java
JSONObject valueObj = new JSONObject();
try {
    JSONObject configObj = new JSONObject();
    configObj.put("key", "bsname"); // The key is the resource ID of the face-pinching part
    configObj.put("value", 30);     // The value is the adjustment intensity, in the range [0, 100], with a default of 50
    valueObj.put("value", configObj);
} catch (JSONException e) {
    e.printStackTrace();
}

// Perform face-pinching operations based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "updateFace", valueObj.toString());
```

  3. Call setExtensionProperty to perform dress-up operations on the virtual human. Set key to updateDress; the value supports passing an array of integers containing resource IDs for multiple dress-up parts. See Dress-up resources for details.


```java
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("id", "[10002]"); // Set id to an integer array containing one or more resource IDs
} catch (JSONException e) {
    e.printStackTrace();
}

// Perform dress-up operations based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "updateDress", valueObj.toString());
```

    Additionally, MetaKit supports switching the virtual human's appearance and perspective. For more details, refer to the virtual human key-value description.

Use the Animoji effect

Call setExtensionProperty to request a texture and render the Animoji scene. Set key to requestTexture and value to the scene configuration information. To experience the Animoji function, set avatarMode to 1 for Animoji scene mode, and set avatar to the Animoji image you want to use, such as dog, girlhead, or arkit.

Info

In addition to the available Animoji images (dog, girlhead, and arkit), the Agora MetaKit extension provides an open art ecosystem. It supports one-click import of Animoji images created according to Agora's art standards, offering users more flexible creation and integration options. Contact Agora technical support to use this feature.


```java
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("index", 0);     // Texture index, currently only 0 is supported
    valueObj.put("enable", true); // Enable the texture request

    JSONObject configObj = new JSONObject();
    configObj.put("width", 640);
    configObj.put("height", 480);

    JSONObject extraObj = new JSONObject();
    extraObj.put("sceneIndex", 0); // Scene index, currently only 0 is supported
    extraObj.put("avatarMode", 1); // Set the scene mode to 1, which is Animoji mode
    extraObj.put("avatar", "dog"); // Set the Animoji image to "dog"
    extraObj.put("userId", "123");
    configObj.put("extraInfo", extraObj.toString());

    valueObj.put("config", configObj);
} catch (JSONException e) {
    e.printStackTrace();
}

// Render the Animoji scene based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "requestTexture", valueObj.toString());
```

Use the sticker effect

Call setExtensionProperty to load the sticker material and render the sticker scene. Set key to loadMaterial and value to the material configuration. Specify the resource name corresponding to the sticker you want to use, for example, material_sticker_glass for glasses.

Info

In addition to the already available stickers veil, glass, facemask, and dragonhat, the Agora MetaKit extension provides an open art ecosystem and supports one-click import of sticker images created according to Agora's art standards. This offers users more flexible creation and integration options. Contact Agora technical support to use this feature.


```java
long addressHandle = 0; // Address handle of the render view

JSONObject valueObj = new JSONObject();
try {
    valueObj.put("view", String.valueOf(addressHandle));
    valueObj.put("path", path_to_material_sticker_glass);
} catch (JSONException e) {
    e.printStackTrace();
}
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "loadMaterial", valueObj.toString());
```

When the onEvent callback captures the materialLoaded event, the sticker material has been loaded. A glasses sticker covering the eyes is then displayed in the view, following your head movements.

Apply lighting effects and 360 background

  1. Call setExtensionProperty to request the texture and render a scene with lighting effects and 360 background features. The key is requestTexture, and the value contains the configuration information of the scene. To experience lighting effects and the 360 background feature, set avatarMode to 2, which corresponds to lighting effects and 360 background mode.


```java
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("index", 0);     // Texture index, currently only 0 is supported
    valueObj.put("enable", true); // Enable the texture request

    JSONObject configObj = new JSONObject();
    configObj.put("width", 640);
    configObj.put("height", 480);

    JSONObject extraObj = new JSONObject();
    extraObj.put("sceneIndex", 0); // Scene index, currently only 0 is supported
    extraObj.put("avatarMode", 2); // Set the scene mode to 2, which is lighting effects and 360 background mode
    extraObj.put("userId", "123");
    configObj.put("extraInfo", extraObj.toString());

    valueObj.put("config", configObj);
} catch (JSONException e) {
    e.printStackTrace();
}

// Render the scene with lighting effects and 360 background based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "requestTexture", valueObj.toString());
```

  2. Experience lighting effects and 360 background.

    1. Lighting effects:

      Call setExtensionProperty to set up lighting effects. The key is setEffectVideo, and the value contains a series of lighting materials and their corresponding parameter configurations. The MetaKit extension provides lighting effects such as 3D lighting, screen ripples, aurora effects, and portrait edge flames, and supports fine-tuning of parameters such as color, intensity, and range. See the Lighting effects key-value documentation for more details. The example code below demonstrates how to overlay advertising lights on a live video.


```java
JSONObject configObj = new JSONObject();
try {
    configObj.put("id", 3002);     // Specify the effect material ID as 3002, which is advertising lights
    configObj.put("enable", true); // Enable the lighting effect
} catch (JSONException e) {
    e.printStackTrace();
}

// Add the advertising light effect based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "setEffectVideo", configObj.toString());
```

    2. 360 background:

      Call setExtensionProperty to set up a 360 panoramic background. The key is setBGVideo, and the value sets the background mode, resource path, and rotation angle.


```java
JSONObject picObj = new JSONObject();
try {
    picObj.put("mode", "tex360"); // Set the background mode to 360 panoramic background

    JSONObject configObj = new JSONObject();
    configObj.put("path", "/sdcard/metaFiles/bg_pano.jpg"); // Specify the file path of the background resource
    picObj.put("param", configObj);
} catch (JSONException e) {
    e.printStackTrace();
}

// Add the 360 background based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "setBGVideo", picObj.toString());
```

You can also call setExtensionProperty to enable the gyroscope: set key to setCameraGyro and enable the gyroscope function in value to further enhance the interactivity and immersion of the background.


```java
JSONObject gyroObj = new JSONObject();
try {
    gyroObj.put("state", "on"); // Enable the gyroscope function
} catch (JSONException e) {
    e.printStackTrace();
}

// Enable the gyroscope function based on the JSON configuration
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "setCameraGyro", gyroObj.toString());
```

      After successfully setting this effect, you can see that the video background is replaced with the specified resource, and you can experience the panoramic effect by rotating the phone. For more configurations, see the 360 Background key-value documentation.

Release resources

When you are done using the extension, you can follow the sample code below to stop texture requests, unload scene resources, and destroy the engine.


```java
// 1. Stop texture requests
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("index", 0);      // Texture index, currently only 0 is supported
    valueObj.put("enable", false); // Set enable to false to stop the texture request
} catch (JSONException e) {
    e.printStackTrace();
}

mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "requestTexture", valueObj.toString());

// 2. Unload the scene resources
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "unloadScene", "{}");

// 3. Destroy the engine
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "destroy", "{}");
```

Reference

This section contains additional information that completes this page, or points you to documentation that explains other aspects of this product.

Key-value description

To implement the capabilities of the MetaKit extension, use the setExtensionProperty method provided by the Agora Video SDK v4.x. Pass in the specified key and value as follows:

  • key: Corresponds to different interfaces of the MetaKit extension.
  • value: Encapsulates some or all of the interface parameters in the JSON format.

The following sections explain how to use different key-value pairs to implement the MetaKit extension's virtual human, Animoji, lighting effects, and 360 background function modules.
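Because every call below follows the same pattern, a small wrapper can reduce repetition. This helper is purely illustrative and not part of the SDK:

```java
// Hypothetical convenience wrapper for MetaKit key-value calls
private void setMetaKitProperty(String key, JSONObject value) {
    mRtcEngine.setExtensionProperty(
            "agora_video_filters_metakit",            // Provider name
            "metakit",                                // Extension name
            key,                                      // MetaKit interface key
            value == null ? "{}" : value.toString()); // Parameters encoded as JSON
}
```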

Basic functions

This section covers how to implement the basic functions of the MetaKit extension, such as initialization, loading scene resources, enabling texture requests, switching texture scene modes, and avatars. Once you have implemented the basic functions, you can explore the specific functional modules.

Set up the Android Activity Context
  • key: setActivityContext
  • value: Object. Contains the following field:
    • activityContext: String. The address of the activity context.
Initialize the engine
  • key: initialize
  • value: {}
Load scene resources
  • key: loadScene
  • value: Object. Contains the following fields:
    • sceneInfo: Object. Contains the following field:
      • scenePath: String. The path of the scene asset package, for example, "/sdcard/metaAssets/15".
    • extraCustomInfo: Object. Contains the following field:
      • sceneIndex: Int. The index of the scene, currently only supports 0.
Enable texture request

Request a texture and render the specified scene content on the texture. This includes virtual humans, Animoji, lighting effects, and 360 backgrounds.

  • key: requestTexture
  • value: Object. Contains the following fields:
    • index: Int. Texture index, currently only supports 0.
    • enable: Boolean. Whether to enable the texture request. true: Enable; false: Disable (default).
    • config: Object. Contains the following fields:
      • width: Int. The width of the view (px). Set this to the current camera acquisition resolution, the width and height of the screen layout, or a common resolution like 720 × 1280.
      • height: Int. The height of the view (px). Set this to the current camera acquisition resolution, the width and height of the screen layout, or a common resolution like 720 × 1280.
      • extraInfo: Object. Contains the following fields:
        • sceneIndex: (optional) Int. Scene index, currently only supports 0.
        • avatarMode: (optional) Int. Scene mode. 0: Avatar (default); 1: Animoji; 2: Light or background.
        • avatar: (optional) String. Avatar or Animoji image. If avatarMode is 0 (avatar), set to girl or huamulan (default is girl); if avatarMode is 1 (Animoji), set to dog, girlhead, or arkit (default is dog).
Note

The requestTexture and addSceneView methods can both be used to render a specified scene on TextureView. Agora recommends using requestTexture for better rendering performance and lower latency. The differences are as follows:

  • requestTexture does not require passing the render target TextureView to the MetaKit extension; it automatically generates and sends back texture data. addSceneView requires manual creation and management of TextureView.
  • With requestTexture, the obtained texture is directly rendered, previewed, encoded by the SDK, and transmitted to the remote end. addSceneView requires an additional call to enableSceneVideo to enable scene screen capture.
  • requestTexture supports a single view; addSceneView supports multiple views.
  • For scene mode or avatar switching, use requestTexture to request textures and switchTextureAvatarMode for scene switching. Use addSceneView to add scene views and switchAvatarMode to complete scene switching.
  • To release scene resources, use requestTexture and set enable to false to stop texture requests. If you added a scene view using addSceneView, use removeSceneView to remove it.
Switch texture scene

After enabling texture requests, switch the scene mode of the texture view, or the virtual human or Animoji image in the scene.

  • key: switchTextureAvatarMode
  • value: Object. Contains the following fields:
    • index: Int. Texture index, currently only supports 0.
    • mode: (optional) Int. Scene mode to switch to. 0: Avatar; 1: Animoji; 2: Video capture screen.
    • avatar: (optional) String. Avatar or Animoji to switch to. If avatarMode is 0 (avatar), set to girl or huamulan; if avatarMode is 1 (Animoji), set to dog, girlhead, or arkit.
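For example, the following sketch (assuming a texture request is already active at index 0) switches the texture scene to Animoji mode with the dog image:

```java
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("index", 0);      // Texture index, currently only 0 is supported
    valueObj.put("mode", 1);       // Switch the scene mode to Animoji
    valueObj.put("avatar", "dog"); // Switch the Animoji image to "dog"
} catch (JSONException e) {
    e.printStackTrace();
}
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "switchTextureAvatarMode", valueObj.toString());
```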
Add scene view

Add a MetaKit scene to a native view and render the specified scene content. This includes virtual human, Animoji, lighting effects, and 360 background.

Note
  • Supports adding up to 8 scene views.
  • Currently, only lighting and background effects for video capture are supported. To enable backgroundEffect, avatarMode must be set to 2.
  • key: addSceneView
  • value: Object. Contains the following fields:
    • view: Int64. The address handle of the view.
    • config: Object. Contains the following fields:
      • width: (optional) Int. The width of the view (px). Defaults to full screen if not specified.
      • height: (optional) Int. The height of the view (px). Defaults to full screen if not specified.
      • extraInfo: Object. Contains the following fields:
        • sceneIndex: Int. Scene index, currently only supports 0.
        • avatarMode: (optional) Int. Scene mode. 0: (default) Avatar; 1: Animoji; 2: Video capture screen.
        • avatar: (optional) String. Avatar or Animoji image. If avatarMode is 0 (avatar), set to girl or huamulan (default is girl). If avatarMode is 1 (Animoji), set to dog, girlhead, or arkit (default is dog).
        • backgroundEffect: (optional) Boolean. Enables lighting effects and 360 background functions. true: Enable; false: (default) Disable.
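A minimal sketch of adding a scene view in video capture mode with background effects enabled; it assumes the view's address handle has been obtained the same way as in the sticker example above:

```java
long viewHandle = 0; // Address handle of the native view; obtain it as in the sticker example

JSONObject valueObj = new JSONObject();
try {
    valueObj.put("view", viewHandle);
    JSONObject configObj = new JSONObject();
    configObj.put("width", 720);
    configObj.put("height", 1280);
    JSONObject extraObj = new JSONObject();
    extraObj.put("sceneIndex", 0);          // Scene index, currently only 0 is supported
    extraObj.put("avatarMode", 2);          // Video capture screen mode
    extraObj.put("backgroundEffect", true); // Enable lighting effects and 360 background
    configObj.put("extraInfo", extraObj.toString());
    valueObj.put("config", configObj);
} catch (JSONException e) {
    e.printStackTrace();
}
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "addSceneView", valueObj.toString());
```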
Switch scene view

After adding a scene view, you can switch the scene mode, or the virtual human or Animoji image in the scene.

  • key: switchAvatarMode
  • value: Object. Contains the following fields:
    • viewAddress: Int64. The address handle of the view.
    • mode: (optional) Int. Specifies the scene mode to switch to. 0: avatar; 1: Animoji; 2: Video capture screen.
    • avatar: (optional) String. Specifies the avatar or Animoji to switch to. If avatarMode is 0 (avatar), set to girl or huamulan. If avatarMode is 1 (Animoji), set to dog, girlhead, or arkit.
Enable scene view capture

After enabling scene view capture, call joinChannel to join the channel and publish the video stream of the scene view.

  • key: enableSceneVideo
  • value: Object. Contains the following fields:
    • view: Int64. The address handle of the view.
    • enable: (optional) Boolean. Enables scene view capture. true: Enable; false: (default) Disable.
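A minimal sketch, assuming viewHandle is the address handle of a scene view previously added with addSceneView:

```java
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("view", viewHandle); // Address handle of the scene view
    valueObj.put("enable", true);     // Start capturing the scene view
} catch (JSONException e) {
    e.printStackTrace();
}
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "enableSceneVideo", valueObj.toString());
```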
Remove scene view

Remove the MetaKit scene view from the specified view.

  • key: removeSceneView
  • value: Object. Contains the following field:
    • view: Int64. The address handle of the view.
Unload scene resources
  • key: unloadScene
  • value: {}
Destroy engine
  • key: destroy
  • value: {}

Virtual human

The MetaKit extension allows you to switch the image, viewpoint, face, and outfit of the avatar. To experience the avatar-related functions, set avatarMode to 0 when enabling texture request or adding scene view.

Note

In addition to the existing girl and huamulan avatars, the Agora MetaKit extension provides an open art ecosystem and supports one-click import of avatar models made according to Agora's art standards, providing users with more flexible creation and integration options. Contact Agora technical support to use this feature.

Switch virtual human perspective
  • key: setCamera
  • value: Object. Contains the following field:
    • viewMode: Int. The avatar camera view. 0: Show the avatar's full body; 1: (default) Focus on the avatar's upper body; 2: Focus on the avatar's face.
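For example, the following sketch switches the camera to focus on the avatar's face:

```java
JSONObject valueObj = new JSONObject();
try {
    valueObj.put("viewMode", 2); // 2: Focus on the avatar's face
} catch (JSONException e) {
    e.printStackTrace();
}
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "setCamera", valueObj.toString());
```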
Virtual human face pinching

The MetaKit extension provides a set of face-pinching resources for virtual images.

Note

Currently only the girl avatar supports face pinching.

  • key: updateFace

  • value: Object. Contains the following fields:

    • key: String. Resource ID, such as MC_updown_1 (upward bend of the mouth corner) and MC_updown_2 (downward bend of the mouth corner). See Face pinching resources for details.
    • value: Int. Adjustment intensity in the range [0, 100]; the default value is 50. You can pass multiple sets of face-pinching resource IDs (key) and corresponding intensities (value) to achieve the final effect. For example, set MC_updown_1 and MC_updown_2 each to 100, as shown in the sketch below.

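A minimal sketch of that adjustment, issuing one updateFace call per resource ID (reusing the single-pair form shown in the implementation steps above):

```java
// Curve the mouth corners both up and down to their maximum by applying two adjustments
String[] faceKeys = {"MC_updown_1", "MC_updown_2"};
for (String faceKey : faceKeys) {
    JSONObject valueObj = new JSONObject();
    try {
        JSONObject configObj = new JSONObject();
        configObj.put("key", faceKey); // Resource ID of the face-pinching part
        configObj.put("value", 100);   // Adjustment intensity in [0, 100]
        valueObj.put("value", configObj);
    } catch (JSONException e) {
        e.printStackTrace();
    }
    mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "updateFace", valueObj.toString());
}
```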

Avatar dressup

The MetaKit extension provides a set of dress-up resources for avatars.

Note

Currently only the girl avatar supports dressup.

  • key: updateDress

  • value: Object. Contains the following field:

    • id: Int[]. An Int array consisting of resource IDs of multiple clothing items or body parts. Supports dressing operations on multiple items or parts, such as hair, tops, jackets, pants, and so on. Each part provides multiple dressing resources to choose from, that is, each part corresponds to multiple dressing resource IDs. Only one resource can be specified for each part at a time. See Dress-up resources for details.

    The recommended set combinations are as follows:

    1. Set 1


```
// The following resource IDs correspond to the following clothing items/body parts: [hair, eyebrows, blush, headdress, top coat, pants, shoes]
"id": [10001, 10101, 10401, 10801, 12101, 14101, 15001]
```

    2. Set 2


```
// The following resource IDs correspond to the following clothing items/body parts: [hair, eyebrows, blush, coat, gloves, pants, shoes]
"id": [10002, 10102, 10402, 12102, 12501, 14102, 15002]
```


Animoji

The MetaKit extension allows you to switch the image of Animoji. To experience Animoji-related functions, set avatarMode to 1 when enabling texture request or adding scene view.

Note

In addition to the existing dog, girlhead and arkit Animoji, the Agora MetaKit extension provides an open art ecosystem and supports one-click import of Animoji images made according to Agora's art standards, providing users with more flexible creation and integration options. Contact Agora technical support to use this feature.

Adjust rendering level

The MetaKit extension provides three rendering levels: Low, medium, and high. You can choose the corresponding rendering level according to the device performance to achieve the best match between device performance and rendering effect.

Note

Currently, only the dog Animoji image supports adjusting the rendering level.

  • key: setRenderQuality
  • value: Object. Contains the following field:
    • general: Int. 0: Low configuration; 1: (default) Medium configuration; 2: High configuration.

Lighting effects

The MetaKit extension provides lighting effects such as 3D lighting, screen ripples, aurora effects, and portrait edge flames, and supports fine-tuning the color, intensity, range, and other parameters of the lighting effects. To experience the lighting effects-related functions, set avatarMode to 2 when enabling texture requests or set backgroundEffect to true when adding a scene view.

Set special effect material
  • key: setEffectVideo
  • value: Object. Contains the following fields:
    • id: Int. Special effect material ID.
    • enable: Boolean. Whether to enable the special effect. true: Enable; false: Disable.
    • param: (optional) Object. Each special effect material ID corresponds to a set of configuration parameters, which allows you to fine-tune the color, intensity, range, and so on of the lighting effect. If you do not fill in the parameters, the default parameter configuration will be used.

The mapping relationship between special effect material ID and configuration parameters is as follows:

IDEffectParameters
10013D Lighting- color (Int64): Lighting color. When passing the parameter, the hexadecimal color needs to be converted to an Int64 value. For example, for red, the hexadecimal color is #FF0000, and the converted Int64 value is 16711680.
- intensity (Float): Light intensity. The recommended value range is [1.0, 2.0]. The default value is 1.6.
- scale (Float): Lighting scale. The recommended range is [0.3, 0.6]. The default value is 0.4.
1002Screen ripples- color (Int64): Ripple color. When passing parameters, the hexadecimal color needs to be converted to an Int64 value. For example, for red, the hexadecimal color is #FF0000, and the converted Int64 value is 16711680.
- speed (Float): Fluctuation speed. The recommended value range is [-0.2, 0.2]. The default value is -0.12.
- scale (Float): Ripple size. The recommended value range is [3.0, 6.0]. The default value is 4.0.
1003Aurora- color (Int64): Aurora color. When passing parameters, the hexadecimal color needs to be converted to an Int64 value. For example, for red, the hexadecimal color is #FF0000, and the converted Int64 value is 16711680.
- intensity (Float): Aurora intensity. The recommended value range is [0.8, 1.5]. The default value is 1.0.
2001Portrait edge flame- color (Int64): Flame color. When passing parameters, the hexadecimal color needs to be converted to an Int64 value. For example, for red, the hexadecimal color is #FF0000, and the converted Int64 value is 16711680.
- intensity (Float): Flame intensity. The recommended value range is [0.2, 1.5]. The default value is 0.2.
3001Ambient lighting setN/A
3002Advertising lights- startColor (Int64): The initial color of the advertising light. When passing parameters, the hexadecimal color needs to be converted to an Int64 value. For example, for red, the hexadecimal color is #FF0000, and the converted Int64 value is 16711680.
- endColor (Int64): The end color of the advertising light. When passing parameters, you need to convert the hexadecimal color into an Int64 value. After configuring the starting color, a gradient effect from the initial color to the ending color will be created.
- size (Float): The size of the advertisement light texture. The recommended value range is [8, 15]. The default value is 10.
- intensity (Float): Advertising light intensity. The recommended value range is [100, 1000], and the default value is 1000.
- range (Float): The distance of the advertising light. The recommended range is [10, 40]. The default value is 15.
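As a worked example of the color conversion: red is #FF0000, and 0xFF0000 equals 16711680 in decimal, which you can compute with Long.parseLong("FF0000", 16). The sketch below enables the 3D lighting effect (ID 1001) with red light; the param values simply restate the defaults and recommended ranges from the table:

```java
JSONObject configObj = new JSONObject();
try {
    configObj.put("id", 1001);     // Effect material ID 1001: 3D lighting
    configObj.put("enable", true); // Enable the effect

    JSONObject paramObj = new JSONObject();
    paramObj.put("color", Long.parseLong("FF0000", 16)); // #FF0000 -> 16711680 (red)
    paramObj.put("intensity", 1.6);                      // Recommended range [1.0, 2.0]
    paramObj.put("scale", 0.4);                          // Recommended range [0.3, 0.6]
    configObj.put("param", paramObj);
} catch (JSONException e) {
    e.printStackTrace();
}
mRtcEngine.setExtensionProperty("agora_video_filters_metakit", "metakit", "setEffectVideo", configObj.toString());
```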

360 Background

The MetaKit extension allows you to enable 360-degree panoramic background mode, customize background replacement resources, and enable the gyroscope function to enhance the interactivity and immersion of the scene background. To experience 360-degree background-related functions, set avatarMode to 2 when enabling texture request or set backgroundEffect to true when adding a scene view.

Set replacement resource

After successful setting, you can observe that the video background is replaced with the specified resource, and you can experience the panoramic effect by rotating the phone.

  • key: setBGVideo
  • value: Object. Contains the following fields:
    • mode: String. Set to tex360, which means 360-degree panoramic background.
    • param: Object. Contains the following fields:
      • path: String. Specifies the URL or local path of the background resource.
      • rotation: (optional) Int. Rotation angle, default value is 0.
Enable background gyroscope

The gyroscope function is only supported after successfully setting up a 360-degree panoramic background. Enabling the gyroscope function can further enhance the interactivity and immersion of the background.

  • key: setCameraGyro
  • value: Object. Contains the following field:
    • state: String. Background gyroscope function status. on: Enabled; off: (default) Disabled.

Face-pinching resources

This section introduces the virtual human face-pinching resources provided by the MetaKit extension.

Girl

This section introduces the face-pinching resources for the girl avatar.

Face

For example, CK_raise_1 lifts the cheeks and CK_raise_2 lowers them.

The girl resource supports face-pinching operations on the following parts of the face:

| Resource ID | Location |
|---|---|
| FE_raise_1 | Forehead protrusion |
| FE_raise_2 | Forehead collapse |
| TP_raise_1 | Temple protrusion |
| TP_raise_2 | Temple collapse |
| CK_raise_1 | Cheek raise |
| CK_raise_2 | Cheek collapse |
| MD_width_1 | Mandible outward |
| MD_width_2 | Mandible inward |
| MD_updown_1 | Mandible down |
| MD_updown_2 | Mandible up |
| C_width_1 | Chin stretch (left and right) |
| C_width_2 | Chin tightening (left and right) |
| C_updown_1 | Chin stretch |
| C_updown_2 | Chin stretch |
Eyebrow

For example, EB_length_1 makes the eyebrows longer and EB_length_2 makes them shorter.

The girl resource supports face-pinching operations on the following parts of the eyebrows:

| Resource ID | Location |
|---|---|
| EB_width_1 | Eyebrows moved inward |
| EB_width_2 | Eyebrows moved outward |
| EB_updown_1 | Eyebrows moved down |
| EB_updown_2 | Eyebrows moved up |
| EB_thickness | Adjust the thickness of the eyebrows |
| EBIN_updown_1 | Inner eyebrow moved up |
| EBIN_updown_2 | Inner eyebrow moved down |
| EBMID_updown_1 | Middle eyebrow moved up |
| EBMID_updown_2 | Middle eyebrow moved down |
| EB_length_1 | Lengthen the eyebrows |
| EB_length_2 | Shorten the eyebrows |
| EBOUT_updown_1 | Outer eyebrow high position |
| EBOUT_updown_2 | Outer eyebrow low position |
Eye

For example, E_size_1 enlarges the eyes overall and E_size_2 shrinks them.

The girl resource supports face-pinching operations on the following parts of the eyes:

| Resource ID | Location |
|---|---|
| E_width_1 | Eyes inward |
| E_width_2 | Eyes outward |
| E_updown_1 | Eyes up adjustment |
| E_updown_2 | Eyes down adjustment |
| IC_width_1 | Inner corner of eye facing inward |
| IC_width_2 | Inner corner of eye facing outward |
| IC_updown_1 | Inner corner of eye upward |
| IC_updown_2 | Inner corner of eye downward |
| UEIN_updown_1 | Upper eyelid tip up |
| UEIN_updown_2 | Upper eyelid tip down |
| UE_updown_1 | Upper eyelid upwards |
| UE_updown_2 | Upper eyelid downwards |
| UEOUT_updown_1 | Upper eyelid ends upward |
| UEOUT_updown_2 | Upper eyelid ends downward |
| LE_updown_1 | Lower eyelid downwards |
| LE_updown_2 | Lower eyelid upwards |
| OC_width_1 | Outer corner of eye inward |
| OC_width_2 | Outer corner of eye outward |
| OC_updown_1 | Outer corner of eye upward |
| OC_updown_2 | Outer corner of eye downward |
| E_rotate_1 | Eye rotation 1 |
| E_rotate_2 | Eye rotation 2 |
| E_size_1 | Enlarge the entire eye |
| E_size_2 | Reduce the entire eye size |
| EL_updown_1 | Eyelids wider |
| EL_updown_2 | Eyelids narrower |
Nose

For example, NT_size_1 enlarges the nose tip and NT_size_2 shrinks it.

The girl resource supports face-pinching operations on the following parts of the nose:

| Resource ID | Location |
|---|---|
| N_width_1 | Enlarge the nose (left and right) |
| N_width_2 | Shrink the nose (left and right) |
| N_updown_1 | Nose up |
| N_updown_2 | Nose down |
| NB_raise_1 | Nose bridge raised |
| NB_raise_2 | Nose bridge concave |
| NT_size_1 | Enlarge nose tip |
| NT_size_2 | Shrink nose tip |
| NW_width_1 | Nose wings outward |
| NW_width_2 | Nose wings inward |
| NW_updown_1 | Nose wings upward |
| NW_updown_2 | Nose wings downward |
Mouth

For example, M_updown_1 moves the mouth down and M_updown_2 moves it up.

The girl resource supports face-pinching operations on the following parts of the mouth:

| Resource ID | Location |
|---|---|
| UL_width_1 | Wider upper lip |
| UL_width_2 | Narrower upper lip |
| LL_width_1 | Wider lower lip |
| LL_width_2 | Narrower lower lip |
| MC_updown_1 | Mouth corners curved upward |
| MC_updown_2 | Mouth corners curved downward |
| M_size_1 | Enlarge the mouth (left and right) |
| M_size_2 | Shrink the mouth (left and right) |
| M_updown_1 | Mouth downward |
| M_updown_2 | Mouth upward |

JSON example

The complete face-pinching JSON is as follows:


```json
{
  "faceParameters": [
    {
      "avatar": "girl",
      "blendshape": [
        {
          "type": "Face",
          "shapes": [
            { "key": "FE_raise_1", "ch": "prominence of forehead" },
            { "key": "FE_raise_2", "ch": "forehead collapse" },
            { "key": "TP_raise_1", "ch": "prominence of the temple" },
            { "key": "TP_raise_2", "ch": "collapse of the temple" },
            { "key": "CK_raise_1", "ch": "prominence of cheek" },
            { "key": "CK_raise_2", "ch": "sunken cheek" },
            { "key": "MD_width_1", "ch": "mandible outward" },
            { "key": "MD_width_2", "ch": "mandible inward" },
            { "key": "MD_updown_1", "ch": "mandible down" },
            { "key": "MD_updown_2", "ch": "mandible up" },
            { "key": "C_width_1", "ch": "stretch the chin left and right" },
            { "key": "C_width_2", "ch": "chin tightening left and right" },
            { "key": "C_updown_1", "ch": "chin stretch" },
            { "key": "C_updown_2", "ch": "chin stretch" }
          ]
        },
        {
          "type": "Eyebrow",
          "shapes": [
            { "key": "EB_width_1", "ch": "eyebrows move inward" },
            { "key": "EB_width_2", "ch": "eyebrows move outward" },
            { "key": "EB_updown_1", "ch": "eyebrows move downward" },
            { "key": "EB_updown_2", "ch": "eyebrows move upward" },
            { "key": "EB_thickness", "ch": "adjust the thickness of eyebrows" },
            { "key": "EBIN_updown_1", "ch": "inner eyebrow moves upward" },
            { "key": "EBIN_updown_2", "ch": "inner eyebrow moves downward" },
            { "key": "EBMID_updown_1", "ch": "middle eyebrow curved upward" },
            { "key": "EBMID_updown_2", "ch": "middle eyebrow concave" },
            { "key": "EB_length_1", "ch": "adjust eyebrows to long" },
            { "key": "EB_length_2", "ch": "adjust eyebrows to short" },
            { "key": "EBOUT_updown_1", "ch": "high position of outer eyebrows" },
            { "key": "EBOUT_updown_2", "ch": "low position of outer eyebrow" }
          ]
        },
        {
          "type": "Eye",
          "shapes": [
            { "key": "E_width_1", "ch": "eyes inward" },
            { "key": "E_width_2", "ch": "eyes outward" },
            { "key": "E_updown_1", "ch": "eyes up adjustment" },
            { "key": "E_updown_2", "ch": "eyes down adjustment" },
            { "key": "IC_width_1", "ch": "inner corner of eye facing inward" },
            { "key": "IC_width_2", "ch": "inner corner of eye facing outward" },
            { "key": "IC_updown_1", "ch": "inner corner of eye upward" },
            { "key": "IC_updown_2", "ch": "inner corner of eye down" },
            { "key": "UEIN_updown_1", "ch": "the front of the upper eyelid is pointing upward" },
            { "key": "UEIN_updown_2", "ch": "the front of the upper eyelid is facing downward" },
            { "key": "UE_updown_1", "ch": "upper eyelid upward" },
            { "key": "UE_updown_2", "ch": "upper eyelids move downwards as a whole" },
            { "key": "UEOUT_updown_1", "ch": "the upper eyelid ends upward" },
            { "key": "UEOUT_updown_2", "ch": "upper eyelid ends downward" },
            { "key": "LE_updown_1", "ch": "lower eyelid downward" },
            { "key": "LE_updown_2", "ch": "lower eyelid upward" },
            { "key": "OC_width_1", "ch": "outer corner of eye moves inward" },
            { "key": "OC_width_2", "ch": "outer corners of the eyes turn outward" },
            { "key": "OC_updown_1", "ch": "outer corner of eye up" },
            { "key": "OC_updown_2", "ch": "outer corner of eye down" },
            { "key": "E_rotate_1", "ch": "eye rotation 1" },
            { "key": "E_rotate_2", "ch": "eye rotation 2" },
            { "key": "E_size_1", "ch": "enlarge the eyes as a whole" },
            { "key": "E_size_2", "ch": "the eyes shrink overall" },
            { "key": "EL_updown_1", "ch": "eyelids become wider" },
            { "key": "EL_updown_2", "ch": "eyelid distance narrows" }
          ]
        },
        {
          "type": "Nose",
          "shapes": [
            { "key": "N_width_1", "ch": "enlarge the nose left and right" },
            { "key": "N_width_2", "ch": "the nose shrinks left and right" },
            { "key": "N_updown_1", "ch": "nose up" },
            { "key": "N_updown_2", "ch": "nose down" },
            { "key": "NB_raise_1", "ch": "convex nose" },
            { "key": "NB_raise_2", "ch": "concave nose" },
            { "key": "NT_size_1", "ch": "enlarge the nose tip as a whole" },
            { "key": "NT_size_2", "ch": "nose tip overall reduction" },
            { "key": "NW_width_1", "ch": "the nose wings are stretched outward" },
            { "key": "NW_width_2", "ch": "the nose wings are stretched inwards" },
            { "key": "NW_updown_1", "ch": "stretch on nose wing" },
            { "key": "NW_updown_2", "ch": "stretch under nose" }
          ]
        },
        {
          "type": "Mouth",
          "shapes": [
            { "key": "UL_width_1", "ch": "upper lip widens" },
            { "key": "UL_width_2", "ch": "upper lip narrowing" },
            { "key": "LL_width_1", "ch": "lower lip widens" },
            { "key": "LL_width_2", "ch": "lower lip narrowing" },
            { "key": "MC_updown_1", "ch": "upward curve of the mouth corner" },
            { "key": "MC_updown_2", "ch": "corner of mouth curved downward" },
            { "key": "M_size_1", "ch": "enlarge the mouth left and right" },
            { "key": "M_size_2", "ch": "the mouth shrinks left and right" },
            { "key": "M_updown_1", "ch": "the mouth moves downward" },
            { "key": "M_updown_2", "ch": "the mouth moves upward" }
          ]
        }
      ]
    },
    {
      "avatar": "huamulan",
      "blendshape": []
    }
  ]
}
```

Dress-up resources

This section introduces the virtual human dress-up resources provided by the MetaKit extension.

Girl

The parts of the girl's outfit and their corresponding resource IDs are as follows:

| Clothing item/Body part | Resource ID |
|---|---|
| Hair | 10000, 10001, 10002 |
| Eyebrows | 10100, 10101, 10102 |
| Blush | 10401, 10402 |
| Headdress | 10801 |
| Tops and jackets | 12100, 12101, 12102 |
| Gloves | 12501 |
| Pants | 14100, 14101, 14102 |
| Socks | 14301 |
| Shoes | 15000, 15001, 15002 |

JSON example

The complete dress-up JSON is as follows:


```json
{
  "dressResources": [
    {
      "avatar": "girl",
      "resources": [
        { "id": 100, "name": "Hair", "assets": [10000, 10001, 10002] },
        { "id": 101, "name": "Eyebrows", "assets": [10100, 10101, 10102] },
        { "id": 104, "name": "Blush", "assets": [10401, 10402] },
        { "id": 108, "name": "Headdress", "assets": [10801] },
        { "id": 121, "name": "Tops and Jackets", "assets": [12100, 12101, 12102] },
        { "id": 125, "name": "Gloves", "assets": [12501] },
        { "id": 141, "name": "Pants", "assets": [14100, 14101, 14102] },
        { "id": 143, "name": "Socks", "assets": [14301] },
        { "id": 150, "name": "Shoes", "assets": [15000, 15001, 15002] }
      ]
    },
    {
      "avatar": "huamulan",
      "resources": []
    }
  ]
}
```
