Initialize FaceAI

FaceAI analyzes video streams during a live video session. To perform analysis, bind the FaceAI object to the room in which the video session is taking place.

Method

Initializes FaceAI to analyze video streams during a live video session.

Method: EnxFaceAI.init(connectedRoomInfo, stream, config, callback)
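
For orientation, a minimal call sketch is shown below. It assumes the room is already connected and that connectedRoomInfo, stream and config are available as described under Parameters; it only illustrates the call shape, not a complete flow (see Sample Code for that).

const faceAI = new EnxFaceAI();   // Construct the FaceAI object
faceAI.init(connectedRoomInfo, stream, config, function (event) {
    // event.result == 0 indicates the room is enabled for FaceAI and analysis can proceed
});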

Parameters

  • connectedRoomInfo : JSON Object. The Response-JSON returned in the Callback of the EnxRtc.joinRoom() or EnxRoom.connect() method.
  • stream : The Stream Object to be analyzed. You may analyze a Local Stream Object or a Remote Stream Object (a remote Stream Reference may be obtained from the Active Talkers List).
  • config : JSON Object. Configuration that customizes how FaceAI analyzes the stream. Each FaceAI method's configuration must be passed during initialization; if you skip a method's configuration, that method still works with its default behaviour (see the configuration sketch after this list).
    • faceDetector : JSON Object. Configuration for Face Detector analysis.
      • maxInputFrameSize : Number. Default 160 (pixels). Input frame size in pixels for Face Detection.
      • multiFace : Boolean. Default true. Enables multi-face detection, i.e. detection of more than one face. It can slow down performance on lower-end devices, since the face tracker is disabled and a full detection occurs for each frame.
    • facePose : JSON Object. Configuration for Face Pose analysis.
      • smoothness : Number. Default 0.65. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceAge : JSON Object. Configuration for Face Age analysis.
      • rawOutput : Boolean. Default false. Disables all filters and fires the event even if the prediction has very poor quality. Set it to true only if you want the raw signal, for example to analyze a single photo.
    • faceGender : JSON Object. Configuration for Face Gender analysis.
      • smoothness : Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
      • threshold : Number. Default 0.70. Range 0.5-1. The minimum confidence value for which the mostConfident output returns the predicted gender name instead of undefined.
    • faceEmotion : JSON Object. Configuration for Face Emotion analysis.
      • enableBalancer : Boolean. Default false. Experimental filter that adjusts emotions according to the emotional baseline of each person.
      • smoothness : Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceFeatures : JSON Object. Configuration for Face Features analysis.
      • smoothness : Number. Default 0.90. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceArousalValence : JSON Object. Configuration for Face Arousal Valence analysis.
      • smoothness : Number. Default 0.70. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceAttention : JSON Object. Configuration for Face Attention analysis.
      • smoothness : Number. Default 0.83. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
      • riseSmoothness : Number. Same as smoothness, but applied only when the attention value is increasing. Defaults to the same value as the smoothness parameter.
      • fallSmoothness : Number. Same as smoothness, but applied only when the attention value is decreasing. Defaults to the same value as the smoothness parameter.
    • faceWish : JSON Object. Configuration for Face Wish analysis.
      • smoothness : Number. Default 0.80. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time.
  • callback : Callback that confirms whether the room is enabled for FaceAI analysis and that the client endpoint is connected to an active session.
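
The sketch below shows a partial config: only faceDetector and faceEmotion are customized, so every other FaceAI method runs with its default behaviour. The values are illustrative, not recommendations.

const config = {
    faceDetector: {
        maxInputFrameSize: 200,   // Analyze input frames at 200 px (default 160)
        multiFace: false          // Track a single face to save CPU on lower-end devices
    },
    faceEmotion: {
        smoothness: 0.95,         // Default smoothing for emotion output
        enableBalancer: false     // Experimental per-person emotional baseline filter, off by default
    }
    // facePose, faceAge, faceGender, faceFeatures, faceArousalValence,
    // faceAttention and faceWish are omitted, so they keep their defaults
};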

Sample Code

const faceDetectorConfig = {
    maxInputFrameSize: 200,       // Analyze input frames at 200 px (default 160)
    multiFace: true               // Detect more than one face per frame
};
const facePoseConfig = {
    smoothness: 0.65
};
const faceAgeConfig = {};         // Empty object: Face Age runs with its default behaviour
const faceGenderConfig = {
    smoothness: 0.95,
    threshold: 0.70
};
const faceEmotionConfig = {
    smoothness: 0.95,
    enableBalancer: false
};
const faceFeaturesConfig = {
    smoothness: 0.90
};
const faceArousalValenceConfig = {
    smoothness: 0.70
};
const faceAttentionConfig = {
    smoothness: 0.85
};
const faceWishConfig = {
    smoothness: 0.80
};
const config = {
    faceDetector: faceDetectorConfig,
    facePose: facePoseConfig,
    faceAge: faceAgeConfig,
    faceGender: faceGenderConfig,
    faceEmotion: faceEmotionConfig,
    faceFeatures: faceFeaturesConfig,
    faceArousalValence: faceArousalValenceConfig,
    faceAttention: faceAttentionConfig,
    faceWish: faceWishConfig
};

const localStream = EnxRtc.joinRoom(token, config, (response, error) => {
    if (error && error != null) {
        // Handle the failure to join the room
    }
    if (response && response != null) {
        const FaceAI = new EnxFaceAI();   // Construct the FaceAI object
        FaceAI.init(response, localStream, config, function (event) {
            // event.result == 0 - All OK to process
        });
    }
});
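
As in the sample above, the init callback's event.result value of 0 means the room is enabled for FaceAI and the client is connected to an active session. A minimal handling sketch follows; the non-zero branch is an assumption, so consult the FaceAI error codes for the actual values returned on failure.

FaceAI.init(response, localStream, config, function (event) {
    if (event.result == 0) {
        // Room is enabled for FaceAI; safe to start the analysis methods you need
    } else {
        // Assumption: a non-zero result indicates the room is not enabled for FaceAI
        // or the client is not connected to an active session
        console.warn('FaceAI init failed with result:', event.result);
    }
});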