Migrating from the old API¶
Headers¶
To use the new API you need to include:
#include <idliveface/idliveface.h>
Instead of the old FaceSDK header:
#include <facesdk/FaceSDK.h>
Initialization¶
In the old SDK, configuration was done by creating a configuration from a pipeline description and using it to create a pipeline object. In the new API, the entry point to IDLive Face is the Blueprint. It's a factory for all main SDK objects, and it's also used to configure IDLive Face.
#include <idliveface/idliveface.h>
idliveface::Blueprint blueprint(IDLIVEFACE_HOME + "/data");
idliveface::FaceAnalyzer analyzer = blueprint.CreateFaceAnalyzer();
#include <facesdk/FaceSDK.h>
auto config_path = IDLIVEFACE_HOME + "/data/pipelines/astraea.json";
facesdk::InitConfigPtr config = facesdk::InitConfig::createConfig(config_path);
facesdk::PipelinePtr pipeline = facesdk::FaceSDK::getPipeline("ConfigurablePipeline", config);
Load image¶
The new API provides a decoder object that decodes images and loads them into memory. It still retains the option to create an image from raw pixel data in memory.
#include <idliveface/idliveface.h>
idliveface::Blueprint blueprint(IDLIVEFACE_HOME + "/data");
idliveface::ImageDecoder decoder = blueprint.CreateImageDecoder();
idliveface::Image image = decoder.DecodeFile(IDLIVEFACE_HOME + "/examples/images/real_face.jpg");
// create image from raw data in memory
uint8_t* pixels = ...;
int32_t width = 600;
int32_t height = 800;
idliveface::Image image_from_memory(pixels, width, height, idliveface::PixelFormat::kRGB);
#include <facesdk/FaceSDK.h>
auto image_path = IDLIVEFACE_HOME + "/examples/images/real_face.jpg";
facesdk::ImagePtr image = facesdk::Image::createImage(image_path);
std::vector<uint8_t> image_bytes = ...;
facesdk::ImagePtr image_from_vector = facesdk::Image::createImage(image_bytes);
facesdk::ImagePtr image_from_buffer = facesdk::Image::createImage(image_bytes.data(), image_bytes.size());
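If your current code loads the encoded file into memory first (as with the createImage overloads above), the decoder in the new API can likely be fed those bytes directly. The following is a minimal sketch; the byte-oriented Decode overload is an assumption, so check idliveface/idliveface.h for the exact method name:
// Sketch: decoding encoded image bytes (e.g. JPEG/PNG file contents) already held in memory.
// The Decode(bytes) overload is an assumption; verify it against the idliveface headers.
std::vector<uint8_t> encoded_bytes = ...;
idliveface::Image image_from_bytes = decoder.Decode(encoded_bytes);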
Perform liveness check¶
To analyze the image, use the face analyzer. It is similar to the checkLiveness call from the old API but returns more information. See the liveness check section for details.
idliveface::FaceAnalysisResult result = analyzer.Analyze(image);
if (result.status == idliveface::FaceStatus::kInvalid) {
    // Ask to retake the image, use result.failed_validations as hints
} else if (result.status == idliveface::FaceStatus::kNotGenuine) {
    // Reject the image
} else {
    // Accept the image
}
facesdk::PipelineResult result = pipeline->checkLiveness(image);
if (result.liveness_result.probability >= 0.5) {
    // Image is live
} else {
    // Image is spoofed
}
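If your existing integration logs or thresholds the old probability value, the new result also carries a probability-style score alongside the status. The sketch below assumes it is exposed as an optional genuine_probability field that is only set when a valid face was found; treat the field name as an assumption and prefer the status field for the accept/reject decision:
// Sketch: reading the probability-like score from the new result.
// The genuine_probability field name is an assumption; check idliveface/idliveface.h.
idliveface::FaceAnalysisResult result = analyzer.Analyze(image);
if (result.genuine_probability) {
    std::cout << "Genuine probability: " << *result.genuine_probability << std::endl;
}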
Error handling¶
However, the APIs differ in how image rejections and failed validations are treated, so they need to be handled differently. While the old API threw an exception whenever an image was rejected, the new API does not: logical errors are part of the face analysis result. You can see the list of image rejections of the old SDK here.
The new API still throws exceptions when an invalid parameter has been passed, an unrecoverable internal error has happened, an image cannot be decoded, the license has expired, or a new license cannot be set.
For example, let's compare a situation where an image is analyzed or checked for liveness but the face is cropped:
idliveface::FaceAnalysisResult result = analyzer.Analyze(image);
if (result.status == idliveface::FaceStatus::kInvalid) {
    for (const auto& failed_validation : result.failed_validations) {
        std::cerr << failed_validation << std::endl;
    }
}
try {
    // The old API throws when the image is rejected
    facesdk::PipelineResult result = pipeline->checkLiveness(image);
} catch (const facesdk::FaceException& e) {
    std::cout << "Error: " << e.what() << std::endl;
}
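For the hard errors listed above (invalid parameters, undecodable images, internal or licensing failures) the new API still throws, so it can be worth guarding the analysis call as well. A minimal sketch, assuming idliveface::Exception is the common exception type of the new API:
// Sketch: guarding the new API call against the hard errors listed above.
// idliveface::Exception as the common base type is an assumption; check the headers.
try {
    idliveface::FaceAnalysisResult result = analyzer.Analyze(image);
} catch (const idliveface::Exception& e) {
    std::cerr << "Analysis failed: " << e.what() << std::endl;
}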
The old API's result also contained an image quality value, calculated from all the image quality checks (currently underexposure). So while in the past you may have had to check the quality result to determine whether a rejection was caused by bad quality, in the new API an image of insufficient quality produces a failed validation and no liveness result.
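If you used the old quality value to tell users why an image was rejected, the same hint can be derived from the failed validations. The sketch below is illustrative only: both the FaceValidation type name and the kNotUnderexposed value are hypothetical placeholders, so look up the real validation values in the idliveface headers:
// Sketch: detecting a quality-related rejection in the new API.
// FaceValidation and kNotUnderexposed are hypothetical names used for illustration.
// result is the idliveface::FaceAnalysisResult returned by analyzer.Analyze(image).
for (const auto& failed_validation : result.failed_validations) {
    if (failed_validation == idliveface::FaceValidation::kNotUnderexposed) {
        // The image is too dark, ask the user to retake it in better lighting
    }
}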