Liveness Detection and Face Matching for Presentation Attack Detection
Forget passwords, fight the fake! Stop photo & mask fraud with liveness detection + face recognition. Effortless logins, zero fraud. (KYC/AML ready)
July 24, 2025

Face Matching (also known as Face Verification or Face Recognition), as a critical process within a behavior-based ID verification system, is an automated flow powered by machine learning technology. Starting with the face scanning step, it captures, analyzes, identifies, and verifies each customer's unique facial features. It is crucial for assigning a single, trusted identity to each customer being onboarded. Without any human involvement in the identity verification process, the solution provides a seamless user experience with robust security.
Within the scope of e-KYC (Know Your Customer) requirements and anti-money laundering (AML) rules, a trusted identity verification system, especially one strengthened by strong biometric proofing capabilities such as face matching and liveness detection, is becoming an increasingly critical component of any business's regulatory compliance and digital reputation strategy against new types of cyber-attacks.
Similarly, in the digital onboarding process, presentation attacks are one of the primary attack types directed at customer identities. The increasing prevalence of behavioral authentication systems, the rise of AI-powered phishing attempts, and the tightening of AML measures by mid-2025, with increased sanctions in the event of serious breaches and closer scrutiny of the riskiest institutions, will make this issue even more crucial, not only for financial institutions but also for non-financial actors.
In this article, we aim to address some questions regarding vulnerability to and fraud by presentation attacks in facial biometric systems, and how we use technology to detect them.
What are Presentation Attacks?
An identity verification system, built on a set of sophisticated technologies, combines several activities during a remote user interaction. In the digital onboarding process, the verification system rests on three pillars: the claimed real-world identity exists, the identity is claimed by its true owner, and the identity owner is directly present for this claim during the interaction (Gartner).
Typically, in a step-up onboarding process, there must be full assurance of the authenticity and integrity of the authentication steps, ranging from identity document submission to selfie or face video capture. During these steps, the verification tools must guarantee the genuine presence of the ID owner. Even though presentation attacks, as biometric spoofing methods based on fingerprints, voice, or faces, attempt to manipulate and fool the security tools and devices for identity theft purposes, proactively differentiated and hardened verification measures can establish real-time protection against emerging threats such as artificial biometric trait presentation.
Facial biometric systems are gaining ground in many use cases, such as digital customer enrollment, verifying access to web services, unlocking cell phones, and physical access to offices or sporting events. For example, a presentation attack may attempt to fool a facial verification system by using fake or manipulated facial images or videos. In an attempt to trick facial verification systems, attackers use replicas of authorized users' faces, including printed photos, digital images, and even lifelike masks.
How to use Presentation Attack Detection in a facial biometric system?
Below, we focus on how to detect presentation attacks in face recognition systems and what Presentation Attack Detection (PAD) is.
In fraud attempts, the attacker presents spoofed biometric samples to the biometric capture subsystem (the camera and/or the software responsible for taking a photo of a face) in a way that interferes unfavorably with the decision-making mechanism, which determines whether a face belongs to the authenticated or identified person. In other words, the attack does not change anything in the system itself; it tricks the system's decision-making mechanism.
The decision-making process is targeted using an object designed to resemble the person being impersonated, or an object that doesn't resemble anyone but allows the attacker to conceal their identity. In facial biometrics, the sensor is the camera. Fake biometric artifacts, such as a photo displayed on a screen, a printed photo, or a realistic mask, have previously been shown to the detection models, and countermeasures have been implemented against them.
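To show where such countermeasures sit relative to the decision-making mechanism described above, here is a minimal Python sketch in which a presentation attack check gates the face-matching decision. The interfaces and thresholds are illustrative placeholders, not the API of any specific product.

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    accepted: bool
    reason: str

# Illustrative thresholds; real systems tune these on evaluation data.
PAD_THRESHOLD = 0.90      # minimum "bona fide" score from the PAD model
MATCH_THRESHOLD = 0.80    # minimum similarity score from the face matcher

def verify_capture(frame, reference_template, pad_model, face_matcher) -> VerificationResult:
    """Gate the matching decision behind a presentation attack check.

    `pad_model.score(frame)` and `face_matcher.compare(...)` are placeholder
    interfaces: any PAD classifier and face matcher returning scores in [0, 1]
    could be plugged in here.
    """
    # 1. Presentation attack detection: is this a bona fide presentation?
    pad_score = pad_model.score(frame)
    if pad_score < PAD_THRESHOLD:
        # The sample looks like a spoof (printed photo, screen replay, mask, ...),
        # so the matcher is never consulted and a good likeness cannot help the attacker.
        return VerificationResult(False, f"presentation attack suspected (PAD={pad_score:.2f})")

    # 2. Face matching: does the live sample belong to the claimed identity?
    similarity = face_matcher.compare(frame, reference_template)
    if similarity < MATCH_THRESHOLD:
        return VerificationResult(False, f"face mismatch (similarity={similarity:.2f})")

    return VerificationResult(True, "live sample matches the claimed identity")
```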
These countermeasures are developed in accordance with the international standard governing presentation attack detection testing (ISO/IEC 30107-3:2023). The standard classifies attacks according to the attacker's level of effort and capability. This approach establishes a methodological framework for risk analysis and for assessing the resilience of the access management system.
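ISO/IEC 30107-3 evaluates PAD subsystems with error rates such as APCER (the share of attack presentations wrongly classified as bona fide) and BPCER (the share of bona fide presentations wrongly rejected as attacks). The short sketch below computes both from labelled test outcomes; the data layout is illustrative and not tied to any particular evaluation toolkit.

```python
def apcer(attack_classified_as_bona_fide: list[bool]) -> float:
    """Attack Presentation Classification Error Rate: fraction of attack
    presentations that the PAD subsystem wrongly accepted as bona fide."""
    return sum(attack_classified_as_bona_fide) / len(attack_classified_as_bona_fide)

def bpcer(bona_fide_classified_as_attack: list[bool]) -> float:
    """Bona Fide Presentation Classification Error Rate: fraction of genuine
    presentations that the PAD subsystem wrongly rejected as attacks."""
    return sum(bona_fide_classified_as_attack) / len(bona_fide_classified_as_attack)

# Illustrative numbers: 2 of 200 printed-photo attacks slipped through,
# 3 of 300 genuine users were wrongly flagged.
print(apcer([True] * 2 + [False] * 198))   # 0.01
print(bpcer([True] * 3 + [False] * 297))   # 0.01
```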
Some evaluators, like iBeta Quality Assurance, categorize attack types based on two difficulty levels:
Level 1: The attacker is unaware of the algorithms used by the targeted biometric solution. It is estimated that the attacker can spend up to 8 hours on the same subject or species of attack, with a budget of approximately $30 for each attack instrument to be created. These types of attacks can be carried out using mobile devices, paper, and other office supplies, making them relatively easy to execute.
Level 2: The attacker knows the details of the algorithms used by the biometric solution. It is estimated that the attacker can spend up to 48 to 96 hours on the same subject or species of attack, with a budget of approximately $300 to build each attack instrument, using equipment such as a 3D printer, resin masks, or latex masks.
In the face of increasing risk factors and continuously evolving attack types, two fundamental solutions and their interoperability are becoming increasingly important. These solutions are facial recognition systems and liveness detection.
How to use Face Matching Technology in the e-KYC Pipeline?
Face Matching, as a biometric authentication method powered by artificial intelligence, is a technology that verifies or identifies a person's identity using unique facial features. It builds a unique biometric template by analyzing elements such as the distance between the eyes, the shape of the nose, and so on. This template serves as a reference used for identifying and comparing individuals across different databases.
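To make the idea of a template as a reference concrete, the sketch below represents each enrolled face as an embedding vector and looks up the closest enrolled identity by cosine similarity. The embedding extractor is assumed to exist (any model producing fixed-length face vectors would do), and the threshold is illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe_embedding: np.ndarray,
             gallery: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    """Compare a probe template against an enrolled gallery (1:N identification).

    `gallery` maps an identity label to its enrolled embedding. Returns the
    best-matching identity if it clears the (illustrative) threshold, else None.
    """
    best_id, best_score = None, -1.0
    for identity, enrolled in gallery.items():
        score = cosine_similarity(probe_embedding, enrolled)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```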
In the digital onboarding stream within the KOBIL KYC pipeline, the e-KYC journey progresses to face verification using biometric information obtained from the ID chip. Initially, the system extracts high-quality, verified images from the chip embedded in the ID document. Simultaneously, the customer captures a well-lit selfie in a controlled environment using their mobile device. Both images undergo preprocessing, which includes ensuring image quality standards such as resolution, focus, and illumination, and normalizing the images for uniformity in angle, scale, and lighting. Deep learning models then extract key facial features from both images, identifying unique landmarks like the distance between the eyes, nose shape, mouth corners, and jawline contour. An AI-based face matching algorithm compares these features and calculates a similarity score. A predefined threshold score determines the verification decision; if the similarity score exceeds this threshold, the system confirms that the individual presenting the ID document and taking the selfie is the same person. This rigorous verification process, combining advanced AI algorithms with robust biometric data, is essential for preventing identity theft and ensuring that only legitimate users are onboarded, thereby enhancing the security and reliability of the e-KYC pipeline.
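A heavily condensed sketch of that 1:1 verification step might look as follows, with the open-source face_recognition package standing in for the production embedding model and an illustrative distance threshold; the quality checks and normalization are reduced to comments.

```python
import face_recognition  # open-source stand-in for the production embedding model

# Illustrative threshold on embedding distance (lower distance = more similar).
DISTANCE_THRESHOLD = 0.6

def verify_chip_against_selfie(chip_image_path: str, selfie_path: str) -> bool:
    """1:1 verification: does the selfie show the person pictured on the ID chip?

    In a real pipeline, both images would first pass quality checks
    (resolution, focus, illumination) and be normalized for pose and scale.
    """
    chip_image = face_recognition.load_image_file(chip_image_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    chip_encodings = face_recognition.face_encodings(chip_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)
    if not chip_encodings or not selfie_encodings:
        return False  # no face found in one of the images

    # Euclidean distance between the two 128-dimensional embeddings.
    distance = face_recognition.face_distance([chip_encodings[0]], selfie_encodings[0])[0]
    return bool(distance <= DISTANCE_THRESHOLD)
```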
Face Liveness Detection for Final Verification
Liveness detection is used to distinguish whether a claimed identity is genuinely present or faked, by detecting manipulated static images (such as AI-generated images or content stolen from social media) and videos. It adds an extra layer of security against a wide range of digital face manipulations and attacks, such as 2D/3D printed masks, deepfakes, printed photos, replay attacks, face morphing, virtual reality avatars, cosmetic surgery, makeup, and drastic hairstyling.
Liveness detection algorithms, leveraging advanced Computer Vision and Deep Learning techniques, can effectively detect and identify life signs that cannot be replicated with static images or pre-recorded videos. As a result, this technology significantly reduces the risk of impersonation fraud. Liveness detection can be performed actively (with user and sensor interaction) or passively (with sensor interaction only), providing a high level of protection.
Liveness detection algorithms verify the presence of a live person by analyzing facial movements and expressions. More specifically, in passive liveness detection, advanced techniques ensure videos’ authenticity without requiring user interaction. Texture analysis examines skin surface quality to detect inconsistencies indicating a photograph or screen display. Micro-movement analysis observes subtle, involuntary movements in facial muscles, while optical flow algorithms track pixel movements across frames for coherence. Spectral analysis differentiates natural light reflection and absorption on human skin from artificial surfaces. These combined techniques ensure robust verification of a live entity, enhancing the security and reliability of the e-KYC process.
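As one deliberately simplified example of such passive signals, the sketch below uses OpenCV to compute two cheap texture cues on a face crop: the variance of the Laplacian (prints and screens often distort texture sharpness) and the share of high-frequency energy in the Fourier spectrum (recaptured imagery tends to show moiré patterns or missing fine detail). A deployed passive liveness system relies on trained deep models; these hand-crafted cues only illustrate the principle.

```python
import cv2
import numpy as np

def texture_cues(face_crop_bgr: np.ndarray) -> dict[str, float]:
    """Compute two simple passive-liveness texture cues on a face crop.

    These hand-crafted features only illustrate the idea; a deployed system
    would feed such signals (or raw pixels) to a trained classifier.
    """
    gray = cv2.cvtColor(face_crop_bgr, cv2.COLOR_BGR2GRAY)

    # Sharpness / texture richness: variance of the Laplacian response.
    laplacian_var = float(cv2.Laplacian(gray, cv2.CV_64F).var())

    # High-frequency energy ratio from the 2D Fourier spectrum: recaptured
    # photos and screen replays often distort the high-frequency content.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low_band = spectrum[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8]
    high_energy_ratio = float(1.0 - low_band.sum() / spectrum.sum())

    return {"laplacian_var": laplacian_var, "high_energy_ratio": high_energy_ratio}
```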
To conclude user onboarding within the KOBIL KYC pipeline, the final step involves either active or passive liveness checks. Liveness detection technology determines whether the video is of a live person or a pre-recorded or AI-generated one. In this phase, a video recorded by the customer is meticulously examined for any indications of spoof attacks or fraudulent activities. The active liveness detection module of KOBIL verifies the presence of a live person by issuing challenges such as eye blinking, head movements, and facial expressions, and monitoring the user’s responses. Conversely, KOBIL's passive liveness detection employs advanced techniques, including texture analysis, micro-movement tracking, optical flow, and spectral analysis, to confirm the presence of a live individual. These measures add an additional layer of security, ensuring the authenticity of the KYC process and significantly enhancing its reliability.
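The blink challenge used in active liveness is commonly implemented with the eye aspect ratio (EAR): the ratio collapses while the eye is closed, so a dip below a threshold lasting a few consecutive frames counts as a blink. The sketch below assumes some landmark detector (MediaPipe and dlib are common choices) supplies the six eye contour points per frame; the detector wiring is omitted and the thresholds are illustrative.

```python
import numpy as np

EAR_THRESHOLD = 0.21        # illustrative: below this the eye is treated as closed
MIN_CLOSED_FRAMES = 2       # a blink must last at least this many consecutive frames

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR over six eye contour points (p1..p6) in the usual order:
    p1/p4 are the horizontal corners, p2/p3 the upper lid, p6/p5 the lower lid."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return float(vertical / (2.0 * horizontal))

def count_blinks(ear_per_frame: list[float]) -> int:
    """Count blinks in a sequence of per-frame EAR values."""
    blinks, closed_frames = 0, 0
    for ear in ear_per_frame:
        if ear < EAR_THRESHOLD:
            closed_frames += 1
        else:
            if closed_frames >= MIN_CLOSED_FRAMES:
                blinks += 1
            closed_frames = 0
    return blinks

def passes_blink_challenge(ear_per_frame: list[float], required_blinks: int = 2) -> bool:
    """Active liveness decision for a 'please blink twice' challenge."""
    return count_blinks(ear_per_frame) >= required_blinks
```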
Establishing holistic security in e-KYC
Presentation Attack Detection (PAD) covers a wide range of biometric spoofing attempts. These attack types are becoming increasingly sophisticated and effective through AI-driven development. This calls not only for making full use of each component of the biometric verification system, but also for a strong configuration of sophisticated technologies and a robust combination of hardware and software tools within the verification streams. Clearly, the ability of businesses to differentiate between human beings and non-living spoofs remains a significant long-term challenge. It requires risk management executives to establish robust hardware- and software-based security capabilities together with trusted partners.


Embark on Your Digital Journey with Our Solution
See how OneID4All™ and OneAPP4All™ can elevate your business to the next level.