Secure Software Systems Group

Liveness Detection for Facial Authentication Systems

Facial authentication systems are used globally to streamline the authentication process of various applications. Attackers have become interested in bypassing such systems in order to gain illicit access to restricted areas. Worryingly, past research has shown that state-of-the-art facial authentication systems are vulnerable to simple attacks. This project develops a Machine Learning-based solution that detects the presence of attacks and thereby adds resilience to existing facial authentication systems.

Facial authentication is an efficient solution for both virtual and physical applications that require identity verification. The underlying algorithm searches for prominent landmarks (eyes, nose, mouth, etc.) and captures a large number of samples from across the face. These samples form a face representation that is unique to each user. As attackers aim to gain illicit access to actions or information, they are increasingly faced with the challenge of bypassing such biometrics-based systems. Previous research has confirmed that state-of-the-art facial recognition systems are vulnerable to simple physical spoofing attempts, since attackers can easily provide the information that facial authentication algorithms search for (landmarks, representation vectors, etc.). Basic attacks that are known to successfully pass facial authentication systems include the presentation of paper printouts that depict an authorized user's face. More sophisticated attacks present video recordings of victim users via display devices or utilize 3D masks that mimic the victim's face. While such attacks may fool facial authentication systems, they also introduce visible abnormalities (visible edges of the printout, visible pixels of the presented display, etc.) that indicate the presence of an attack.
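To make the comparison step concrete, the following minimal sketch verifies a probe image against an enrolled reference using the open-source face_recognition library and a simple embedding distance. It illustrates the general pipeline described above and is not the project's own system; the threshold and library choice are assumptions made for the example.

```python
# Illustrative embedding-based verification step (not the project's system),
# using the open-source `face_recognition` library.
import face_recognition
import numpy as np


def verify(enrolled_image_path: str, probe_image_path: str,
           threshold: float = 0.6) -> bool:
    """Compare a probe face against an enrolled reference face."""
    enrolled = face_recognition.load_image_file(enrolled_image_path)
    probe = face_recognition.load_image_file(probe_image_path)

    # Each encoding is a 128-dimensional vector derived from facial landmarks
    # and local appearance -- the "face representation" described above.
    enrolled_enc = face_recognition.face_encodings(enrolled)
    probe_enc = face_recognition.face_encodings(probe)
    if not enrolled_enc or not probe_enc:
        return False  # no face detected in one of the images

    # Euclidean distance between embeddings; smaller means more similar.
    distance = np.linalg.norm(enrolled_enc[0] - probe_enc[0])
    return distance < threshold
```

Because such a pipeline accepts any input that yields a matching embedding, printouts, replayed videos, and masks can pass verification unless a dedicated liveness check is added.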

Machine Learning-based Liveness Detection models are designed and trained to detect such attack indicators in order to distinguish benign (live) authentication attempts from attacks. Several tools and visual indicators can be used for liveness detection, including optical and infrared sensors, micro-motion detection, and color texture analysis. However, not all of them provide the desired compatibility or efficiency.
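As an illustration of the color texture approach, the sketch below extracts per-channel Local Binary Pattern histograms in the HSV and YCrCb color spaces and feeds them to a conventional classifier. The color spaces, LBP parameters, and classifier are assumptions chosen for the example and do not describe the project's actual model.

```python
# Minimal sketch of colour-texture liveness features: per-channel uniform-LBP
# histograms in HSV and YCrCb, classified with an SVM (illustrative only).
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC  # used in the commented training snippet below

P, R = 8, 1  # LBP neighbourhood: 8 sampling points, radius 1


def color_lbp_features(bgr_image: np.ndarray) -> np.ndarray:
    """Concatenate per-channel uniform-LBP histograms from HSV and YCrCb."""
    feats = []
    for code in (cv2.COLOR_BGR2HSV, cv2.COLOR_BGR2YCrCb):
        converted = cv2.cvtColor(bgr_image, code)
        for channel in cv2.split(converted):
            lbp = local_binary_pattern(channel, P, R, method="uniform")
            # Uniform LBP with P points produces P + 2 distinct codes.
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)


# Hypothetical training data: cropped face images with live/attack labels.
# X = np.stack([color_lbp_features(img) for img in face_crops])
# clf = SVC(kernel="rbf").fit(X, labels)  # 1 = live, 0 = attack
```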

This project investigates state-of-the-art liveness detection models and introduces novel approaches that promise to push beyond the current performance barrier and provide even more accurate classification. At the same time, the introduced approaches aim to be highly compatible with current smartphone technologies and efficient in terms of computing time and storage. The project's proposed solutions focus on incorporating color texture information as a training feature for deep learning and on modifying operators found within convolutional neural networks in order to place liveness awareness at the core of the networks.
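One way to embed liveness awareness directly into a convolutional operator, known from the face anti-spoofing literature as central difference convolution, blends a standard convolution with a local-difference term that amplifies texture and edge cues typical of printouts and replays. The PyTorch sketch below follows that published idea as an illustration; the operator modifications developed in this project may differ.

```python
# Sketch of a "liveness-aware" convolution, modelled on central difference
# convolution from the face anti-spoofing literature (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CentralDifferenceConv2d(nn.Module):
    """Blends a vanilla convolution with a central-difference term that
    emphasises local gradient cues (edges, moiré patterns) typical of
    printout and replay attacks."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 theta: float = 0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.theta = theta  # 0 -> plain convolution, 1 -> pure difference term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv(x)
        # Sum the kernel over its spatial extent to obtain a 1x1 "centre"
        # kernel, then subtract its response, yielding central-difference
        # behaviour without an explicit loop over neighbourhoods.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        diff = F.conv2d(x, kernel_sum, padding=0)
        return out - self.theta * diff


# Usage: drop-in replacement for nn.Conv2d inside a CNN backbone.
# layer = CentralDifferenceConv2d(3, 32)
# features = layer(torch.randn(1, 3, 224, 224))
```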

People involved: Prof. A. Dmitrienko, Moritz Finke.

The project is funded by KOBIL Security Systems GmbH (2021-2022).


Publications

2024
  • Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning. Finke, Moritz; Dmitrienko, Alexandra; arXiv preprint arXiv:2408.14829 (2024).