09 - Face recognition
Accuracy vs. acceptability
In the context of biometrics, accuracy/reliability and acceptability are two crucial factors that determine the feasibility and effectiveness of a biometric system:
- Accuracy/Reliability:
- Accuracy refers to how well the biometric system can correctly identify or verify an individual. It measures the system’s ability to distinguish between different biometric patterns (such as fingerprints, facial features, or iris scans) accurately.
- Reliability refers to the consistency of the biometric system in producing accurate results over time and under various conditions. A reliable system provides consistent and dependable performance regardless of environmental factors like lighting, noise, or variations in the biometric trait.
A biometric system with high accuracy and reliability ensures that individuals are correctly identified or verified, reducing the chances of false positives (incorrectly accepting unauthorized users) and false negatives (incorrectly rejecting authorized users); a small sketch of how these rates can be measured follows this section.
- Acceptability:
- Acceptability refers to the willingness of people to use the biometric system. It encompasses the users’ comfort level, trust, and perceived privacy concerns associated with providing their biometric data for identification or verification purposes.
For a biometric system to be effective, it must be widely accepted by the users. Even if a biometric system is highly accurate and reliable, people may resist its implementation if it is not acceptable to them. Factors such as cultural norms, privacy laws, and user education play a significant role in determining the acceptability of a biometric system.
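The sketch below is a minimal, illustrative example (not from the original notes) of how false accept and false reject rates could be computed from matcher similarity scores at a chosen threshold; the score arrays and threshold are made-up values.

```python
import numpy as np

# Hypothetical matcher scores (higher = more similar); made-up data for illustration.
genuine_scores = np.array([0.91, 0.85, 0.78, 0.60, 0.95])   # same-person comparisons
impostor_scores = np.array([0.30, 0.55, 0.42, 0.67, 0.20])  # different-person comparisons

threshold = 0.65  # accept an identity claim when the score exceeds this value

# False Accept Rate: impostors wrongly accepted (the "false positives" above).
far = np.mean(impostor_scores >= threshold)
# False Reject Rate: genuine users wrongly rejected (the "false negatives" above).
frr = np.mean(genuine_scores < threshold)

print(f"FAR = {far:.2f}, FRR = {frr:.2f}")
```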
Introduction
- it is used in many fields, such as forensics, face recognition apps, border control, and identification of people in a crowd
- low cost
- high acceptability
Problems
- A-PIE face variations: Ageing, Pose, Illumination, Expression
- high similarity between two different people (look-alikes can be hard to distinguish)
Most popular databases
- FERET (different light conditions, aging)
- AR-Faces (wearing sunglasses, scarves, lighting conditions)
How
- face capture and possible image enhancement.
- Further steps are localization and possible cropping of one or more regions of interest (ROIs) containing the whole face or its components (eyes, nose, mouth), followed by normalization.
- feature extraction and template construction (biometric key)
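A minimal sketch of these steps in Python with OpenCV, assuming an input file `face.jpg` (placeholder name) and OpenCV's bundled frontal-face Haar cascade; the 100x100 size and histogram equalization are arbitrary normalization choices, and flattening the ROI only stands in for real feature extraction.

```python
import cv2

# Capture: read the image and convert to grayscale.
img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Localization: find candidate face regions (ROIs).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

templates = []
for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]          # cropping the ROI
    roi = cv2.resize(roi, (100, 100))     # geometric normalization to a fixed size
    roi = cv2.equalizeHist(roi)           # photometric normalization / enhancement
    templates.append(roi.flatten())       # trivial stand-in for template construction
```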
Approaches
- Feature-based techniques
- based on pixel properties (edges, skin color)
- face geometry properties (constellation, feature searching)
- template matching against a standard model
- Image-based techniques
- uses a neural network to learn to recognize a face image; the task is treated as generic pattern recognition.
Algorithms
- Hsu, Abdel-Mottaleb and Jain → lighting compensation and color-based maps: several masks are built, stacked, and combined into a new image that highlights the important parts of the face (eyes, mouth…)
- triangulation: if we find the eyes and the mouth, we can draw a triangle connecting them
- Viola-Jones → the most widely used one, included in OpenCV. It is an image-based detector and works on a class of patterns
- The training is very slow (it can take days)
- The locating procedure is very efficient (real-time operation).
- It works in this way: we’re interested in finding a face. We compute from the image a parallel matrix, called the integral image, where each cell is the sum of all original pixels above and to the left of it (in one dimension this reduces to a running sum):
- 1 7 4 2 9 original
- 1 8 12 14 23 integral
- then a cascade of classifiers, trained with boosting, decides whether each candidate window contains a face, using features computed very quickly from the integral image (a minimal sketch follows this list)
- we have 4 main kinds of features, and the classifier stages are arranged in a cascade: if a window fails at any stage, it is rejected as a non-face
- each feature value is the difference between the sums of pixel values in two/three/four adjacent rectangles
- nose-eyes: Two-Rectangle Horizontal
- nose-bridge-eyes: Three-Rectangle Feature
- Two-Rectangle Vertical
- Four-Rectangle Feature
- To learn more: Detecting Faces (Viola Jones Algorithm) - Computerphile
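A small sketch of the integral image and a two-rectangle feature in Python/NumPy; the 24x24 window size, the feature position, and the random pixel values are made up for illustration, and this is not the full Viola-Jones cascade.

```python
import numpy as np

# Integral image: each cell holds the sum of all pixels above and to the left of it
# (the 1-D example 1 7 4 2 9 -> 1 8 12 14 23 is the same idea along a single row).
def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in a rectangle using only 4 lookups in the integral image."""
    padded = np.pad(ii, ((1, 0), (1, 0)))   # zero border so the formula works at the edges
    return (padded[top + h, left + w] - padded[top, left + w]
            - padded[top + h, left] + padded[top, left])

# Two-rectangle horizontal feature: difference between two vertically adjacent
# rectangles, e.g. a darker eye band above a brighter cheek/nose band.
def two_rect_horizontal(ii, top, left, h, w):
    upper = rect_sum(ii, top, left, h, w)
    lower = rect_sum(ii, top + h, left, h, w)
    return upper - lower

img = np.random.randint(0, 256, (24, 24))   # made-up 24x24 detection window
ii = integral_image(img)
print(two_rect_horizontal(ii, top=4, left=6, h=4, w=12))
```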
Cheat
Weak point: the key part of the face that computers can read is the “nose bridge,” or the area between the eyes
- Obscure that area
- Invert your face’s color scheme by drawing particular black and white triangles.
Evaluation
- False positives: percentage of windows classified as faces that do not contain any face
- Not-localized faces: percentage of faces that have not been identified
- C-Error (localization error): Euclidean distance between the real center of the face and the one estimated by the system, normalized with respect to the sum of the axes of the ellipse containing the face
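A toy sketch of the three measures above on made-up numbers; the counts, coordinates, and ellipse axes are placeholders, not real evaluation data.

```python
import numpy as np

# Made-up detection results for illustration.
windows_classified_as_faces = 200
windows_without_a_face = 14          # false positives among them
faces_in_dataset = 120
faces_not_localized = 6

fp_rate = windows_without_a_face / windows_classified_as_faces   # false positives %
miss_rate = faces_not_localized / faces_in_dataset               # not-localized faces %

# C-Error: distance between the true and estimated face centers, normalized by the
# sum of the axes of the ellipse containing the face.
true_center = np.array([112.0, 96.0])        # made-up coordinates
estimated_center = np.array([118.0, 101.0])
ellipse_axes_sum = 150.0                      # major axis + minor axis (made up)

c_error = np.linalg.norm(true_center - estimated_center) / ellipse_axes_sum
print(fp_rate, miss_rate, c_error)
```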
2D Recognition
- Eigenfaces Algorithm:
- use principal component analysis (PCA) to represent facial features as eigenfaces, which are essentially the principal components of the face images. These eigenfaces capture the significant features of faces, allowing for accurate recognition.
- reduce facial images to a lower-dimensional space, making computations faster.
- Decent Accuracy: It performs well under controlled lighting and face orientation conditions.
- Sensitive to Lighting: Eigenfaces are sensitive to variations in lighting conditions, affecting recognition accuracy.
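A minimal eigenfaces sketch using scikit-learn's PCA; the random arrays stand in for a real gallery and probe image, and keeping 20 components is an arbitrary choice.

```python
import numpy as np
from sklearn.decomposition import PCA

# Each row is a flattened grayscale face image (here random noise replaces a real gallery).
faces = np.random.rand(50, 64 * 64)           # 50 "images" of 64x64 pixels

pca = PCA(n_components=20)                    # keep the 20 strongest eigenfaces
projections = pca.fit_transform(faces)        # each face becomes a 20-dim vector

# Recognition: compare a probe face to the gallery in the reduced space.
probe = pca.transform(np.random.rand(1, 64 * 64))
distances = np.linalg.norm(projections - probe, axis=1)
best_match = int(np.argmin(distances))        # index of the closest gallery face
```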
- Local Binary Patterns (LBP) Algorithm:
- LBP compares the pixel values of a central pixel with its surrounding neighbors, encoding the pattern as a binary number. These patterns capture local texture information in the face.
- robust to illumination changes and very fast
- Limited to Texture: LBP primarily captures texture information and might not be as effective in capturing global facial features.
- Sensitive to Noise: It can be sensitive to noise and might result in less accurate recognition if the input image is noisy.
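A sketch of the basic 3x3 LBP operator described above; the sample patch values are made up.

```python
import numpy as np

# Each of the 8 neighbours is compared with the central pixel and the results are
# packed into one byte, encoding the local texture around that pixel.
def lbp_code(patch):
    """patch: 3x3 array; returns the LBP code of its central pixel."""
    center = patch[1, 1]
    # Neighbours read clockwise starting from the top-left corner.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))   # a value in 0..255 describing the local texture
```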
- Systems based on Graphs
- filters and localization functions are used to locate a set of reference points on the face, which are then connected by weighted arcs to create a graph.
- Each face is associated with a graph: matching two faces means matching two graphs.
- robust with pose variations and illumination
- the matching (test) phase is computationally very hard, because graph matching is an NP-hard problem
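A deliberately simplified sketch of comparing two face graphs when the node correspondence is already known; real elastic graph matching also has to find the correspondence itself, which is the NP-hard part. The node set and arc weights below are invented.

```python
import numpy as np

def graph_distance(weights_a, weights_b):
    """weights_*: symmetric adjacency matrices of arc weights over the same node set."""
    return np.abs(weights_a - weights_b).sum() / 2   # each arc counted once

# Made-up 4-node face graphs (e.g. the two eyes, nose tip, and mouth center).
g1 = np.array([[0, 3, 5, 4],
               [3, 0, 5, 4],
               [5, 5, 0, 2],
               [4, 4, 2, 0]], dtype=float)
g2 = g1 + np.random.normal(0, 0.2, g1.shape)   # a slightly perturbed second face
g2 = (g2 + g2.T) / 2                           # keep the matrix symmetric
np.fill_diagonal(g2, 0)

print(graph_distance(g1, g2))   # small distance -> similar faces
```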
- Thermogram
- The face image is acquired through a thermal sensor that detects temperature variations across the facial skin.
- the thermal pattern varies from person to person and also with the subject's mood and physical state
- CNNs: nowadays machine learning can address most of the classic problems of face biometrics, including bad lighting conditions
3D face recognition
- a set of points lying on several planes that, once connected, yield a 3D model of the face
Acquisition
- stereoscopic cameras
- structured light scanner (measure the depth of each scanned point)
- the more dangerous the better
Pipeline (figure): a) acquisition and generation of the face; b) projection of the geometry from 3D space to a 2D space; c) generation of the normal map.
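A hedged sketch of step c) only: deriving a normal map from a depth image with NumPy. The depth values here are synthetic; a real pipeline would use the geometry projected in step b).

```python
import numpy as np

depth = np.random.rand(128, 128).astype(np.float32)   # placeholder depth image

# Surface gradients along x and y approximate the local slope of the face surface.
dz_dy, dz_dx = np.gradient(depth)
normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth)))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)   # unit-length normals

# Encode the normals into an RGB image in [0, 255], as normal maps usually are stored.
normal_map = ((normals + 1.0) / 2.0 * 255).astype(np.uint8)
```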
Methods
- 3D Morphable Model (3DMM)
- Explanation: 3DMM represents a face as a set of parameters that define its shape and texture. It builds a statistical model of facial shapes and textures and can reconstruct a 3D face from a 2D image.
- Advantages:
- Can handle variations in facial expressions and poses.
- Offers detailed 3D facial information for accurate recognition.
- Disadvantages:
- Requires a large database for effective modeling.
- Sensitive to lighting and texture variations.
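A minimal sketch of the 3DMM idea above as a linear model (mean shape plus a parameter-weighted combination of principal shape components); all arrays here are random placeholders rather than a trained model.

```python
import numpy as np

n_vertices = 5000
n_components = 40

mean_shape = np.random.rand(n_vertices * 3)                 # x,y,z of every vertex
shape_basis = np.random.rand(n_vertices * 3, n_components)  # principal shape components
alpha = np.random.randn(n_components) * 0.1                 # shape parameters of one face

# A 3D face is reconstructed from its parameter vector alpha.
face_shape = mean_shape + shape_basis @ alpha
face_vertices = face_shape.reshape(n_vertices, 3)
```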
- Depth-Based Approaches
- Explanation: Depth-based methods use depth information obtained from 3D sensors (like Kinect) or stereoscopic cameras to create a 3D representation of the face. Recognition is performed on these 3D face models.
- Advantages:
- Less affected by changes in lighting conditions compared to 2D methods.
- Provides accurate depth information, making it robust against spoofing attacks.
- Disadvantages:
- Dependence on specialized hardware (depth sensors) can limit practicality.
- Limited availability and higher cost of 3D sensors.
The additional dimension (depth), increased variability (facial expressions), sensor dependency (the sensors are not always available), computational intensity (heavier processing), and data acquisition challenges (subjects need to be properly positioned and well lit) make 3D face recognition inherently more complex and difficult than its 2D counterpart.
Face antispoofing
Spoofing vs. Camouflage
- Spoofing: the deliberate attempt to deceive a biometric system by using fake or altered biometric data to gain unauthorized access. This could involve using photographs, videos, or other fabricated biometric information to impersonate someone else. Spoofing attacks can target various biometric modalities such as facial recognition, fingerprint scanning, iris recognition, or voice recognition. To counter spoofing, biometric systems employ anti-spoofing techniques, which can include liveness detection (e.g., checking whether the eyes blink), behavior analysis, or multi-modal biometric systems that combine different biometric modalities to enhance security.
- Camouflage: Camouflage in the context of biometrics involves altering one’s physical appearance to prevent identification by biometric systems. This could include wearing disguises, makeup, or accessories that obstruct or change facial features, making it difficult for facial recognition systems to accurately identify the person. Camouflage attacks are a concern in facial recognition systems, especially in surveillance applications where individuals might attempt to avoid being detected or recognized.
So:
- spoofing involves using fake or altered biometric data to deceive a system
- camouflage involves altering physical appearance to avoid detection by biometric systems.
Important
The most robust spoofing detection systems in the field of face recognition rely on two main activities: verification of the three-dimensionality of the face (hard) and interaction with the user (challenge-response: ask the user to perform a precise movement at random time intervals and track the resulting motion).
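A sketch of the challenge-response interaction just described; detect_motion is a hypothetical placeholder for a real head-pose or landmark tracker, and the challenge list and timing are invented.

```python
import random
import time

CHALLENGES = ["turn head left", "turn head right", "blink twice", "smile"]

def detect_motion(expected):
    """Placeholder: would analyze the camera stream and return True only if the
    requested movement was actually performed."""
    return True

def liveness_check(rounds=3):
    for _ in range(rounds):
        time.sleep(random.uniform(1.0, 4.0))   # random delay between challenges
        challenge = random.choice(CHALLENGES)
        print(f"Please: {challenge}")
        if not detect_motion(challenge):
            return False                       # movement not observed -> likely spoof
    return True
```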
FATCHA
- CAPTCHAs are hard for machines, but often for users too
- the idea is to replace them with a facial CAPTCHA
- the system captures the user through the camera and asks for a simple movement or gesture
- for example, blinking or showing a happy expression
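One possible way such a facial CAPTCHA could verify a blink is the eye aspect ratio (EAR) computed over six eye landmarks, which drops sharply when the eye closes. This is only a sketch with made-up landmark coordinates, not the FATCHA implementation; a real system would obtain the landmarks from a facial landmark detector.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6x2 array of landmark points around one eye (p1..p6)."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# Made-up landmark positions for an open and a nearly closed eye.
open_eye = np.array([[0, 0], [2, 2], [4, 2], [6, 0], [4, -2], [2, -2]], float)
closed_eye = np.array([[0, 0], [2, 0.3], [4, 0.3], [6, 0], [4, -0.3], [2, -0.3]], float)

BLINK_THRESHOLD = 0.2   # typical cut-off used in EAR-based blink detection
print(eye_aspect_ratio(open_eye) > BLINK_THRESHOLD)     # True  -> eye open
print(eye_aspect_ratio(closed_eye) > BLINK_THRESHOLD)   # False -> eye closed (blink)
```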