Problems in matching:
- Non-linear distortions are caused by finger rotations or by different pressure levels (pressing harder or softer)
- Poor overlap between the two acquisitions: especially relevant for sensors with a small acquisition area.
- Too much movement and/or distortion
- Little overlap between the template and the input impression. Particularly relevant for sensors with a small acquisition area
- Non-linear distortion of the skin
- The acquisition of a fingerprint entails mapping a three-dimensional shape onto the two-dimensional surface of the sensor. This produces a non-linear distortion, due to the elasticity of the skin, which may vary among subsequent acquisitions of the same fingerprint
- Variable pressure and skin conditions
- Uneven pressure, fingerprint too dry or too wet, dirt on the sensor, moisture in the air...
- Errors in feature extraction algorithms
![[Pasted image 20241127134548.png]]
### Segmentation
The term indicates the separation of the foreground fingerprint from the background, which is isotropic (i.e. rotating the white background leaves the image unchanged).

Anisotropy: the property of being directionally dependent (as opposed to isotropy).

The characteristics of fingerprints are directionally dependent; we can use this to separate the fingerprint from the background.
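A minimal sketch of exploiting this anisotropy for segmentation, using a block-wise gradient-coherence measure; the block size and the threshold are illustrative assumptions, not values from the notes:

```python
import numpy as np
import cv2

def segment_fingerprint(img, block=16, thresh=0.3):
    """Foreground mask: keep blocks whose gradients have a strong dominant direction."""
    gx = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 0, 1, ksize=3)
    mask = np.zeros(img.shape, np.uint8)
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            bx, by = gx[i:i+block, j:j+block], gy[i:i+block, j:j+block]
            gxx, gyy, gxy = np.sum(bx*bx), np.sum(by*by), np.sum(bx*by)
            denom = gxx + gyy
            # Coherence is close to 1 for strongly oriented (anisotropic) ridge blocks
            # and close to 0 for the isotropic background.
            coherence = np.sqrt((gxx - gyy)**2 + 4*gxy**2) / denom if denom > 0 else 0.0
            if coherence > thresh:
                mask[i:i+block, j:j+block] = 1
    return mask
```

Background blocks have no dominant gradient direction, so their coherence stays near zero and they are masked out.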
Once a fingerprint has been segmented we can start extracting macro-features such as:
- **ridge-line flow:** described by a structure called **directional map** (or directional image) which is a discrete matrix whose elements denote the orientation of the tangent to the ridge lines.
- analogously, the ridge line density can be synthesized by using a density map.
![[Pasted image 20241127135348.png]]
##### Directional map
The local orientation of the ridge line at position [i, j] is defined as the angle $\theta(i,j)$ formed between the horizontal line through that point and a point on the ridge line sufficiently close to [i, j].
![[Pasted image 20241127224325.png]]
- most approaches measure orientations on a grid (instead of at each point)
- the directional image D is a matrix in which each element denotes the average orientation of the ridge (of the tangent to the ridge) in a neighborhood of $[x_{i}, y_{i}]$

- the simplest approach for extracting the local orientation is based on the computation of the image gradient
- the estimation of a single orientation is too sensitive to noise; however, a plain average of gradients cannot be computed due to the circularity of angles
- the concept of average orientation is not always well defined: what is the average of two orthogonal orientations of 0 and 90 deg? We have 4 possible different averages!
- some solutions involve doubling the angles and separately considering the averages along the two axes (a sketch of this approach follows)
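A minimal sketch of the gradient-based estimation with angle doubling, assuming Sobel gradients and a 16×16 block size (illustrative choices, not values from the notes):

```python
import numpy as np
import cv2

def directional_map(img, block=16):
    """Average ridge orientation (radians) for each block of a grayscale image."""
    gx = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img.astype(np.float64), cv2.CV_64F, 0, 1, ksize=3)
    rows, cols = img.shape[0] // block, img.shape[1] // block
    D = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            bx = gx[i*block:(i+1)*block, j*block:(j+1)*block]
            by = gy[i*block:(i+1)*block, j*block:(j+1)*block]
            # Angle doubling: averaging the components of 2*theta keeps
            # orientations of 0 and 180 degrees from cancelling each other.
            vx = np.sum(2.0 * bx * by)
            vy = np.sum(bx**2 - by**2)
            # Average gradient direction; the ridge orientation is orthogonal to it.
            D[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2
    return D
```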
##### Frequency map
The frequency of the local ridge line $f_{xy}$ at the point $[x, y]$ is defined as the number of ridges per unit length along a hypothetical segment centered at $[x, y]$ and orthogonal to the local ridge orientation.
- by estimating the frequency in discrete locations arranged in a grid, we can compute a frequency image F:![[Pasted image 20241127225853.png]]
- a possible approach is to count the average number of pixels between consecutive peaks of gray levels along the direction orthogonal to the local ridge-line orientation (see the sketch below)
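A minimal sketch of the peak-counting idea, assuming the local orientation `theta` comes from the directional map and that a 32-pixel segment is long enough to span several ridges (both assumptions of this sketch):

```python
import numpy as np

def local_ridge_frequency(img, x, y, theta, length=32):
    """Estimate ridges per unit length around (x, y) given the local orientation theta."""
    # Sample gray levels along a segment orthogonal to the ridge orientation.
    dx, dy = np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)
    t = np.arange(-length // 2, length // 2)
    xs = np.clip((x + t * dx).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y + t * dy).astype(int), 0, img.shape[0] - 1)
    signature = img[ys, xs].astype(float)
    # Count local maxima (gray-level peaks) of the sampled signature.
    peaks = np.sum((signature[1:-1] > signature[:-2]) &
                   (signature[1:-1] > signature[2:]))
    return peaks / length
```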
##### Singularities
Most approaches are based on the directional map.
A practical and elegant approach is to use the Poincaré index:
- let G be a vector field. G is the field associated with the orientation image D of the fingerprint ($[i, j]$ is the position of the element $\theta_{ij}$)
- let C be a curve immersed in G: a closed path defined as an ordered sequence of elements of D, such that $[i, j]$ is an internal point
- the Poincaré index $P_{G,C}$ is defined as the total rotation of the vectors of G along C
- $P_{G,C}(i, j)$ is computed by performing the algebraic sum of the differences of orientation between adjacent elements in C (see the sketch after the figures below)
![[Pasted image 20241127230718.png]]
![[Pasted image 20241127233105.png]]
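A minimal sketch of the Poincaré index on a discrete orientation matrix D (angles in radians); the 8-neighbour closed path and the normalisation of each difference into $(-\pi/2, \pi/2]$ are standard choices assumed here:

```python
import numpy as np

def poincare_index(D, i, j):
    """Total rotation of the orientation field along the closed path around (i, j)."""
    path = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [D[i + di, j + dj] for di, dj in path]
    total = 0.0
    for k in range(len(angles)):
        d = angles[(k + 1) % len(angles)] - angles[k]
        # Orientations are defined modulo pi: wrap each difference into (-pi/2, pi/2].
        if d > np.pi / 2:
            d -= np.pi
        elif d <= -np.pi / 2:
            d += np.pi
        total += d
    # ~ +pi at a loop core, ~ -pi at a delta, ~ 2*pi at a whorl, ~ 0 elsewhere.
    return total
```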
### Minutiae extraction
Many approaches extract minutiae and perform matching based on them, alone or in combination with other features.
In general, minutiae extraction entails:
- **Binarization:** converting a graylevel image into a binary image
- **Thinning:** the binary image undergoes a thinning step that reduces the thickness of the ridge lines to 1 pixel
- **Location:** a scan of the image locates pixels corresponding to the minutiae
![[Pasted image 20241127140214.png]]
To locate minutiae we can analyze the crossing number $$cn(p)=\frac{1}{2}\sum_{i=1}^{8}|val(p_{i \bmod 8})-val(p_{i-1})|$$
- $p_0, p_1, \ldots, p_7$ are the pixels in the neighborhood of $p$, and $val(p) \in \{0, 1\}$ is the value of pixel $p$
- a pixel p with $val(p) = 1$:
- is an internal point of a ridge line if $cn(p)=2$
- corresponds to a termination if $cn(p)=1$
- corresponds to a bifurcation if $cn(p)=3$
- belongs to a more complex minutia if $cn(p) > 3$.
![[Pasted image 20241127140836.png]]
- cn(p) = 2: internal point
- cn(p) = 1: termination
- cn(p) = 3: bifurcation
- cn(p) > 3: more complex minutia
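A minimal sketch of minutiae location via the crossing number on a thinned binary skeleton (ridge pixels equal to 1); the neighbour ordering used here is one illustrative choice:

```python
import numpy as np

def crossing_number(skel, r, c):
    """cn(p) for the pixel at (r, c) of a 1-pixel-thick binary skeleton."""
    # The 8 neighbours p0..p7 visited in a fixed circular order.
    nbrs = [skel[r-1, c], skel[r-1, c+1], skel[r, c+1], skel[r+1, c+1],
            skel[r+1, c], skel[r+1, c-1], skel[r, c-1], skel[r-1, c-1]]
    return sum(abs(int(nbrs[i]) - int(nbrs[(i + 1) % 8])) for i in range(8)) // 2

def find_minutiae(skel):
    """Terminations (cn=1) and bifurcations (cn=3) of the skeleton."""
    terminations, bifurcations = [], []
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if skel[r, c] != 1:
                continue
            cn = crossing_number(skel, r, c)
            if cn == 1:
                terminations.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return terminations, bifurcations
```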
A feature often used is the **ridge count**: the number of ridges intersected by the segment between two points (the points are often chosen as relevant ones, e.g. core and delta).
### Hybrid approach
A hybrid method based on the comparison of minutiae and texture: it combines the representation of fingerprints based on minutiae with a representation based on Gabor filters that uses local texture information.
![[Pasted image 20241127141648.png]]
##### Image alignment
- extraction of minutiae from both the input and from the template to match
- the two sets of minutiae are compared through a point-matching algorithm that preliminarily selects a pair of reference minutiae and then determines the number of matching minutiae pairs using the remaining set of points
- the reference pair that produces the maximum number of matching pairs determines the best alignment
- once minutiae have been aligned, rotation and translation are also known
- the rotation parameter is the average rotation of all the individual pairs of corresponding minutiae
- the translation parameters can be computed from the spatial coordinates of the pair of reference minutiae which produced the best alignment (see the sketch below)
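A minimal sketch of the reference-pair search; minutiae are assumed to be (x, y, angle) rows of a NumPy array and the 10-pixel matching tolerance is an illustrative assumption:

```python
import numpy as np

def align_by_reference_pairs(input_m, template_m, tol=10.0):
    """Try every (input, template) minutia pair as the reference; return the
    rotation, translation and number of matching pairs of the best alignment."""
    best = (0.0, np.zeros(2), -1)
    for p in input_m:
        for q in template_m:
            rot = q[2] - p[2]                          # rotation given by the reference pair
            R = np.array([[np.cos(rot), -np.sin(rot)],
                          [np.sin(rot),  np.cos(rot)]])
            trans = q[:2] - R @ p[:2]                  # translation given by the reference pair
            moved = (R @ input_m[:, :2].T).T + trans
            # Count input minutiae that land close to some template minutia.
            dist = np.linalg.norm(moved[:, None, :] - template_m[None, :, :2], axis=2)
            score = int(np.sum(dist.min(axis=1) < tol))
            if score > best[2]:
                best = (rot, trans, score)
    return best
```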
##### Masking and tessellation
After masking the background, the images are normalized by building on them a grid that divides them into a series of non-overlapping windows of the same size. Each window is normalized with reference to a constant mean and variance. The optimal size at 300 DPI is 30x30, as 30 pixels is the average inter-furrow distance (a normalization sketch follows the figure).
![[Pasted image 20241128000431.png]]
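A minimal sketch of the per-window normalization; the 30×30 window size follows the notes, while the target mean and variance values are illustrative assumptions:

```python
import numpy as np

def normalize_windows(img, win=30, target_mean=100.0, target_var=100.0):
    """Normalize each non-overlapping window to a constant mean and variance."""
    out = img.astype(np.float64).copy()
    for r in range(0, img.shape[0] - win + 1, win):
        for c in range(0, img.shape[1] - win + 1, win):
            w = out[r:r+win, c:c+win]
            m, v = w.mean(), w.var()
            if v > 0:
                # Rescale the window so that it has the target mean and variance.
                out[r:r+win, c:c+win] = target_mean + np.sqrt(target_var / v) * (w - m)
    return out
```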
##### Feature extraction
In order to perform the feature extraction from the cells resulting from the tessellation, a group of 8 Gabor filters is used, all with the same frequency but variable orientation. Such filtering produces 8 filtered images for each cell.
![[Pasted image 20241128000533.png]]

We consider the mean absolute deviation of the intensity in each filtered cell (the average of the absolute deviations of the values from a central value) as a feature, so we have 8 of them per cell (one per Gabor filter); they are concatenated into a characteristic vector (a sketch follows).
The characteristic values relating to masked regions are not used and marked as missing in the vector.
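A minimal sketch of the per-cell feature computation with openCV Gabor filters; the kernel size, sigma and wavelength are illustrative assumptions, and the deviation is taken here from the cell mean rather than from a central value:

```python
import numpy as np
import cv2

def gabor_features(cell, n_orientations=8):
    """8 mean-absolute-deviation features for one normalized tessellation cell."""
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations              # 8 equally spaced orientations
        kernel = cv2.getGaborKernel(ksize=(17, 17), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=1.0, psi=0)
        filtered = cv2.filter2D(cell.astype(np.float32), cv2.CV_32F, kernel)
        # Mean absolute deviation of the filtered cell from its mean intensity.
        feats.append(float(np.mean(np.abs(filtered - filtered.mean()))))
    return np.array(feats)
```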
**Matching:** sum of the squared differences between corresponding characteristic vectors, after discarding missing values.
The similarity score is combined with the one obtained from the comparison of minutiae, using the sum rule of combination.
Recognition is successful if the score is below a threshold.
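A minimal sketch of the texture distance and of the sum-rule fusion, assuming missing (masked) entries are encoded as NaN and that both scores have been normalized to comparable ranges (assumptions of this sketch):

```python
import numpy as np

def texture_distance(v1, v2):
    """Sum of squared differences over the entries present in both vectors."""
    valid = ~np.isnan(v1) & ~np.isnan(v2)      # discard values marked as missing
    return float(np.sum((v1[valid] - v2[valid]) ** 2))

def combined_score(minutiae_score, texture_score, w=0.5):
    """Sum rule: weighted sum of the two normalized scores."""
    return w * minutiae_score + (1.0 - w) * texture_score
```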
### Fingerphotos
**Pros:**
- no special sensor required
- contactless (more hygienic)

**Cons:**
- subject to common problems in image processing: illumination, position with respect to the camera, blurriness, etc.
- some pre-processing steps require more sophisticated approaches.
#### An example of pre-processing
##### Image rotation
We detect the largest connected component and compute its orientation via PCA, so we can rotate the image and have the finger straight (a sketch follows).
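A minimal sketch of this step: PCA (via SVD) on the pixel coordinates of the largest connected component gives the finger's main axis, which is then rotated to the vertical. The Otsu thresholding used to obtain the component is an assumption of this sketch:

```python
import numpy as np
import cv2

def straighten_finger(gray):
    """Rotate a grayscale fingerphoto so that the finger's main axis is vertical."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))    # skip background label 0
    ys, xs = np.where(labels == largest)
    coords = np.column_stack([xs, ys]).astype(np.float64)
    coords -= coords.mean(axis=0)
    # First principal direction = axis of maximum elongation (up to a 180-degree ambiguity).
    _, _, vt = np.linalg.svd(coords, full_matrices=False)
    angle = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))
    M = cv2.getRotationMatrix2D((gray.shape[1] / 2, gray.shape[0] / 2), angle - 90, 1.0)
    return cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
```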
##### Background removal
- the image is converted to grayscale
- Canny edge detection to find contours
- openCV findContours() to find the largest-area component
- mask the rest (the mask is white where the largest contour is, black everywhere else)
- gaussian blur of the mask
- fusion with the original image (a sketch of these steps follows)
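A minimal sketch of the steps above (OpenCV 4 API); the Canny thresholds and the blur kernel size are illustrative assumptions:

```python
import numpy as np
import cv2

def remove_background(bgr):
    """Keep the largest-contour region (the finger) and fade out everything else."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)   # white = finger
    mask = cv2.GaussianBlur(mask, (21, 21), 0)                         # soften the mask border
    # Fusion: weight the original image by the (soft) mask.
    return (bgr.astype(np.float64) * (mask[..., None] / 255.0)).astype(np.uint8)
```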
##### Finger cropping
- a custom extract_finger() helper together with openCV .rectangle() to crop the finger
##### ROI cropping
- based on height and width of the finger, crop the fingertip
##### ROI enhancement
- normalization of the image
- openCV .createCLAHE()
- openCV .GaussianBlur() (a sketch of these steps follows)
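A minimal sketch of the enhancement step; the CLAHE clip limit, tile size and blur kernel are illustrative assumptions:

```python
import numpy as np
import cv2

def enhance_roi(roi_gray):
    """Normalization + CLAHE + mild Gaussian smoothing of the fingertip ROI."""
    norm = cv2.normalize(roi_gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # local contrast equalization
    enhanced = clahe.apply(norm)
    return cv2.GaussianBlur(enhanced, (3, 3), 0)                  # suppress high-frequency noise
```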
##### Ridge extraction
- adaptive thresholding
- morphological operators
- thinning
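A minimal sketch of these three steps; the block size, the 3×3 structuring element and the use of cv2.ximgproc.thinning (which requires the opencv-contrib package) are assumptions of this sketch:

```python
import numpy as np
import cv2

def extract_ridges(enhanced_gray):
    """Adaptive thresholding, morphological cleanup and thinning of the enhanced ROI."""
    binary = cv2.adaptiveThreshold(enhanced_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 15, 4)    # ridges become white
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)       # remove small speckles
    return cv2.ximgproc.thinning(cleaned)                            # 1-pixel-wide ridge skeleton
```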
### Fake fingerprints
It is not difficult to produce fake fingerprints using materials such as gelatine, silicone or latex.

A safety measure is **liveness detection**; possible approaches:
- measure pulse and temperature
- bloodstream and its pulsation can be detected with a careful measurement of the light reflected or transmitted through the finger
- live-scan scanners with FTIR (Frustrated Total Internal Reflection) technology use a mechanism of differential acquisition for the ridges and furrows of fingerprints
- the high resolution scan of fingerprints reveals details characteristic of the structure of the pores, difficult to imitate in an artificial finger
- the characteristic color of the skin changes due to the pressure when it is pressed on the scanning surface
- the (electric) potential difference between two specific points of the finger
- electric impedance is useful to check the vitality of the finger
- a finger sweats
### AFIS or human
The problem of digital fingerprint recognition is not really fully solved! A human expert in manual techniques is still better at comparing fingerprints than an automated system.

However, automated systems are fast, reliable, consistent and low-cost.

Biometric Systems/notes/12. Iris recognition.md (new file)

Iris texture is almost completely randotypic, which is why it is useful for recognition.