vault backup: 2024-11-28 08:32:20

This commit is contained in:
Marco Realacci 2024-11-28 08:32:20 +01:00
parent 97f314dbb4
commit efde68629c
17 changed files with 234 additions and 24 deletions


@@ -13,15 +13,70 @@
         "state": {
           "type": "markdown",
           "state": {
-            "file": "Foundation of data science/notes/6 PCA.md",
+            "file": "Biometric Systems/notes/12. Iris recognition.md",
             "mode": "source",
             "source": false
           },
           "icon": "lucide-file",
-          "title": "6 PCA"
+          "title": "12. Iris recognition"
         }
       }
     ]
+    },
+    {
+      "id": "5b545e7467150d86",
+      "type": "tabs",
+      "children": [
+        {
+          "id": "d0f857717f626133",
+          "type": "leaf",
+          "state": {
+            "type": "pdf",
+            "state": {
+              "file": "Biometric Systems/slides/Biometric_System___Notes.pdf",
+              "page": 29,
+              "left": 56,
+              "top": 111,
+              "zoom": 1
+            },
+            "icon": "lucide-file-text",
+            "title": "Biometric_System___Notes"
+          }
+        },
+        {
+          "id": "9bf2709eda88f097",
+          "type": "leaf",
+          "state": {
+            "type": "pdf",
+            "state": {
+              "file": "Biometric Systems/slides/Riassunto_2021_2022.pdf",
+              "page": 47,
+              "left": -4,
+              "top": 846,
+              "zoom": 0.81
+            },
+            "icon": "lucide-file-text",
+            "title": "Riassunto_2021_2022"
+          }
+        },
+        {
+          "id": "cc1957238a7c12e4",
+          "type": "leaf",
+          "state": {
+            "type": "pdf",
+            "state": {
+              "file": "Biometric Systems/slides/LEZIONE10_Iris recognition.pdf",
+              "page": 2,
+              "left": -42,
+              "top": 4,
+              "zoom": 0.3583333333333334
+            },
+            "icon": "lucide-file-text",
+            "title": "LEZIONE10_Iris recognition"
+          }
+        }
+      ],
+      "currentTab": 2
     }
   ],
   "direction": "vertical"
@@ -77,7 +132,8 @@
       }
     ],
     "direction": "horizontal",
-    "width": 300
+    "width": 300,
+    "collapsed": true
   },
   "right": {
     "id": "bc4b945ded1926e3",
@@ -191,39 +247,41 @@
       "companion:Toggle completion": false
     }
   },
-  "active": "0d5325c0f9289cea",
+  "active": "029fd45331b34705",
   "lastOpenFiles": [
-    "Foundation of data science/notes/Untitled.md",
-    "Foundation of data science/slides/IP CV Basics.pdf",
-    "Biometric Systems/images/architecture - enrollment.png",
-    "Biometric Systems/images/Pasted image 20241121085417.png",
-    "Biometric Systems/images/Pasted image 20241121090256.png",
-    "Biometric Systems/images/Pasted image 20241121104307.png",
-    "Biometric Systems/images/Pasted image 20241121090018.png",
+    "Biometric Systems/images/Pasted image 20241127134548.png",
+    "Biometric Systems/notes/12. Iris recognition.md",
+    "Biometric Systems/slides/LEZIONE10_Iris recognition.pdf",
+    "Biometric Systems/notes/11. Fingerprints.md",
+    "Biometric Systems/slides/LEZIONE11_Fingerprints.pdf",
+    "Biometric Systems/images/Pasted image 20241128000533.png",
+    "Biometric Systems/images/Pasted image 20241128000431.png",
+    "Biometric Systems/notes/9. Ear recognition.md",
+    "Biometric Systems/images/Pasted image 20241127235513.png",
+    "Biometric Systems/slides/Riassunto_2021_2022.pdf",
+    "Biometric Systems/slides/Biometric_System___Notes.pdf",
+    "Biometric Systems/images/Pasted image 20241127233105.png",
+    "Biometric Systems/images/Pasted image 20241127231119.png",
+    "Biometric Systems/images/Pasted image 20241127230718.png",
+    "Biometric Systems/images/Pasted image 20241127225853.png",
+    "Biometric Systems/images/Pasted image 20241127224325.png",
+    "Biometric Systems/images/Pasted image 20241127141648.png",
+    "Biometric Systems/images/Pasted image 20241127140836.png",
+    "Biometric Systems/notes/dati da considerare.md",
+    "Foundation of data science/notes/6 PCA.md",
+    "Foundation of data science/slides/IP CV Basics.pdf",
+    "Foundation of data science/notes/Untitled.md",
     "Foundation of data science/slides/FDS_backprop_new.pdf",
     "Foundation of data science/slides/Untitled 1.md",
     "Foundation of data science/slides/more on nn.pdf",
     "Autonomous Networking/notes/q&a.md",
     "Autonomous Networking/notes/presentazione presentante.md",
     "Biometric Systems/notes/2. Performance indexes.md",
-    "Biometric Systems/notes/11. Fingerprints.md",
-    "Biometric Systems/slides/LEZIONE11_Fingerprints.pdf",
-    "Biometric Systems/slides/Biometric_System___Notes.pdf",
-    "Biometric Systems/slides/Riassunto_2021_2022.pdf",
-    "Biometric Systems/notes/9. Ear recognition.md",
     "Biometric Systems/slides/LEZIONE9_Ear recognition.pptx.pdf",
-    "Biometric Systems/notes/dati da considerare.md",
-    "Biometric Systems/images/Pasted image 20241121024108.png",
-    "Biometric Systems/images/Pasted image 20241121023854.png",
-    "Biometric Systems/images/Pasted image 20241121015024.png",
-    "Biometric Systems/images/Pasted image 20241120134522.png",
     "Biometric Systems/slides/LEZIONE8_Face antispoofing 1.pdf",
-    "Biometric Systems/images/Pasted image 20241121024042.png",
     "Biometric Systems/notes/8 Face anti spoofing.md",
     "Foundation of data science/notes/5 Neural Networks.md",
     "Biometric Systems/slides/LEZIONE8_Face antispoofing.pdf",
-    "Biometric Systems/slides/LEZIONE7_Face recognition3D.pdf",
     "Biometric Systems/notes/7. Face recognition 3D.md",
     "Autonomous Networking/notes/4 WSN Routing.md",
     "Autonomous Networking/notes/5 Drones.md",
@@ -237,7 +295,6 @@
     "Autonomous Networking/notes/2 RFID.md",
     "Autonomous Networking/notes/6 Internet of Things.md",
     "Biometric Systems/notes/4. Face detection.md",
-    "Foundation of data science/notes/4 L1 and L2 normalization.md",
     "Senza nome.canvas"
   ]
 }


@@ -68,3 +68,154 @@ Problems in matching:
- Non-linear distortions are caused by rotations of the finger or by different pressure levels (pressing harder or softer)
- Little overlap between the two acquisitions: especially relevant for sensors with a small acquisition area.
- Too much movement and/or distortion
- Little overlap between the template and the input imprint. Particularly relevant for sensors with a small acquisition area
- Non-linear distortion of the skin
	- Acquiring a fingerprint entails mapping a three-dimensional shape onto the two-dimensional surface of the sensor. This produces a non-linear distortion, due to the elasticity of the skin, which may vary among subsequent acquisitions of the same fingerprint
- Variable pressure and skin conditions
	- Uneven pressure, fingerprint too dry or too wet, dirt on the sensor, moisture in the air...
- Errors in feature extraction algorithms
![[Pasted image 20241127134548.png]]
### Segmentation
The term indicates the separation of the foreground (the fingerprint) from the background, which is isotropic (i.e. rotating the white background, the image stays the same).
Anisotropy: the property of being directionally dependent (as opposed to isotropy).
The characteristics of fingerprints are directionally dependent; we can use this to separate the fingerprint from the background.
Once a fingerprint has been segmented we can start extracting macro-features such as:
- **ridge-line flow:** described by a structure called **directional map** (or directional image), a discrete matrix whose elements denote the orientation of the tangent to the ridge lines.
- analogously, the ridge-line density can be synthesized by using a density map.
![[Pasted image 20241127135348.png]]
##### Directional map
The local orientation of the ridge line at position $[i, j]$ is defined as the angle $\theta(i,j)$ formed between the horizontal line through that point and a point on the ridge line sufficiently close to $[i, j]$.
![[Pasted image 20241127224325.png]]
- most approaches use a grid measure (instead of measuring at each point)
- the directional image D is a matrix in which each element denotes the average orientation of the ridge (of the tangent to the ridge) in a neighborhood of $[x_{i}, y_{i}]$
- the simplest approach for extracting the local orientation is based on the computation of the gradient of the image
- the estimation of a single orientation is too sensitive to noise; however, a plain average of gradients cannot be used because of the circularity of angles
- the concept of average orientation is not always well defined: what is the average of two orthogonal orientations of 0 and 90 deg? We have 4 possible different averages!
- some solutions imply doubling the angles and considering separately the averages along the two axes
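The angle-doubling idea from the last bullet can be sketched in a few lines of NumPy (a minimal illustration; `average_orientation` is a hypothetical helper name, not from the slides):

```python
import numpy as np

def average_orientation(thetas_deg):
    """Average ridge orientations (mod 180 deg) by doubling the angles.

    Orientations are circular with period 180 deg, so a plain mean is
    not well defined; doubling maps them onto the full circle, where
    the mean of the unit vectors is well defined.
    """
    doubled = np.deg2rad(2 * np.asarray(thetas_deg, dtype=float))
    # Average the unit vectors of the doubled angles along each axis.
    s, c = np.sin(doubled).mean(), np.cos(doubled).mean()
    return (np.rad2deg(np.arctan2(s, c)) / 2) % 180

# Two nearly parallel ridge orientations average as expected:
print(average_orientation([10, 20]))  # 15.0
```

Note that 170 and 10 degrees correctly average to 0 (they are only 20 degrees apart across the wrap-around), which a plain arithmetic mean would get wrong.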
##### Frequency map
The frequency of the local ridge line $f_{xy}$ at the point $[x, y]$ is defined as the number of ridges per unit length along a hypothetical segment centered at $[x, y]$ and orthogonal to the local ridge orientation.
- by estimating the frequency at discrete locations arranged in a grid, we can compute a frequency image F: ![[Pasted image 20241127225853.png]]
- a possible approach is to count the average number of pixels between consecutive peaks of gray levels along the direction orthogonal to the local orientation of the ridge line
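The peak-counting approach above can be sketched on a 1-D gray-level profile (a simplified illustration under the assumption that the profile has already been sampled orthogonally to the local ridge orientation; real systems work on an oriented window around $[x, y]$):

```python
import numpy as np

def local_ridge_frequency(profile):
    """Estimate ridge frequency from a 1-D gray-level profile taken
    orthogonally to the local ridge orientation."""
    p = np.asarray(profile, dtype=float)
    # A peak is a sample strictly greater than both of its neighbours.
    peaks = np.flatnonzero((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])) + 1
    if len(peaks) < 2:
        return 0.0
    # Frequency = 1 / (average number of pixels between consecutive peaks).
    avg_period = np.diff(peaks).mean()
    return 1.0 / avg_period

# Synthetic profile with one ridge every 8 pixels:
profile = np.cos(2 * np.pi * np.arange(64) / 8)
print(local_ridge_frequency(profile))  # 0.125
```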
##### Singularities
Most approaches are based on the directional map.
A practical and elegant approach is to use the Poincaré index:
- let G be a vector field: the field associated with the orientation image D of the fingerprint ($[i, j]$ is the position of the element $\theta_{ij}$)
- let C be a curve immersed in G: a closed path defined as an ordered sequence of elements of D, such that $[i, j]$ are internal points
- the Poincaré index $P_{G,C}$ is defined as the total rotation of the vectors of G along C
- $P_{G,C}(i, j)$ is computed by performing the algebraic sum of the orientation differences between adjacent elements of C
![[Pasted image 20241127230718.png]]
![[Pasted image 20241127233105.png]]
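The algebraic sum in the definition above can be sketched directly (a minimal illustration; the function name and the example orientations are made up, and each difference is folded into [-90, 90) degrees since orientations are defined mod 180):

```python
import numpy as np

def poincare_index(angles_deg):
    """Poincare index of a closed path in an orientation field.

    `angles_deg` are the orientations (mod 180 deg) of the elements
    of D met while walking the closed curve C. The index is the
    algebraic sum of the orientation differences between adjacent
    elements, each folded into the range [-90, 90).
    """
    a = np.asarray(angles_deg, dtype=float)
    d = np.diff(np.append(a, a[0]))  # close the path
    d = (d + 90) % 180 - 90          # fold each step into [-90, 90)
    return d.sum()

# Walking around a core, the orientation field turns by +180 degrees;
# around a delta it turns by -180, and it sums to 0 far from singularities.
print(poincare_index([0, 45, 90, 135]))  # 180.0 (core-like loop)
```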
### Minutiae extraction
Many approaches extract minutiae and perform matching based on them, or in combination with them.
In general, minutiae extraction entails:
- **Binarization:** converting a gray-level image into a binary image
- **Thinning:** the binary image undergoes a thinning step that reduces the thickness of the ridge lines to 1 pixel
- **Location:** a scan of the image locates pixels corresponding to the minutiae
![[Pasted image 20241127140214.png]]
To locate minutiae we can analyze the crossing number $$cn(p)=\frac{1}{2}\sum_{i=1}^{8}|val(p_{i\ mod\ 8})-val(p_{i-1})|$$
- $p_0, p_1, \dots, p_7$ are the pixels in the 8-neighborhood of p and $val(p) \in \{0, 1\}$ is the value of pixel p
- a pixel p with $val(p) = 1$:
	- is an internal point of a ridge line if $cn(p)=2$
	- corresponds to a termination if $cn(p)=1$
	- corresponds to a bifurcation if $cn(p)=3$
	- belongs to a more complex minutia if $cn(p) > 3$.
![[Pasted image 20241127140836.png]]
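The crossing number formula is easy to implement on a 3x3 window of the thinned binary image (a minimal sketch; the neighbour ordering is one valid circular walk of the 8-neighborhood):

```python
import numpy as np

def crossing_number(window):
    """Crossing number of the central pixel of a 3x3 binary window
    taken from a thinned fingerprint image."""
    w = np.asarray(window)
    # p0..p7: the 8-neighbourhood of the centre, walked in circular order.
    p = [w[0, 0], w[0, 1], w[0, 2], w[1, 2],
         w[2, 2], w[2, 1], w[2, 0], w[1, 0]]
    # Half the number of 0/1 transitions along the closed walk.
    return sum(abs(p[(i + 1) % 8] - p[i]) for i in range(8)) // 2

# A ridge ending: the centre has a single ridge neighbour.
termination = [[0, 0, 0],
               [0, 1, 1],
               [0, 0, 0]]
print(crossing_number(termination))  # 1

# A bifurcation: three ridge branches meet at the centre.
bifurcation = [[1, 0, 1],
               [0, 1, 0],
               [0, 1, 0]]
print(crossing_number(bifurcation))  # 3
```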
A feature often used is the **ridge count**: the number of ridges intersected by the segment between two points (the points are often chosen as relevant ones, e.g. core and delta).
### Hybrid approach
A hybrid method based on the comparison of minutiae and texture: it combines the minutiae-based representation of fingerprints with a representation based on Gabor filters, which uses local texture information.
![[Pasted image 20241127141648.png]]
##### Image alignment
- extraction of minutiae from both the input and the template to match
- the two sets of minutiae are compared through a point-matching algorithm that preliminarily selects a pair of reference minutiae and then determines the number of matching minutiae pairs using the remaining set of points
- the reference pair that produces the maximum number of matching pairs determines the best alignment
- once minutiae have been aligned, rotation and translation are also known:
	- the rotation parameter is the average rotation of all the individual pairs of corresponding minutiae
	- the translation parameters can be computed from the spatial coordinates of the pair of reference minutiae which produced the best alignment
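The reference-pair search can be sketched as a brute-force point matcher (a simplified illustration assuming translation-only alignment and position-only minutiae; real matchers also use the minutiae angles and estimate rotation, and `align_and_count` is a made-up name):

```python
import numpy as np

def align_and_count(template, probe, tol=8.0):
    """Try every pair of reference minutiae (one per set), translate the
    probe so the pair coincides, and keep the alignment that pairs up
    the most minutiae within `tol` pixels."""
    best = 0
    for t_ref in template:
        for p_ref in probe:
            shifted = probe + (t_ref - p_ref)  # align the reference pair
            used = set()
            pairs = 0
            # Greedily pair each shifted probe minutia with its nearest
            # unused template minutia, if close enough.
            for s in shifted:
                d = np.linalg.norm(template - s, axis=1)
                j = int(np.argmin(d))
                if d[j] <= tol and j not in used:
                    used.add(j)
                    pairs += 1
            best = max(best, pairs)
    return best

t = np.array([[10.0, 10.0], [40.0, 25.0], [70.0, 60.0]])
p = t + np.array([5.0, -3.0])  # translated copy of the template
print(align_and_count(t, p))   # 3
```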
##### Masking and tessellation
After masking the background, the images are normalized by building on them a grid that divides them into a series of non-overlapping windows of the same size. Each window is normalized with reference to a constant mean and variance. The optimal size for 300 DPI is 30x30 pixels, as 30 pixels is the average inter-ridge distance.
![[Pasted image 20241128000431.png]]
##### Feature extraction
To perform feature extraction on the cells resulting from the tessellation, a bank of 8 Gabor filters is used, all with the same frequency but variable orientation. Such filtering produces 8 filtered images for each cell. ![[Pasted image 20241128000533.png]]
We consider the mean absolute deviation (the average absolute deviation of each value from a central value) of the intensity in each filtered cell as a feature, so we have 8 of them per cell (one per Gabor filter); they are concatenated into a feature vector.
The feature values relating to masked regions are not used and are marked as missing in the vector.
**Matching:** the sum of the squared differences between corresponding feature vectors, after discarding the missing values.
The similarity score is combined with the one obtained from the minutiae comparison, using the sum rule of combination.
Recognition is successful if the score is below a threshold.
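The feature step can be sketched in NumPy (a minimal illustration, not the reference implementation: the kernel parameters are arbitrary, the deviation is taken from the cell mean, and filtering is done as a plain valid-mode correlation):

```python
import numpy as np

def gabor_kernel(theta, freq=0.1, sigma=4.0, size=11):
    """Real Gabor kernel at orientation theta (radians); the bank uses
    the same frequency for every filter, only the orientation varies."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    return env * np.cos(2 * np.pi * freq * xr)

def cell_features(cell):
    """8 features per tessellation cell: the mean absolute deviation of
    the response to each of 8 Gabor orientations."""
    feats = []
    for k in range(8):
        kern = gabor_kernel(theta=k * np.pi / 8)
        kh, kw = kern.shape
        # Valid-mode correlation of the cell with the kernel.
        resp = np.array([
            [(cell[i:i + kh, j:j + kw] * kern).sum()
             for j in range(cell.shape[1] - kw + 1)]
            for i in range(cell.shape[0] - kh + 1)
        ])
        feats.append(np.abs(resp - resp.mean()).mean())
    return np.array(feats)

cell = np.random.default_rng(0).normal(size=(30, 30))
print(cell_features(cell).shape)  # (8,)
```

Concatenating these 8-value vectors over all unmasked cells yields the texture part of the hybrid representation.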
### Fingerphotos
**Pros:**
- no special sensor required
- contactless (more hygienic)
**Cons:**
- subject to common problems in image processing: illumination, position with respect to the camera, blurriness, etc.
- some pre-processing steps require more sophisticated approaches.
#### An example of pre-processing
##### Image rotation
Using PCA we detect the largest connected component and compute its orientation, so we can rotate the image and have the finger straight.
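The PCA step can be sketched on a binary mask (a minimal illustration; the connected-component selection is assumed done, and `finger_orientation` is a made-up helper name):

```python
import numpy as np

def finger_orientation(mask):
    """Orientation (degrees) of the main axis of a binary finger mask,
    via PCA on the foreground pixel coordinates: the first principal
    component is the finger's long axis, so rotating the image by the
    negated angle straightens the finger."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                  # centre the point cloud
    cov = pts.T @ pts / len(pts)             # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    main = eigvecs[:, np.argmax(eigvals)]    # first principal axis
    return np.degrees(np.arctan2(main[1], main[0]))

# A horizontal bar: its main axis lies at ~0 degrees (mod 180).
mask = np.zeros((20, 60), dtype=bool)
mask[8:12, 5:55] = True
print(finger_orientation(mask) % 180)
```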
##### Background removal
- the image is converted to grayscale
- Canny edge detection to find contours
- OpenCV findContours() to find the largest-area component
- mask the rest (a mask that is white over the largest contour, black everywhere else)
- Gaussian blur of the mask
- fusion with the original image
##### Finger cropping
- OpenCV extract_finger() + .rectangle() to crop the finger
##### ROI cropping
- based on the height and width of the finger, crop the fingertip
##### ROI enhancement
- normalization of the image
- OpenCV .createCLAHE()
- OpenCV .GaussianBlur()
##### Ridge extraction
- adaptive thresholding
- morphological operators
- thinning
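The thresholding step of the ridge-extraction list above can be sketched in plain NumPy (an illustrative stand-in for OpenCV's adaptive thresholding, with an assumed block size; real pipelines would also apply the morphological and thinning steps):

```python
import numpy as np

def adaptive_threshold(img, block=9, offset=0.0):
    """Mark a pixel as ridge (1) when it is darker than the mean of its
    local block (ridges are dark in fingerprint images). Local means
    are computed with an integral image for speed."""
    img = np.asarray(img, dtype=float)
    pad = block // 2
    padded = np.pad(img, pad, mode='edge')
    # Integral image with a leading zero row/column.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    # Sum of each block x block window centred on every pixel.
    s = (ii[block:block + h, block:block + w]
         - ii[block:block + h, :w]
         - ii[:h, block:block + w]
         + ii[:h, :w])
    local_mean = s / (block * block)
    return (img < local_mean - offset).astype(np.uint8)

# A dark line on a light background is extracted as a ridge:
img = np.ones((9, 9))
img[4, :] = 0
print(adaptive_threshold(img)[4])  # [1 1 1 1 1 1 1 1 1]
```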
### Fake fingerprints
It's not difficult. It's possible to use materials such as gelatine, silicone or latex.
A safety measure is **liveness detection**, possible approaches:
- measuring pulse and temperature
- the bloodstream and its pulsation can be detected with a careful measurement of the light reflected or transmitted through the finger
- live-scan scanners with FTIR (Frustrated Total Internal Reflection) technology use a mechanism of differential acquisition for the ridges and furrows of fingerprints
- a high-resolution scan of fingerprints reveals characteristic details of the pore structure, which are difficult to imitate in an artificial finger
- the characteristic color of the skin changes due to pressure when the finger is pressed on the scanning surface
- the (electric) potential difference between two specific points of the finger
- electric impedance is useful to check the vitality of the finger
- a live finger sweats
### AFIS or human
The problem of digital fingerprint recognition is not fully solved yet! A human expert in manual techniques is still better at comparing fingerprints than an automated system.
However, automated systems are fast, reliable, consistent and low-cost.


@@ -0,0 +1,2 @@
Iris texture is almost completely randotypic, which is why it is useful for recognition.