vault backup: 2024-12-16 02:05:42

This commit is contained in:
Marco Realacci 2024-12-16 02:05:42 +01:00
parent c26179e4fe
commit 7192d14da1
17 changed files with 194 additions and 60 deletions


@@ -94,4 +94,9 @@ This model is useful for improving the accuracy of iris recognition
![[Pasted image 20241128102138.png]]
### NICE competition
stuff
> [!PDF|yellow] [[LEZIONE10_Iris recognition.pdf#page=48&color=yellow|LEZIONE10_Iris recognition, p.48]]
> > Canny filtering
>
> points are considered only if they are adjacent to other edge points


@@ -0,0 +1,136 @@
The idea is to complement the weaknesses of a system with the strengths of another.
Examples:
- **multiple biometric traits** (e.g. signature + fingerprint, used in India, the USA, etc.)
	- the most obvious meaning of multibiometrics
- **multiple instances:** same trait, but acquired from different elements (e.g. two or more different fingers, both irises, both ears, multiple instances of hand geometry...)
- **repeated instances:** same trait, same element, but acquired multiple times
- **multiple algorithms:** same trait, same element, but matched using multiple classifiers
	- exploits the strengths and weaknesses of the different classifiers
- **multiple sensors:** e.g. a fingerprint acquired with both an optical and a capacitive sensor
Where does the fusion happen?
It can happen:
- **at sensor level:**
	- not always feasible
- **at feature level:** fusing the feature vectors before matching
	- not always feasible: the feature vectors should be comparable in nature and size
	- an example is when we have multiple samples of the same trait: in this case they will certainly be comparable
- **score level fusion:** or match level fusion; consists in fusing the (probability) scores or the rankings
	- the most feasible solution
	- each system works by itself
	- the scores need to be comparable: normalization into a common range may be required
- **decision level fusion:** each subsystem takes its own decision, and the separate decisions are then combined (see the dedicated section below)
![[Pasted image 20241212084256.png|500]]
#### Feature level fusion
![[Pasted image 20241212084349.png|600]]
Better results are expected, since much more information is still present
Possible problems:
- incompatible feature set
- feature vector combination may cause "curse of dimensionality"
- a more complex matcher may be required
- combined vectors may include noisy or redundant data.
##### Feature level fusion: serial
example: using SIFT (Scale-Invariant Feature Transform)
Phases:
- feature extraction (SIFT feature set)
- feature normalization: required due to the possible significant differences in the scale of the vector values
Problems to address:
- feature selection / reduction (complete with slide)
- matching
##### Feature level fusion: parallel
parallel combination of the two vectors:
- vector normalization (the shorter vector should be extended if the sizes differ)
- pre-processing of the vectors: weighted combination through the coefficient $\theta$
- further feature processing: PCA, K-L (Karhunen-Loève) expansion, LDA
add CCA
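A minimal numpy sketch of the two combination schemes: serial fusion here normalizes and concatenates the two vectors, while parallel fusion zero-pads the shorter vector and combines the two as a complex vector through the coefficient $\theta$. The complex-vector formulation is one common choice in the parallel feature fusion literature, not necessarily the one on the slides.

```python
import numpy as np

def serial_fusion(x, y):
    """Serial fusion: z-score normalize each feature vector,
    then concatenate them into a single longer vector."""
    xn = (x - x.mean()) / x.std()
    yn = (y - y.mean()) / y.std()
    return np.concatenate([xn, yn])

def parallel_fusion(x, y, theta=0.5):
    """Parallel fusion: zero-pad the shorter vector, then combine
    the two as the complex vector x + i*theta*y."""
    n = max(len(x), len(y))
    x = np.pad(x, (0, n - len(x)))
    y = np.pad(y, (0, n - len(y)))
    return x + 1j * theta * y

# toy feature vectors of different sizes
x, y = np.random.rand(128), np.random.rand(64)
print(serial_fusion(x, y).shape)    # (192,)
print(parallel_fusion(x, y).shape)  # (128,)
```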
#### Score level fusion
![[Pasted image 20241212085003.png]]
**Transformation-based:** scores from different matchers are first normalized into a common domain and then combined using fusion rules.
**Classifier-based:** the scores are treated as features and combined into a feature vector; a further classifier is trained on it (it can be an SVM, a decision tree, a neural network...).
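A minimal sketch of the classifier-based approach, here with a scikit-learn SVM (the score values are made up; any of the classifiers mentioned above would work):

```python
import numpy as np
from sklearn.svm import SVC

# Each row is a feature vector of matcher scores for one probe:
# [score_matcher_1, score_matcher_2]; label 1 = genuine, 0 = impostor.
scores = np.array([[0.91, 0.87], [0.85, 0.90], [0.30, 0.42], [0.22, 0.35]])
labels = np.array([1, 1, 0, 0])

# Train the further classifier on the score vectors.
clf = SVC(probability=True).fit(scores, labels)

# Fused decision for a new probe's pair of matcher scores.
probe = np.array([[0.80, 0.78]])
print(clf.predict(probe))        # [1] -> accept
print(clf.predict_proba(probe))  # class probabilities
```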
##### Fusion rules
**Abstract:** each classifier outputs only a class label.
Majority vote: each classifier votes for a class; the most voted class wins.
**Rank:** each classifier outputs a ranking of the classes.
Borda count:
- each classifier produces a ranking of the classes according to the probability of the pattern belonging to each of them
- the rankings are converted into scores and summed up
- the class with the highest final score is the one chosen by the multi-classifier
E.g., with 4 available positions, the most probable class gets rank 4 and the least probable gets rank 1; the ranks from each classifier are then summed (see the sketch below).
It can also be used in open-set identification, using a threshold to discard low scores (the score being the sum of the ranks).
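A minimal sketch of the Borda count rule, with made-up identity labels and rankings:

```python
from collections import defaultdict

def borda_count(rankings):
    """Each classifier submits a list of class labels ordered from most
    to least probable. With n candidates, the most probable class gets
    rank n and the least probable rank 1; per-class ranks are summed
    and the class with the highest total wins."""
    totals = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, label in enumerate(ranking):
            totals[label] += n - position
    return max(totals, key=totals.get), dict(totals)

# three classifiers ranking four enrolled identities (toy data)
rankings = [["A", "B", "C", "D"],
            ["B", "A", "C", "D"],
            ["A", "C", "B", "D"]]
print(borda_count(rankings))  # ('A', {'A': 11, 'B': 9, 'C': 7, 'D': 3})
```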
**Measurement:** each classifier outputs its classification score
![[Pasted image 20241212090608.png|600]]
Different methods are possible (e.g. sum, weighted sum, mean, product, weighted product, max, min, etc.)
- sum: the sum of the returned confidence vectors is computed, and the pattern is classified according to the highest value (see the sketch below)
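A minimal sketch of the sum and weighted-sum rules, assuming the confidence vectors are already normalized to a common range (all values made up):

```python
import numpy as np

# Rows = classifiers, columns = classes.
confidences = np.array([[0.70, 0.10, 0.15, 0.05],
                        [0.55, 0.25, 0.10, 0.10],
                        [0.40, 0.35, 0.15, 0.10]])

# Sum rule: add the vectors, pick the class with the highest total.
print(confidences.sum(axis=0).argmax())  # 0

# Weighted sum: weigh each classifier, e.g. by its estimated reliability.
weights = np.array([0.5, 0.3, 0.2])
print((weights @ confidences).argmax())  # 0
```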
Scores from different matchers are typically heterogeneous:
- different range
- similarity vs distance
- different distributions
Normalization is required!
But there are issues to consider when choosing a normalization method:
- robustness: the transformation should not be influenced by outliers
- effectiveness: the estimated parameters of the score distribution should approximate the real values as closely as possible
##### Reliability
A reliability measure is computed for each single response of each subsystem before fusing them into a final response. Confidence margins are a possible solution.
Poh and Bengio propose a solution based on FAR and FRR: $M(\Delta) = |FAR(\Delta)-FRR(\Delta)|$
#### Decision level fusion
![[Pasted image 20241212091320.png|600]]
A common way is majority voting, but serial combination (AND) or parallel combination (OR) can also be used.
Be careful when using OR: if a single classifier accepts while the others reject, the subject is accepted anyway (less secure)!
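A minimal sketch of the three combination rules over boolean accept/reject decisions:

```python
def fuse_decisions(decisions, rule="majority"):
    """Fuse boolean accept/reject decisions. AND (serial) is strict:
    every subsystem must accept. OR (parallel) is permissive: a single
    acceptance is enough. Majority takes a vote."""
    if rule == "and":
        return all(decisions)
    if rule == "or":
        return any(decisions)
    return sum(decisions) > len(decisions) / 2  # majority

decisions = [True, False, True]  # three subsystems' decisions
print(fuse_decisions(decisions, "and"))       # False
print(fuse_decisions(decisions, "or"))        # True (the risky case!)
print(fuse_decisions(decisions, "majority"))  # True
```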
#### Template updating - Co-Update method
I got distracted, to be integrated with the slides
#### Data normalization
When minimum and maximum values are known, normalization is trivial.
For this reason, we assumed **not to have** an exact estimate of the maximum value.
We chose the average value in its place, in order to stress the normalization functions even more.
Normalization functions:
- min/max
	- $s'_{k}=\frac{s_{k}-min}{max-min}$
- z-score
	- $s'_{k}=\frac{s_{k}-\mu}{\sigma}$
- median/MAD
	- $s'_{k}=\frac{s_{k}-median}{MAD}$, with $MAD=median(|s_{k}-median|)$
- sigmoid (one common form)
	- $s'_{k}=\frac{1}{1+e^{-(s_{k}-\mu)/\sigma}}$
- tanh (tanh-estimator, one common form)
	- $s'_{k}=\frac{1}{2}\left[\tanh\left(0.01\frac{s_{k}-\mu}{\sigma}\right)+1\right]$
![[Pasted image 20241212094046.png|300]]
The min-max normalization technique performs a "mapping" (shifting + compression/dilation) of the interval between the minimum and maximum values onto the interval $[0,1]$.
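As a rough illustration of the functions above and of the robustness issue, a minimal numpy sketch (the score array is made up, and the sigmoid and tanh variants use the plain sample mean and standard deviation rather than robust estimates):

```python
import numpy as np

def min_max(s):
    """Map scores onto [0, 1]; sensitive to outliers."""
    return (s - s.min()) / (s.max() - s.min())

def z_score(s):
    """Zero mean, unit variance; assumes roughly Gaussian scores."""
    return (s - s.mean()) / s.std()

def median_mad(s):
    """Robust variant: median and median absolute deviation."""
    med = np.median(s)
    mad = np.median(np.abs(s - med))
    return (s - med) / mad

def sigmoid(s):
    """Squash scores into (0, 1) with a logistic curve."""
    return 1 / (1 + np.exp(-(s - s.mean()) / s.std()))

def tanh_norm(s):
    """Tanh-estimator: maps into (0, 1), fairly insensitive to outliers."""
    return 0.5 * (np.tanh(0.01 * (s - s.mean()) / s.std()) + 1)

scores = np.array([0.2, 0.5, 0.7, 0.9, 3.0])  # 3.0 is an outlier
print(min_max(scores))     # the outlier squashes all other scores toward 0
print(median_mad(scores))  # robust: the other scores keep their spread
```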
![[Pasted image 20241212093902.png|200]]
![[Pasted image 20241212093927.png|200]]
![[Pasted image 20241212093943.png|200]]
![[Pasted image 20241212094000.png|200]]
![[Pasted image 20241212094016.png|200]]


@@ -0,0 +1 @@
score level fusion
