example: use SIFT (Scale-Invariant Feature Transform)
Phases:
- feature extraction (SIFT feature set)
- feature normalization: required because the scales of the two vectors' values may differ significantly
- a single fused vector is created by concatenating the two feature vectors (a minimal sketch follows this list)
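
A minimal sketch of the serial (concatenation) fusion above, assuming min-max feature normalization (the note does not fix the normalization method) and NumPy vectors:

```python
import numpy as np

def serial_fusion(x, y):
    """Normalize two feature vectors, then concatenate them (serial fusion)."""
    def min_max(v):
        v = np.asarray(v, dtype=float)
        return (v - v.min()) / (v.max() - v.min())  # rescale values into [0, 1]
    # concatenation yields a single fused feature vector
    return np.concatenate([min_max(x), min_max(y)])
```
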
Problems to address:
- feature selection / reduction (complete with slide)
- matching

- **feature selection / reduction**
    - it is more efficient to select a few features rather than keeping the whole vector; usable techniques (sketched after this list) include
        - **k-means clustering**, keeping only the cluster centers
            - performed after linking the two normalized vectors
        - **neighborhood elimination**
            - points within a certain distance of each other are eliminated
            - performed before linking, on the single vectors
        - **points belonging to specific regions**
            - only points in specific regions of the trait (e.g. face, nose, mouth...) are maintained
- **matching**
    - **point pattern matching**
        - method to find the number of paired "points" between the probe vector and the gallery one
        - two points are paired if their distance is smaller than a threshold
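
A minimal sketch of the k-means reduction and of point pattern matching, assuming the features are stored as rows of a NumPy array; the `k` and `threshold` parameters are illustrative, not from the source:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_reduce(features, k):
    """Reduce a set of feature points to the k cluster centers."""
    km = KMeans(n_clusters=k, n_init=10).fit(features)
    return km.cluster_centers_

def point_pattern_match(probe, gallery, threshold):
    """Count probe points that can be paired with a gallery point:
    a point is paired if its nearest gallery point is closer than `threshold`."""
    pairs = 0
    for p in probe:
        dists = np.linalg.norm(gallery - p, axis=1)  # distances to all gallery points
        if dists.min() < threshold:
            pairs += 1
    return pairs
```
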
##### Feature level fusion: parallel
parallel combination of the two vectors:
- vector normalization (shorter should be extended if size is different)
- pre-processing of vectors: weighted combination through the coefficient $\theta$
- further feature processing: PCA, L-L expansion, LDA

- **vector normalization**
    - the shorter vector is extended to match the size of the other one
    - e.g. zero-padding
- **pre-processing of vectors**
    - step 1: transform the vectors into unit vectors (dividing them by their L2 norm)
    - step 2: weighted combination through the coefficient $\theta$, based on the lengths of X and Y
    - we can then use X as the real part and Y as the imaginary part of the final vector (see the sketch below)
- **further feature processing**
    - using linear techniques like PCA, L-L expansion, LDA
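
A minimal sketch of the parallel combination, assuming the weighting is applied as $Z = X + i\theta Y$ (one common formulation; the note only says $\theta$ depends on the lengths of X and Y):

```python
import numpy as np

def parallel_fusion(x, y, theta):
    """Combine two feature vectors into one complex vector Z = X + i*theta*Y."""
    n = max(len(x), len(y))
    # vector normalization: zero-pad the shorter vector to the common length
    x = np.pad(np.asarray(x, dtype=float), (0, n - len(x)))
    y = np.pad(np.asarray(y, dtype=float), (0, n - len(y)))
    # step 1: turn both into unit vectors via the L2 norm
    x /= np.linalg.norm(x)
    y /= np.linalg.norm(y)
    # step 2: weighted combination; X is the real part, Y the imaginary part
    return x + 1j * theta * y
```
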
add CCA
##### Feature level fusion: CCA
The idea is to find a pair of transformations that maximizes the correlation between the two sets of features
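
A minimal sketch using scikit-learn's CCA; the matrices, dimensions, and the final concatenation strategy are illustrative assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.random((100, 64))  # hypothetical features from modality 1
Y = rng.random((100, 32))  # hypothetical features from modality 2

# find paired linear transformations whose projections are maximally correlated
cca = CCA(n_components=16)
Xc, Yc = cca.fit_transform(X, Y)

fused = np.concatenate([Xc, Yc], axis=1)  # fuse the correlated projections
```
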
#### Score level fusion
![[Pasted image 20241212085003.png]]

Normalization functions:
![[Pasted image 20241212094046.png|300]]
The Min-max normalization technique performs a "mapping" (shifting + compression/dilation) of the interval between the minimum and maximum score values onto the interval [0, 1]
Pro: range between 0 and 1
Con: the minimum and maximum of each subsystem's scores must be known
![[Pasted image 20241212093902.png|200]]
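
A minimal sketch, assuming the min and max are known per subsystem:

```python
import numpy as np

def min_max_norm(scores, s_min, s_max):
    """Map scores from [s_min, s_max] onto [0, 1]."""
    return (np.asarray(scores, dtype=float) - s_min) / (s_max - s_min)
```
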
Z-score: standardization by mean and variance, widely used
Con: does not bring the score into a fixed range
![[Pasted image 20241212093927.png|200]]
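
A minimal sketch:

```python
import numpy as np

def z_score_norm(scores):
    """Standardize scores by mean and standard deviation."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()  # result has no fixed range
```
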
median/MAD: subtract the median and divide by the MAD (the median of the absolute deviations from the median)
Con: works poorly when the score distribution is not Gaussian. It does not preserve the original distribution and does not guarantee a fixed range either :/
![[Pasted image 20241212093943.png|200]]
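
A minimal sketch:

```python
import numpy as np

def median_mad_norm(scores):
    """Robust normalization: (s - median) / MAD."""
    s = np.asarray(scores, dtype=float)
    med = np.median(s)
    mad = np.median(np.abs(s - med))  # median absolute deviation
    return (s - med) / mad
```
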
Sigmoid: brings scores into the open interval (0, 1)
Con 1: distorts heavily toward the extremes
Con 2: depends on the parameters k and c, which in turn depend on the score distribution
![[Pasted image 20241212094000.png|200]]
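
A minimal sketch; here k is assumed to control the steepness and c the center of the sigmoid:

```python
import numpy as np

def sigmoid_norm(scores, k, c):
    """Map scores into (0, 1); k and c must be tuned to the score distribution."""
    return 1.0 / (1.0 + np.exp(-k * (np.asarray(scores, dtype=float) - c)))
```
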
Tanh: guarantees the range (0, 1)
Con: tends to concentrate values excessively around the center (0.5).
![[Pasted image 20241212094016.png|200]]
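
A minimal sketch of the common tanh-estimator form; the 0.01 factor and the use of mean/std estimates come from the usual Hampel-style formulation and are assumptions, not from the note:

```python
import numpy as np

def tanh_norm(scores, mu, sigma):
    """Map scores into (0, 1); the 0.01 factor compresses values toward 0.5."""
    s = np.asarray(scores, dtype=float)
    return 0.5 * (np.tanh(0.01 * (s - mu) / sigma) + 1.0)
```
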