vault backup: 2024-12-30 01:50:33

Marco Realacci 2024-12-30 01:50:33 +01:00
parent c2d5c0e713
commit 9fc8ae474f
17 changed files with 128 additions and 66 deletions


@@ -16,7 +16,7 @@
"file": "Biometric Systems/slides/Biometric_System___Notes.pdf",
"page": 17,
"left": -85,
"top": 424,
"top": 428,
"zoom": 1.5
},
"icon": "lucide-file-text",
@@ -27,16 +27,14 @@
"id": "dee6b7fc799ba9d4",
"type": "leaf",
"state": {
"type": "pdf",
"type": "markdown",
"state": {
"file": "Biometric Systems/slides/LEZIONE4_Face introduction and localization.pdf",
"page": 25,
"left": -373,
"top": 303,
"zoom": 0.8
"file": "Foundation of data science/notes/3.1 Multi Class Logistic Regression.md",
"mode": "source",
"source": false
},
"icon": "lucide-file-text",
"title": "LEZIONE4_Face introduction and localization"
"icon": "lucide-file",
"title": "3.1 Multi Class Logistic Regression"
}
},
{
@@ -48,55 +46,15 @@
"file": "Biometric Systems/slides/LEZIONE9_Ear recognition.pptx.pdf",
"page": 17,
"left": -153,
"top": 848,
"top": 845,
"zoom": 1.3
},
"icon": "lucide-file-text",
"title": "LEZIONE9_Ear recognition.pptx"
}
},
{
"id": "fac915dd40926979",
"type": "leaf",
"state": {
"type": "pdf",
"state": {
"file": "Biometric Systems/slides/LEZIONE8_Face antispoofing.pdf",
"page": 33,
"left": -226,
"top": 521,
"zoom": 1
},
"icon": "lucide-file-text",
"title": "LEZIONE8_Face antispoofing"
}
},
{
"id": "fdcc02f6bb8bc429",
"type": "leaf",
"state": {
"type": "image",
"state": {
"file": "Biometric Systems/images/Pasted image 20241228171617.png"
},
"icon": "lucide-image",
"title": "Pasted image 20241228171617"
}
},
{
"id": "2d7d1ebb465ffaef",
"type": "leaf",
"state": {
"type": "image",
"state": {
"file": "Biometric Systems/images/Pasted image 20241228174722.png"
},
"icon": "lucide-image",
"title": "Pasted image 20241228174722"
}
}
],
"currentTab": 5
"currentTab": 1
}
],
"direction": "vertical"
@@ -266,44 +224,44 @@
"companion:Toggle completion": false
}
},
"active": "2d7d1ebb465ffaef",
"active": "dee6b7fc799ba9d4",
"lastOpenFiles": [
"Foundation of data science/notes/4 L1 and L2 normalization - Lasso and Ridge.md",
"Foundation of data science/notes/3 Logistic Regression.md",
"Biometric Systems/slides/LEZIONE4_Face introduction and localization.pdf",
"Biometric Systems/slides/Biometric_System___Notes.pdf",
"Biometric Systems/slides/LEZIONE9_Ear recognition.pptx.pdf",
"Foundation of data science/slides/multiclass_crossentropy_biasvariance.pdf",
"Foundation of data science/notes/3.1 Multi Class Logistic Regression.md",
"Foundation of data science/slides/binary_classification.pdf",
"Foundation of data science/slides/FDS_linear_regression_w_notes.pdf",
"Foundation of data science/slides/IP CV Basics.pdf",
"Foundation of data science/slides/FDS_intro_new.pdf",
"Foundation of data science/slides/Variational Autoencoders.pdf",
"Foundation of data science/slides/Traditional discriminative approaches.pdf",
"Biometric Systems/images/Pasted image 20241228171617.png",
"Biometric Systems/images/Pasted image 20241228174722.png",
"Biometric Systems/notes/4. Face detection.md",
"Biometric Systems/notes/6. Face recognition 2D.md",
"Biometric Systems/notes/11. Fingerprints.md",
"Biometric Systems/notes/8 Face anti spoofing.md",
"Biometric Systems/slides/LEZIONE11_Fingerprints.pdf",
"Biometric Systems/slides/Biometric_System___Notes.pdf",
"Biometric Systems/slides/LEZIONE4_Face introduction and localization.pdf",
"Biometric Systems/slides/LEZIONE9_Ear recognition.pptx.pdf",
"Biometric Systems/slides/LEZIONE8_Face antispoofing.pdf",
"Biometric Systems/notes/7. Face recognition 3D.md",
"Biometric Systems/slides/LEZIONE8_Face antispoofing 1.pdf",
"Biometric Systems/notes/12. Iris recognition.md",
"Biometric Systems/notes/13. Multi biometric.md",
"Biometric Systems/slides/LEZIONE5_NEW_More about face localization.pdf",
"Biometric Systems/notes/2. Performance indexes.md",
"Biometric Systems/notes/3. Recognition Reliability.md",
"Biometric Systems/notes/9. Ear recognition.md",
"Biometric Systems/slides/LEZIONE3_Affidabilita_del_riconoscimento.pdf",
"Biometric Systems/slides/LEZIONE2_Indici_di_prestazione.pdf",
"Biometric Systems/slides/Riassunto_2021_2022.pdf",
"Foundation of data science/notes/9 Random Forest.md",
"Foundation of data science/notes/9 Decision tree.md",
"Foundation of data science/notes/9 Gradient Boosting.md",
"Foundation of data science/notes/8 Variational Autoencoders.md",
"Foundation of data science/notes/7 Autoencoders.md",
"Foundation of data science/notes/4 L1 and L2 normalization - Lasso and Ridge.md",
"Biometric Systems/images/Pasted image 20241217025904.png",
"Biometric Systems/images/Pasted image 20241217030157.png",
"Foundation of data science/notes/1 CV Basics.md",
"Foundation of data science/notes/5 Neural Networks.md",
"Foundation of data science/notes/6 PCA.md",
"Foundation of data science/notes/3.2 LLM generated from notes.md",
"Foundation of data science/notes/3.1 Multi Class Logistic Regression.md",
"Foundation of data science/notes/3 Logistic Regression.md",
"Foundation of data science/notes/2 Linear Regression.md",
"Biometric Systems/notes/dati da considerare.md",
"Biometric Systems/notes/multi bio.md",


@@ -107,6 +107,6 @@ We can assume that between the two there is a relationship defined as: $$Y = f(X) + \epsilon$$
We can estimate a model $\hat{f}(X)$ of $f(X)$ using linear regression or other techniques. In this case our expected squared prediction error will be:
$$\text{Err}(x) = E[(Y - \hat{f}(x))^2]$$
The prediction error at a point $x$ can be decomposed into bias, variance, and irreducible error: $$\text{Err}(x) = \left(E[\hat{f}(x)] - f(x)\right)^2 + E\left[(\hat{f}(x) - E[\hat{f}(x)])^2\right] + \sigma^2_\epsilon$$$$\text{Err}(x) = \text{Bias}^2 + \text{Variance} + \text{Irreducible Error}$$
The irreducible error represents the noise in the true model, which cannot be reduced by any model. In real situations, there is a tradeoff between minimizing bias and minimizing variance.
The irreducible error represents the noise in the true model, which cannot be reduced by any model. In real situations, there is a trade-off between minimizing bias and minimizing variance.
![[Pasted image 20241029125726.png]]
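To make the decomposition concrete, here is a minimal simulation sketch (the choice of $f$, the deliberately too-simple linear model, and all constants are illustrative assumptions, not part of the note): refit on fresh noisy samples and measure the bias and variance of the prediction at a fixed point.

```python
import numpy as np

# Assumed toy setup: true function f, noise level sigma, evaluation point x0.
rng = np.random.default_rng(0)
f = lambda x: np.sin(x)
sigma = 0.3          # irreducible error is sigma^2
x0 = 1.0

preds = []
for _ in range(2000):
    X = rng.uniform(0.0, 3.0, 30)
    Y = f(X) + rng.normal(0.0, sigma, X.shape)   # Y = f(X) + eps
    coef = np.polyfit(X, Y, deg=1)               # underfitting linear model
    preds.append(np.polyval(coef, x0))

preds = np.array(preds)
bias2 = (preds.mean() - f(x0)) ** 2    # (E[f_hat(x0)] - f(x0))^2
variance = preds.var()                 # E[(f_hat(x0) - E[f_hat(x0)])^2]
print(bias2, variance, sigma**2)       # Err(x0) ~ bias2 + variance + sigma^2
```

Raising the polynomial degree shrinks the bias term and inflates the variance term, which is exactly the trade-off described above.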


@@ -0,0 +1,104 @@
The aliasing effect on a downsampled image can be reduced using which of the following methods on the original image?
Possible answers:
- reducing the high-frequency components of the image by sharpening the image
- increasing the high-frequency components of the image by sharpening the image
- increasing the high-frequency components of the image by blurring the image
- reducing the high-frequency components of the image by blurring the image

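A hedged sketch for the question above (the image and factor are made up): blurring is a low-pass filter, so it removes the high frequencies that would otherwise alias when subsampling.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(256, 256)    # stand-in grayscale image
factor = 4                        # downsampling factor
smoothed = gaussian_filter(img, sigma=factor / 2)  # attenuate high frequencies
downsampled = smoothed[::factor, ::factor]         # subsample with less aliasing
```
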
Which of the following operations is performed on the pixels when sharpening an image in the spatial domain?
Possible answers:
- Differentiation
- Mean
- Median
- Integration

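A hedged sketch for the question above (an unsharp-masking variant, assumed setup): spatial-domain sharpening is based on differentiation, here via the Laplacian, a second-derivative operator.

```python
import numpy as np
from scipy.ndimage import laplace

img = np.random.rand(128, 128)    # stand-in grayscale image
sharpened = img - laplace(img)    # subtracting the 2nd derivative boosts edges
```
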
Consider a binary classifier that yields for 5 samples the probability scores: 0.9, 0.6, 0.5, 0.2, 0.05. The corresponding correct labels of the samples are 1, 0, 1, 0, 1. What is the threshold that yields the best precision if the application requires a recall larger than or equal to 0.3?
Possible answers:
- 0.95
- 0.8
- 0.55
- 0.3
- 0.1
- 0.03

With reference to the above problem, what is the largest achievable precision if the application requires a recall larger than or equal to 0.3?
Possible answers:
- 1
- 0.95
- 0.8
- 0.55
- 0.3
- 0.1
- 0.03

With reference to the above problem, what is the value of precision and recall if the score threshold is set to 0.3?
Possible answers:
- P=1.0, R=1.0
- P=0.667, R=0.667
- P=0.667, R=1.0
- P=0.333, R=0.333
- P=0.333, R=0.667

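A hedged worked check for the three threshold questions above, assuming a score greater than or equal to the threshold counts as a positive prediction:

```python
scores = [0.9, 0.6, 0.5, 0.2, 0.05]
labels = [1, 0, 1, 0, 1]

for t in [0.95, 0.8, 0.55, 0.3, 0.1, 0.03]:
    preds = [1 if s >= t else 0 for s in scores]
    tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)
    fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)
    fn = sum(1 for p, l in zip(preds, labels) if p == 0 and l == 1)
    precision = tp / (tp + fp) if tp + fp else float("nan")
    recall = tp / (tp + fn)
    print(f"t={t}: precision={precision:.3f}, recall={recall:.3f}")
```

For example, at t=0.3 the positives are the samples scored 0.9, 0.6 and 0.5, giving TP=2 and FP=1, hence P=0.667 and R=0.667.
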
The relationship between the number of beers consumed (x) and blood alcohol content (y) was studied in 16 male college students using least squares regression. The following regression equation was obtained from this study: y = -0.0127 + 0.0180x. This equation implies that:
Possible answers:
- each beer consumed increases blood alcohol by 1.27%
- on average it takes 1.8 beers to increase blood alcohol content by 1%
- each beer consumed increases blood alcohol by an average amount of 1.8%
- each beer consumed increases blood alcohol by exactly 0.018

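A small hedged check of the slope interpretation (the regression equation itself is given in the question):

```python
predict = lambda beers: -0.0127 + 0.0180 * beers
print(predict(5) - predict(4))   # 0.018: the average change per extra beer
```
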
Let us consider multivariate linear regression, with n the number of features, m the number of data samples, and Theta the vector of parameters. Which of the following is likely true?
Possible answers:
- Optimizing by gradient descent yields n local maxima
- The dimensionality of Theta is m+1, if one additionally considers the bias/offset term
- Feature scaling aids the model optimization only if n>1
- One may devise new features by considering the multiplication of pairs of the original features only if the product is non-zero

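A hedged shape check for the dimensionality option above (m and n are made-up values): with a bias column prepended, the design matrix is (m, n+1), so Theta has n+1 entries, not m+1.

```python
import numpy as np

m, n = 50, 3                                             # samples, features
X = np.hstack([np.ones((m, 1)), np.random.rand(m, n)])   # bias column + features
theta = np.zeros(X.shape[1])                             # n+1 parameters
print(X.shape, theta.shape)                              # (50, 4) (4,)
```
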
What do large values of the log-likelihood indicate?
Possible answers:
- That there are a greater number of explained vs. unexplained observations
- That the model fits the data well
- That as the predictor variable increases, the likelihood of the outcome occurring decreases
- That the model is a poor fit of the data

With reference to neural network classifiers, which of the following is correct?
Possible answers:
- (Fully-connected) Neural network classifiers are generative models
- (Fully-connected) Neural network classifiers are linear classifiers, irrespective of the number of layers
- Neural network classifiers leverage a softmax function to convert class scores to normalized probabilities
- The gradient check procedure leverages the analytic computation of gradients in large networks

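A minimal sketch of the softmax option above (the scores are made up): raw class scores become normalized probabilities.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))  # shift by max for numerical stability
    return e / e.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # non-negative, sums to 1
```
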
Consider the neural network that corresponds to this equation: loss = (ReLU((x*y) + z) - t)^2. Assume ^2 means "to the power of 2", and consider x=1, y=2, z=1, t=4. What is the loss value?
Possible answers:
- 1
- -1
- -2
- 2

With reference to the above problem, what is the derivative of the loss with respect to t?
Possible answers:
- 1
- -1
- -2
- 2

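A hedged worked check for the two questions above, in plain Python: the forward pass gives loss = 1, and since the ReLU is active the chain rule gives d(loss)/dt = 2.

```python
x, y, z, t = 1.0, 2.0, 1.0, 4.0
r = max(0.0, x * y + z)        # ReLU(1*2 + 1) = 3
loss = (r - t) ** 2            # (3 - 4)^2 = 1
dloss_dt = -2 * (r - t)        # d/dt (r - t)^2 = -2(r - t) = 2
print(loss, dloss_dt)          # 1.0 2.0
```
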
Consider Convolutional Neural Networks (ConvNets). Which of the following is correct?
Possible answers:
- ConvNets leverage fully connected layers to reduce the amount of computation and parameters
- In ConvNets, activation features remain local throughout the network, due to the limited kernel sizes
- The number of activation maps after each layer depends on the input number of channels
- Max pooling is applied separately to each channel's activation map

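A hedged shape check for the max-pooling option above (array sizes are made up): 2x2 max pooling acts on each channel's map independently, so the channel count is unchanged.

```python
import numpy as np

x = np.random.rand(8, 8, 3)                          # H x W x C activations
pooled = x.reshape(4, 2, 4, 2, 3).max(axis=(1, 3))   # 2x2 max pool per channel
print(pooled.shape)                                  # (4, 4, 3): C preserved
```
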
Consider self-attention mechanisms in Transformer models. Which of the following is correct?
Possible answers:
- Self-attention allows the model to weigh the importance of different input tokens independently.
- Self-attention is computationally efficient for long sequences compared to 1-D convolutional neural networks.
- Self-attention layers require sequential processing of input tokens.
- Self-attention only considers the local context of each token.

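A minimal hedged sketch of scaled dot-product self-attention (single head, no learned projections, made-up input) to ground the options above: every token attends to every other token in one matrix product, with no sequential scan.

```python
import numpy as np

def self_attention(X):               # X: (seq_len, d)
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)    # all token pairs at once
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # row-wise softmax
    return w @ X                     # weighted mix over the whole sequence

print(self_attention(np.random.rand(5, 4)).shape)   # (5, 4)
```

Note the (seq_len, seq_len) score matrix: that quadratic cost is why self-attention is not cheap for long sequences.
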
Consider Principal Component Analysis (PCA). Which of the following is a correct statement about its limitations?
Possible answers:
- PCA is highly effective in handling nonlinear relationships between features.
- PCA is robust to outliers and noisy data.
- PCA can be computationally expensive for high-dimensional datasets.
- PCA can be used to identify the underlying causal relationships between features.

Which of the following is a common choice for the prior distribution in a Variational Autoencoder (VAE)?
Possible answers:
- Normal distribution: its simplicity and analytical tractability make it a popular choice.
- Uniform distribution: it ensures that all latent space points are equally likely, preventing mode collapse.
- Exponential distribution: it can capture the inherent skewness in certain types of data.
- Beta distribution: it is suitable for modeling probabilities and proportions.

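A hedged sketch motivating the normal-prior option above (the mean and log-variance values are made up): with a standard normal prior, the KL term of the VAE objective has a simple closed form, which is what "analytical tractability" refers to.

```python
import numpy as np

mu = np.array([0.3, -0.2])           # encoder mean (illustrative)
log_var = np.array([-0.5, 0.1])      # encoder log-variance (illustrative)
# KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions:
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
print(kl)
```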