1. Given that the decision boundary separating two classes is linear, what can be inferred about
the discriminant functions of the two classes?
a. Both discriminant functions must necessarily be linear
b. At least one of the discriminant functions is linear
c. Both discriminant functions can be non-linear
Ans: c (only the difference of the two discriminant functions must be linear; adding the same non-linear term to both leaves the decision boundary unchanged, so each can be non-linear)
2. We discussed the concept of masking in the video lectures. What is the minimum number of
basis transformations required to avoid masking for K classes?
a. K
b. K-1
c. K
d. K(K-1)/2
Ans: a
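A minimal sketch of the masking effect (toy 1-D data assumed, not from the lectures): with three collinear classes, least-squares regression on class indicators using only a linear basis masks the middle class; enlarging the basis to the K = 3 functions {1, x, x²} resolves it.

import numpy as np

# Three 1-D classes centred at -4, 0 and 4 (assumed toy data).
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(c, 0.5, 50) for c in (-4.0, 0.0, 4.0)])
y = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[y]                                   # one-hot indicator matrix

def errors(Phi):
    # Least-squares fit of the indicator matrix on basis Phi,
    # classifying each point by the largest fitted indicator.
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return int(np.sum(np.argmax(Phi @ W, axis=1) != y))

ones = np.ones_like(X)
print(errors(np.column_stack([ones, X])))          # linear basis: middle class masked
print(errors(np.column_stack([ones, X, X**2])))    # quadratic term added: masking resolved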
3. Consider the functions f1(x) and f2(x) shown in the figure below (figure not reproduced here).
Which of the following is correct?
Ans: a (0 < β < α)
4. Which of the following is correct about linear discriminant analysis?
a. It minimizes the variance between the classes relative to the within-class variance
b. It maximizes the within-class variance relative to the variance between the classes
c. It maximizes the variance between the classes relative to the within-class variance
d. None of these
Ans: c
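For reference (standard textbook form, not quoted from the lectures), option (c) is Fisher's criterion: LDA picks the projection direction w that maximizes J(w) = (w^T S_B w) / (w^T S_W w), where S_B is the between-class scatter matrix and S_W the within-class scatter matrix.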
5. Consider the case where two classes follow Gaussian distributions centered at
(−1, 2) and (1, 4), each with identity covariance matrix. Which of the following is the separating decision boundary?
(a) y − x = 3
(b) x + y = 3
(c) x + y = 6
(d) (b) and (c) are possible
(e) None of these
(f) Can not be found from the given information
Ans: b (assuming equal priors; see the worked check below)
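Worked check, under the standard equal-priors assumption (the question does not state the priors): with identity covariances, the boundary is the perpendicular bisector of the segment joining the two means. Equating squared distances, (x + 1)² + (y − 2)² = (x − 1)² + (y − 4)², which reduces to 4x + 4y = 12, i.e. x + y = 3. Unequal priors would only translate this line parallel to itself, which is how a line such as x + y = 6 could arise.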
6. Consider the following data with two classes; the colors indicate the two classes (figure not reproduced here).
Which of the following models (with NO additional complexity) can achieve zero training error for
classification?
a. LDA
b. PCA
c. Logistic regression
d. None of these
Ans: c
7. Which of the following techniques is well known to utilize class labels in feature selection for
classification?
(a) LDA
(b) PCA (by dimensionality reduction)
(c) both (a) and (b)
(d) None of these
Ans: a
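A minimal sketch (toy data assumed, since the question gives none) of the difference at the API level: scikit-learn's LDA consumes the class labels when fitting, whereas PCA never sees them.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Two assumed Gaussian blobs, 50 two-dimensional points each.
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)

z_pca = PCA(n_components=1).fit_transform(X)       # unsupervised: ignores y
z_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)  # supervised: uses y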
8. We discussed the use of MLE for estimating the parameters of the logistic regression model.
Which of the following assumptions did we use to derive the likelihood function?
a. independence among the class labels
b. independence among the training samples
c. independence among the parameters of the model
d. None of these
Ans: b
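A minimal sketch of what that assumption buys (standard Bernoulli form, written here for illustration): independence across training samples lets the joint likelihood factorize into a product, so the log-likelihood becomes a per-sample sum.

import numpy as np

def log_likelihood(beta, X, y):
    # With i.i.d. samples the joint likelihood is a product of
    # per-sample Bernoulli terms, hence a sum after taking logs.
    p = 1.0 / (1.0 + np.exp(-X @ beta))            # sigmoid P(y = 1 | x)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))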
9. Consider the following distribution of training data (figure not reproduced here):
Which method would you choose for dimensionality reduction?
(a) Linear Discriminant Analysis
(b) Principal Component Analysis
(c) (a) and (b) perform very poorly, so one has to choose Quadratic Discriminant Analysis
(d) (a) or (b) are equally good
(e) None of these
Ans: b