Sajjad Rahman
Lagrange Multipliers

Q1. Lagrange Multipliers

What condition holds at the optimum?

A. ∇f(x) = 0
B. ∇f(x) = Σ λᵢ∇gᵢ(x)
C. g(x) = 1
D. λᵢ = 0

Answer: B
👉 At the optimum, the gradient of f is a linear combination of the constraint gradients — the gradients align
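A minimal numeric check of the alignment condition, using a made-up problem (not from the quiz): maximise f(x, y) = x + y on the unit circle g(x, y) = x² + y² − 1 = 0, whose optimum is at x = y = 1/√2.

```python
import math

# Hypothetical example: maximise f(x, y) = x + y subject to x^2 + y^2 = 1.
# At the optimum x = y = 1/sqrt(2), the condition ∇f = λ∇g should hold.
x = y = 1 / math.sqrt(2)          # constrained optimum
grad_f = (1.0, 1.0)               # ∇f = (1, 1) everywhere
grad_g = (2 * x, 2 * y)           # ∇g = (2x, 2y)
lam = grad_f[0] / grad_g[0]       # λ that aligns the first components

# ∇f(x*) = λ ∇g(x*) holds in every component:
assert all(abs(f_i - lam * g_i) < 1e-12 for f_i, g_i in zip(grad_f, grad_g))
print(lam)  # ≈ 0.7071, i.e. 1/sqrt(2)
```

The single λ makes both components match at once, which is exactly what "gradients align" means geometrically.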


Q2. KKT Conditions

Which is TRUE?

A. αᵢ < 0
B. αᵢgᵢ(x*) = 1
C. αᵢ ≥ 0
D. Constraints are ignored

Answer: C


Q3. Hyperplane Definition

A hyperplane satisfies:

A. w·x + b = 0
B. x² + y² = 1
C. ∇x = 0
D. y = mx²

Answer: A


Q4. Role of w in SVM

The vector w determines:

A. Bias
B. Orientation of hyperplane
C. Number of classes
D. Dataset size

Answer: B


Q5. Role of b

The bias term controls:

A. Orientation
B. Distance metric
C. Position of hyperplane
D. Kernel

Answer: C


Q6. Support Vectors

Support vectors are:

A. All training points
B. Points far from boundary
C. Points on margin
D. Random points

Answer: C


Q7. Margin Maximisation

SVM maximises:

A. ||w||
B. 1 / ||w||
C. Number of features
D. Training error

Answer: B
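A quick sketch of why maximising 1/||w|| widens the margin, with an assumed weight vector: the margin hyperplanes w·x + b = ±1 each sit at distance 1/||w|| from the decision boundary, so the full margin width is 2/||w||.

```python
import math

# Assumed weight vector, for illustration only.
w = (3.0, 4.0)
norm_w = math.hypot(*w)              # ||w|| = 5
half_margin = 1 / norm_w             # distance from boundary to either margin
print(half_margin, 2 * half_margin)  # 0.2 and 0.4 — shrinking ||w|| widens both
```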


Q8. Constraint for Correct Classification

Which is correct?

A. yᵢ(w·xᵢ + b) ≥ 1
B. w·x = 0
C. yᵢ = 0
D. xᵢ ≥ 1

Answer: A
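The constraint from Q8 can be checked directly. A small sketch with made-up numbers: a point well outside the margin satisfies yᵢ(w·xᵢ + b) ≥ 1, while a point on the boundary does not.

```python
# Hypothetical w, b, and points — chosen only to illustrate the constraint.
def functional_margin(w, b, x, y):
    """Return y * (w·x + b); correct classification with margin requires >= 1."""
    return y * (sum(wi * xi for wi, xi in zip(w, x)) + b)

w, b = (1.0, 1.0), -1.0
print(functional_margin(w, b, (2.0, 2.0), +1))   # 3.0 -> satisfies the constraint
print(functional_margin(w, b, (0.5, 0.5), +1))   # 0.0 -> violates it (on the boundary)
```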


Q9. Dual Problem Uses

The dual formulation depends on:

A. Distances
B. Inner products
C. Gradients
D. Labels only

Answer: B


Q10. Kernel Trick

What does a kernel do?

A. Reduces data size
B. Computes inner product in feature space
C. Removes noise
D. Normalises data

Answer: B
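The kernel trick from Q10 can be verified on a toy case: the homogeneous polynomial kernel k(x, y) = (x·y)² on 2-D inputs equals an explicit inner product in the 3-D feature space φ(x) = (x₁², √2·x₁x₂, x₂²). The kernel gets the same number without ever building φ. The vectors below are arbitrary illustration values.

```python
import math

def kernel(x, y):
    # Homogeneous polynomial kernel of degree 2 on 2-D inputs.
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    # The explicit feature map the kernel implicitly computes in.
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

x, y = (1.0, 2.0), (3.0, 0.5)
lhs = kernel(x, y)                                  # inner product via the kernel
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))    # inner product in feature space
assert abs(lhs - rhs) < 1e-9
print(lhs)  # 16.0
```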


Q11. Kernel Function

k(x, y) represents:

A. Distance
B. Similarity
C. Label
D. Error

Answer: B


Q12. Soft Margin Parameter C

Large C leads to:

A. Wider margin
B. More misclassification
C. Narrow margin, fewer errors
D. No effect

Answer: C


Q13. Small C Leads To

A small C results in:

A. Narrow margin
B. Wide margin
C. Overfitting
D. No classification

Answer: B


Q14. Slack Variable ξᵢ

If ξᵢ > 1:

A. Correct classification
B. Margin violation only
C. Misclassification
D. No effect

Answer: C
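The three slack regimes from Q14, spelled out as a small helper (the threshold values follow the standard soft-margin formulation; the sample ξ values are made up):

```python
def slack_status(xi):
    """Interpret a slack value ξᵢ in a soft-margin SVM."""
    if xi == 0:
        return "on or outside the margin (correct)"
    if xi <= 1:
        return "inside the margin but still correct"
    return "misclassified"

print(slack_status(0.0))   # correct, no margin violation
print(slack_status(0.4))   # margin violation only (0 < ξ ≤ 1)
print(slack_status(1.7))   # misclassified (ξ > 1)
```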


Q15. Inner Product

The dot product is:

A. Σ xᵢ²
B. Σ wᵢxᵢ
C. x + w
D. ||x||

Answer: B
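The formula from Q15, written out as code with example vectors:

```python
def dot(w, x):
    # w·x = Σᵢ wᵢxᵢ
    return sum(wi * xi for wi, xi in zip(w, x))

print(dot((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))  # 32.0
```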


Q16. PCA Property

Principal components are:

A. Parallel
B. Random
C. Orthogonal
D. Identical

Answer: C
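Orthogonality can be checked by hand for a symmetric 2×2 covariance matrix [[a, b], [b, c]]: its eigenvectors are the principal components, and their inner product is zero. The matrix entries below are assumed example values (with b ≠ 0 so the closed-form eigenvectors are valid).

```python
import math

# Hypothetical covariance matrix [[a, b], [b, c]].
a, b, c = 2.0, 1.0, 2.0
disc = math.sqrt((a - c) ** 2 + 4 * b ** 2)
lam1 = (a + c + disc) / 2           # larger eigenvalue
lam2 = (a + c - disc) / 2           # smaller eigenvalue
v1 = (b, lam1 - a)                  # eigenvector for lam1
v2 = (b, lam2 - a)                  # eigenvector for lam2
inner = v1[0] * v2[0] + v1[1] * v2[1]
print(inner)  # 0.0 — the principal components are orthogonal
```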


Q17. Generalisation

Good generalisation means:

A. Perfect training accuracy
B. Good performance on unseen data
C. Large dataset only
D. High variance

Answer: B


Q18. KNN Curse of Dimensionality

As dimensions increase:

A. Performance improves
B. Distance becomes more meaningful
C. Performance worsens
D. No change

Answer: C
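A sketch of why distance-based methods degrade in high dimensions, using random points in the unit hypercube (an assumed setup, with a fixed seed): the ratio of the farthest to the nearest neighbour distance shrinks toward 1 as the dimension grows, so "nearest" stops being informative.

```python
import math
import random

def distance_contrast(dim, n_points=500, seed=0):
    """Ratio of farthest to nearest distance from a random query point."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = [math.dist(query, [rng.random() for _ in range(dim)])
             for _ in range(n_points)]
    return max(dists) / min(dists)   # near 1 => distances are indistinguishable

print(distance_contrast(2))     # large contrast in low dimension
print(distance_contrast(500))   # contrast collapses toward 1
```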


Q19. K-Means Property

Each iteration:

A. Increases error
B. Decreases or keeps SSE same
C. Randomly changes clusters
D. Stops immediately

Answer: B
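The monotonicity in Q19 can be observed directly with a tiny 1-D k-means (k = 2) on made-up data: both the assignment step and the centroid-update step can only lower the sum of squared errors, so the SSE logged after each iteration never increases.

```python
def kmeans_sse_history(points, centers, iters=10):
    """Run 1-D k-means and return the SSE after each iteration."""
    history = []
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:                                   # assignment step
            i = min(range(len(centers)), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]    # update step
                   for i, c in enumerate(clusters)]        # (keep empty clusters put)
        history.append(sum((p - centers[i]) ** 2
                           for i, c in enumerate(clusters) for p in c))
    return history

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]       # two obvious groups
sse = kmeans_sse_history(data, centers=[0.0, 1.0])
assert all(b <= a + 1e-12 for a, b in zip(sse, sse[1:]))  # SSE never increases
print(sse[0], sse[-1])
```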


Q20. Neural Gas

Closest codevector (rank 0):

A. Moves least
B. Moves most
C. Does not move
D. Is removed

Answer: B
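A sketch of one neural-gas update step (the step size ε and decay λ are assumed values): every codevector moves toward the input, scaled by exp(−rank/λ), so the closest codevector (rank 0) takes the largest step.

```python
import math

def neural_gas_step(codevectors, x, eps=0.5, lam=0.5):
    """One 1-D neural-gas update: rank all codevectors by distance to x,
    then move each toward x with weight eps * exp(-rank / lam)."""
    ranked = sorted(range(len(codevectors)),
                    key=lambda i: (codevectors[i] - x) ** 2)
    for rank, i in enumerate(ranked):
        codevectors[i] += eps * math.exp(-rank / lam) * (x - codevectors[i])
    return codevectors

cv = [0.0, 3.0, 4.0]          # hypothetical codebook
before = list(cv)
neural_gas_step(cv, x=1.0)
moves = [abs(a - b) for a, b in zip(cv, before)]
print(moves)  # displacements fall off with rank: rank 0 moves most
```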

