In this article you will learn how to implement a simple algorithm for training a support vector machine (SVM) from scratch.
TL;DR: Support Vector Machines
The goal is to find the widest street that separates the classes. The street is defined by 3 lines:
- the decision boundary w · x + b = 0
- the upper edge w · x + b = 1
- the lower edge w · x + b = -1
In this example we have two classes (blue = +1 and green = -1). The line in red is the decision boundary — to classify an unknown point u using the above SVM means (a quick code sketch follows the list):
- IF w · u + b ≥ 0 THEN blue
- IF w · u + b < 0 THEN green
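A minimal sketch of that rule, with made-up values for w, b and the point u (just for illustration):

import numpy as np

# hypothetical, already-trained parameters -- not values from this post
w = np.array([0.5, -0.25])
b = 0.1

def classify(u):
    # blue (+1) if the point is on the positive side of the boundary, green (-1) otherwise
    return 1 if np.dot(w, u) + b >= 0 else -1

print(classify(np.array([1.0, 2.0])))  # -> 1 (blue) for this example point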
The width of the street is 2 / ||w||.
That means the width of the street is inversely proportional to the magnitude of w. Dividing w and b by 4, for example, leaves the decision boundary unchanged but makes the street 4 times wider.
So finding the widest street means finding the smallest possible w (in magnitude) that still forms a street separating the classes.
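A quick numeric check of this, using arbitrary example values for w (not values from the plots above):

import numpy as np

w = np.array([4.0, 4.0])  # arbitrary example weights
b = 2.0

width = 2 / np.linalg.norm(w)             # 2 / sqrt(32) ~= 0.354
width_scaled = 2 / np.linalg.norm(w / 4)  # ~= 1.414, i.e. 4x wider, same boundary
print(width, width_scaled)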
Expanding / Retracting Rate
Notice that multiplying w and b by the same constant c doesn’t change the decision boundary but does change the width of the street. If:
- c < 1, the width will expand
- c > 1, the width will retract
Assuming we have a linearly separable dataset, we would impose the constraint that no data points lie in the street. This means that if we find a line that separates our classes but some points lie in the street, we should make the street narrower. And if we find a street that separates our classes and none of the points are in the street, we should expand the street.
Ideally, some special points (called support vectors) would lie exactly on the edges of the street.
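Checking whether a point lies in the street is a one-line test; here it is wrapped in a small helper (the helper name is just for illustration):

import numpy as np

def in_street(w, b, x):
    # a point is inside the street when its score lies strictly between
    # the two edges w . x + b = -1 and w . x + b = +1
    return abs(np.dot(w, x) + b) < 1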
Perceptron Algorithm
If the current w and b result in the misclassification of a random point x (with label y) in our dataset, we can move the line toward x by some amount lr as follows:

w = w + lr · y · x

and

b = b + lr · y
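Wrapped in a small helper for illustration (the name is not from the original code), the nudge step is:

def perceptron_step(w, b, x, y, lr):
    # move the line towards point x, in the direction of its label y (+1 or -1)
    return w + lr * y * x, b + lr * y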
If that point x is in the street, we might want to retract the street. If that point is not in the street or was classified correctly, we might want to expand it.
Putting it all together
We start with a random w and b (note: the initial values of w and b have a large impact on what line is learned through this process).
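The code below assumes numpy and a small two-class dataset X, Y. The setup here is a stand-in toy dataset (two Gaussian blobs), not the exact data behind the plots in this post:

import numpy as np

np.random.seed(0)

# toy linearly separable dataset: two Gaussian blobs, labelled +1 (blue) and -1 (green)
X = np.vstack([np.random.randn(50, 2) + 3, np.random.randn(50, 2) - 3])
Y = np.array([1] * 50 + [-1] * 50)

# random starting line
w = np.random.randn(2)
b = np.random.randn()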
epochs = 100
lr = .05
expanding_rate = .99
retracting_rate = 1.01

for _ in range(epochs):
    # pick a point from X at random
    i = np.random.randint(0, len(X))
    x, y = X[i], Y[i]

    ypred = w[0] * x[0] + w[1] * x[1] + b
    if (ypred > 0 and y > 0) or (ypred < 0 and y < 0):
        # classified correctly
        if ypred < 1 and ypred > -1:
            # in the street / street is too wide -- nudge towards the point and retract
            w = (w + x * y * lr) * retracting_rate
            b = b + y * lr * retracting_rate
        else:
            # street is too narrow -- expand it
            w = w * expanding_rate
            b = b * expanding_rate
    else:
        # misclassified -- nudge towards the point and expand
        w = (w + x * y * lr) * expanding_rate
        b = b + y * lr * expanding_rate
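As a quick, optional sanity check, we can measure how well the learned line classifies the training set:

preds = np.sign(X @ w + b)  # +1 / -1 prediction for every point
print("training accuracy:", np.mean(preds == Y))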
Circled in red are the points that were misclassified. Circled in yellow are the points that were correctly classified.
What if the data is not linearly separable?
Looking at the problem statement again — we’re looking to maximize the width of the street subject to the constraint that points in our dataset cannot lie in the street. Mathematically we want to find:

maximize 2 / ||w||

subject to:

y_i (w · x_i + b) ≥ 1 for every point (x_i, y_i) in the dataset

Which equates to minimizing the Lagrangian:

L = 1/2 ||w||² − Σ_i α_i [y_i (w · x_i + b) − 1]

Taking the derivative wrt w:

∂L/∂w = w − Σ_i α_i y_i x_i = 0

Which means:

w = Σ_i α_i y_i x_i

So the decision boundary can be re-written purely in terms of dot products with the data:

Σ_i α_i y_i (x_i · u) + b = 0

Updating w means updating α. The perceptron update on a sampled point (x_i, y_i) becomes:

α_i = α_i + lr

and

b = b + lr · y_i
So, we can re-write the above algorithm as follows:
epochs = 100
lr = .05
expanding_rate = .99
retracting_rate = 1.01

alpha_i = np.zeros(len(X))  # one coefficient per training point, all zero to start
b = 0.0

def predict(alpha_i, b, x):
    # w = sum_j alpha_i[j] * X[j], so w . x only needs dot products between points.
    # alpha_i[j] stores the signed coefficient (the label y_j is folded in by the updates below).
    wx = 0
    for j in range(len(X)):
        wx += alpha_i[j] * np.dot(X[j], x)
    return wx + b

for _ in range(epochs):
    # pick a point from X at random
    i = np.random.randint(0, len(X))
    x, y = X[i], Y[i]

    ypred = predict(alpha_i, b, x)
    if (ypred > 0 and y > 0) or (ypred < 0 and y < 0):
        # classified correctly
        if ypred < 1 and ypred > -1:
            # in the street / street is too wide -- nudge towards the point and retract
            alpha_i[i] += y * lr
            alpha_i = alpha_i * retracting_rate
            b += y * lr * retracting_rate
        else:
            # street is too narrow -- expand it
            alpha_i = alpha_i * expanding_rate
            b *= expanding_rate
    else:
        # misclassified -- nudge towards the point and expand
        alpha_i[i] += y * lr
        alpha_i = alpha_i * expanding_rate
        b += y * lr * expanding_rate
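As an optional check, we can rebuild an explicit w from the learned coefficients — it should describe a line similar to the one found by the primal version above:

# w = sum_j alpha_i[j] * X[j]  (alpha_i already carries the sign of each label)
w_from_alpha = np.zeros(2)
for j in range(len(X)):
    w_from_alpha += alpha_i[j] * X[j]
print(w_from_alpha, b)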
Kernel Trick
To find a non-linear decision boundary, we can use the “kernel trick”. That is, instead of defining an explicit mapping to a new feature space where the dataset is linearly separable, we only need to define the inner product in that transformed space. This kernel function defines that inner product and can replace the dot product in the predict function: K(x_i, x_j) = φ(x_i) · φ(x_j).
For example, we can use the polynomial kernel K(x_i, x_j) = (x_i · x_j + c)^n:
def polynomial(x_i, x_j, c, n):
    # polynomial kernel: (x_i . x_j + c) ** n, where c is an offset and n is the degree
    return (np.dot(x_i, x_j) + c) ** n

def predict(alpha_i, b, x):
    # same as before, but the dot product is replaced by the kernel
    # (C and N are the chosen kernel hyperparameters)
    wx = 0
    for j in range(len(X)):
        wx += alpha_i[j] * polynomial(X[j], x, C, N)
    return wx + b
Or we can use the RBF kernel, K(x_i, x_j) = exp(-γ ||x_i − x_j||²):
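A minimal sketch of that kernel and the matching predict (the gamma parameter and its default value here are illustrative):

def rbf(x_i, x_j, gamma=1.0):
    # RBF kernel: exp(-gamma * ||x_i - x_j||^2)
    diff = np.asarray(x_i) - np.asarray(x_j)
    return np.exp(-gamma * np.dot(diff, diff))

def predict(alpha_i, b, x):
    # same predict as above, with the RBF kernel in place of the dot product
    wx = 0
    for j in range(len(X)):
        wx += alpha_i[j] * rbf(X[j], x)
    return wx + b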
Conclusion
While other methods can solve this problem more effectively, I hope this simple algorithm helped you better understand SVMs!