Dakota Day

Exploring Unsupervised Learning in ML

In my last blog I talked about machine learning (ML), its uses, and some of the different algorithmic patterns that make it work. In this blog we'll check out some unsupervised learning patterns and why we might want to use them.


To quickly review, unsupervised learning uses algorithms to analyze unlabeled data without our supervision. We'll look at three popular unsupervised learning patterns in this blog.

K-Means Clustering

[Image: K-Means Graph]
We'll start by talking about K-means clustering. So what does the 'k' mean? The k represents the number of clusters we want in our data. K-means algorithms focus on minimizing the distance between each data point and the center of its cluster; in other words, a data point belongs to whichever cluster center it is closest to. This algorithm can be broken down into 5 steps (a short code sketch follows the list):

  1. Initialization: Start by picking K random points from the dataset as the initial cluster centers (centroids).

  2. Assignment: For every data point, calculate the distance to each centroid and assign the point to the closest one.

  3. Update centroids: After assignment, recalculate each centroid's position by taking the mean of the data points assigned to its cluster and moving the centroid to that center.

  4. Repeat: Iterate steps 2 and 3 until the centroids no longer move significantly.

  5. Results: At the end, the algorithm outputs the final centroids and the cluster each data point belongs to.
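
Here's a minimal sketch of those steps using scikit-learn's KMeans. The 2D data and the choice of k=3 are made up for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in data: 2D points loosely grouped around three centers
rng = np.random.default_rng(42)
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

# k=3 asks for three clusters; n_init reruns the
# initialize/assign/update loop from several random starts
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(points)

print(kmeans.cluster_centers_)  # final centroid positions (step 5)
print(kmeans.labels_[:10])      # cluster assigned to each data point
```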

Some applications of K-means algorithms include image segmentation in healthcare and robotics, marketing and retail (segmenting customers by purchase history and demographics), fraud detection, and recommendation systems.

Principal Component Analysis (PCA)

[Image: PCA Graph]
PCA can be used alongside other techniques, but its objective is to simplify data by reducing large datasets to smaller ones. It does this while still preserving the important patterns and trends, captured as principal components. Let's go over the steps PCA follows (a short code sketch comes after the list):

  1. Standardization: This step ensures that variables with large ranges do not dominate those with smaller ranges. Bringing all variables to the same scale prevents biased results.

  2. Covariance Matrix: Here we build a matrix that shows how each pair of features in the data varies together.

  3. Eigenvalue Decomposition: Trying to keep it relatively simple, eigenvectors indicate the directions of maximum variance in the data and eigenvalues quantify the variance captured by each principal component.

  4. Selecting Principal Components: We sort the eigenvalues in descending order and keep only the top n principal components we need.

  5. Project the data: Now we project the original data onto the dimensions represented by the principal components selected in the last step.
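
Here's a minimal sketch of that pipeline with scikit-learn. The random, correlated data is a stand-in; in practice you'd use your real dataset:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Stand-in data: 100 samples with 5 correlated features
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
extra = base @ rng.normal(size=(2, 3)) + rng.normal(scale=0.1, size=(100, 3))
data = np.hstack([base, extra])

# Step 1: standardize so no feature dominates just because of its scale
scaled = StandardScaler().fit_transform(data)

# Steps 2-5: PCA handles the covariance matrix, eigenvalue decomposition,
# component selection, and projection internally
pca = PCA(n_components=2)
projected = pca.fit_transform(scaled)

print(projected.shape)                # (100, 2): 5 features reduced to 2
print(pca.explained_variance_ratio_)  # variance captured by each component
```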

Applications for PCA include visualizing multidimensional data, reducing the dimensionality of data, and resizing images.

Autoencoders

[Image: Autoencoder Graph]
Autoencoders are a special kind of neural network that is almost self-supervised. They are built around two parts: an encoder and a decoder.

Encoder

The encoder compresses the input data and generates a bottleneck in the hidden layer of the neural network.

Decoder

The decoder tries to reconstruct the input; however, it only has the compressed data to go off of. The output can be improved by minimizing the reconstruction error between the input and the output. A small sketch of this encoder-decoder setup is below.
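
Here's a minimal autoencoder sketch in Keras. The input size (784, as if for flattened 28x28 images), the layer sizes, and the random stand-in training data are all assumptions for illustration:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical setup: compress 784-dim inputs down to a 32-dim bottleneck
input_dim, bottleneck_dim = 784, 32

inputs = tf.keras.Input(shape=(input_dim,))
# Encoder: squeeze the input down into the bottleneck representation
encoded = layers.Dense(128, activation="relu")(inputs)
bottleneck = layers.Dense(bottleneck_dim, activation="relu")(encoded)
# Decoder: try to rebuild the original input from only the bottleneck
decoded = layers.Dense(128, activation="relu")(bottleneck)
outputs = layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = Model(inputs, outputs)
# The reconstruction error (here, mean squared error) is what training minimizes
autoencoder.compile(optimizer="adam", loss="mse")

# Train on unlabeled data: the input is also the target
x_train = np.random.rand(1000, input_dim).astype("float32")  # stand-in data
autoencoder.fit(x_train, x_train, epochs=5, batch_size=64, verbose=0)
```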

Applications for autoencoders include image and audio compression, anomaly detection, data generation, and recommendation systems.

In conclusion, unsupervised learning is helpful for picking up patterns we as humans may not notice as quickly. These algorithms are well suited to complex tasks and large, unlabeled datasets where hand-labeling would be impractical.

Sources

Intro to K-Means Clustering
Principal Component Analysis
Intro to Autoencoders

Images Used

K-Means Graph
PCA Graph
Autoencoder Graph
