Unsupervised Learning
So far, we’ve only covered supervised learning. Data points occur in pairs $(x, y)$: an input $x$ and its corresponding output label $y$.
Labelled data, however, is very expensive to obtain: often, someone has to label the data manually.
Unlabelled data, on the other hand, is much cheaper. A goal, then, is to harness the unlabelled data to help make better predictions.
The goal of unsupervised learning then is to automatically discover the rich latent structure that data often has.
Types of unsupervised learning#
We have two main categories:
- Clustering
- Dimensionality reduction
This article will focus on clustering exclusively.
Clustering#
In clustering, our input is a set of points $x_1, \dots, x_n$, to be converted to cluster assignments $z_1, \dots, z_n$, where $z_i \in \{1, \dots, K\}$ is the cluster associated with point $x_i$.
K-means Clustering#
K-means is a particular method for performing clustering, based on associating each cluster $k$ with a centroid $\mu_k$.
The goal is to assign each point to a cluster and to place the centroid of each cluster so that every point is as close as possible to its cluster’s centroid.
To clarify some of the terminology:
- Centroid: $\mu_k \in \mathbb{R}^d$, the center of cluster $k$
- Clusters: $k \in \{1, \dots, K\}$
- Point: $x_i \in \mathbb{R}^d$, the feature vector of example $i$
- Cluster label for point $i$: $z_i \in \{1, \dots, K\}$
As usual, we have an objective function that we are optimizing for.
It can be written as such:
$$\text{Loss}_{\text{k-means}}(z, \mu) = \sum_{i=1}^{n} \lVert x_i - \mu_{z_i} \rVert^2$$
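As a sanity check, the objective is just a sum of squared distances from each point to its assigned centroid. A small NumPy helper (the function name and interface here are illustrative, not part of the original notes) makes that concrete:

```python
import numpy as np

def kmeans_loss(points, assignments, centroids):
    # points: (n, d), assignments: (n,) integer cluster labels, centroids: (K, d)
    # sum over points of the squared distance to the assigned centroid
    return np.sum(np.linalg.norm(points - centroids[assignments], axis=1) ** 2)
```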
How then, do we optimize our function?
Gradient descent cannot be used here because one of the quantities we are minimizing over, the cluster assignment $z_i$, is discrete.
Dynamic programming cannot be used either because there are no overlapping sub-problems: the distances of points from their cluster’s centroid are independent of one another.
The solution, then, is to alternate between assigning points to centroids and placing centroids based on their assigned points.
Algorithm for k-means:
1. Initialize the centroids $\mu_1, \dots, \mu_K$, e.g. to randomly chosen training points.
2. Assignment step: holding the centroids fixed, assign each point to its closest centroid, $z_i \leftarrow \arg\min_k \lVert x_i - \mu_k \rVert^2$.
3. Update step: holding the assignments fixed, move each centroid to the mean of the points assigned to it, $\mu_k \leftarrow \text{mean}\{x_i : z_i = k\}$.
4. Repeat steps 2 and 3 until the assignments stop changing.
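Here is a minimal NumPy sketch of this alternating procedure (the function name, signature, and seeding scheme are my own choices, not part of the original notes):

```python
import numpy as np

def kmeans(points, k, num_iters=100, seed=0):
    """Plain k-means by alternating minimization; returns centroids, assignments, loss."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    # initialize mu_1 ... mu_K to k randomly chosen training points
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(num_iters):
        # assignment step: fix the centroids, pick the closest one for each point
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        assignments = dists.argmin(axis=1)
        # update step: fix the assignments, move each centroid to the mean of its points
        new_centroids = np.array([
            points[assignments == j].mean(axis=0) if np.any(assignments == j)
            else centroids[j]  # keep empty clusters where they are
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):  # converged
            break
        centroids = new_centroids
    loss = np.sum((points - centroids[assignments]) ** 2)  # the k-means objective
    return centroids, assignments, loss
```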
Problems with K-means clustering: mitigated with repeated runs and k-means++#
K-means is guaranteed to converge to a local minimum, but it is not guaranteed to find the global minimum.
To solve that, we can simply run K-means several times from multiple random centroid initializations $\mu_1, \dots, \mu_K$.
We then choose the solution with the lowest loss.
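Reusing the hypothetical `kmeans` sketch above (and assuming `points` is an `(n, d)` array), the restart strategy is just a loop over seeds:

```python
# run k-means from several random initializations and keep the lowest-loss solution
results = [kmeans(points, k=3, seed=s) for s in range(10)]
centroids, assignments, loss = min(results, key=lambda r: r[2])
```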
Another solution is to apply K-means++. K-means++ is a heuristic which initializes centroids on training points so that they tend to be distant from one another.
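A sketch of that initialization, following the standard $D^2$-weighted sampling scheme (the function name and interface are illustrative):

```python
import numpy as np

def kmeans_pp_init(points, k, seed=0):
    """Choose k initial centroids from the training points, D^2-weighted (k-means++)."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    # first centroid: a uniformly random training point
    centroids = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        # squared distance from each point to its nearest already-chosen centroid
        diffs = points[:, None, :] - np.array(centroids)[None, :, :]
        d2 = np.min((diffs ** 2).sum(axis=2), axis=1)
        # sample the next centroid with probability proportional to that distance,
        # so points far from the existing centroids are more likely to be picked
        centroids.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centroids)
```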
Extending K-means clustering#
Sometimes, the examples are not completely unlabelled.
In those cases, we can add constraints to the k-means algorithm to ensure that points meant to be grouped together end up in the same cluster, and points that must be kept apart do not.
This results in COP-KMeans (constrained k-means).
A minimal Python sketch of the algorithm, handling must-link constraints (pairs of points that must end up in the same cluster), is as follows:
```python
import numpy as np

def cop_kmeans(points, k, must_link, num_iters=100):
    # points: (n, d) float array; must_link: list of index pairs that must share a cluster
    # initialize mu_1 ... mu_K to k randomly selected points
    centroids = points[np.random.choice(len(points), size=k, replace=False)]
    for _ in range(num_iters):  # repeat the assignment and update steps
        assignments = np.full(len(points), -1)
        for i in range(len(points)):
            # assign i to the closest cluster mu_j that violates no constraint
            for j in np.argsort(np.linalg.norm(points[i] - centroids, axis=1)):
                if not violate_constraints(i, j, assignments, must_link):
                    assignments[i] = j
                    break
            else:  # no feasible cluster found: fail and exit
                raise RuntimeError(f"no feasible cluster for point {i}")
        # update each center as the average of all the points in its cluster
        for j in range(k):
            if np.any(assignments == j):
                centroids[j] = points[assignments == j].mean(axis=0)
    return centroids, assignments

def violate_constraints(i, j, assignments, must_link):
    # every already-assigned must-link partner of point i must be in cluster j
    for a, b in must_link:
        if i in (a, b):
            other = b if a == i else a
            if assignments[other] not in (-1, j):
                return True
    return False
```
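A hypothetical usage, with two well-separated groups and one must-link pair:

```python
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
centroids, z = cop_kmeans(X, k=2, must_link=[(0, 1)])  # points 0 and 1 end up together
```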
Conclusion#
Unsupervised learning wraps up the tour of reflex models.
The main takeaway is not the specific loss functions or linear predictors.
Instead, the goal is to think in terms of the modelling, learning, and inference paradigm.
The first piece is modelling, which has two parts.
First, feature extraction: domain knowledge is applied to induce a certain hypothesis class.
Second, a predictor is chosen; either linear predictors or neural networks are employed here.
Loss functions then tie modelling to learning. While optimization is important, it can only happen once we have made the right modelling choices.
Learning can only work when there’s a common core that cuts across all the idiosyncrasies of the examples. This is exactly what features are meant to capture.
Broadly speaking, machine learning is about learning predictors from some input to some output.