Category Archives: Machine Learning

Common Evaluation Measures for Classification Models

Classification is a common machine learning task in which we have a data set of labelled examples and use it to build a model that can then (hopefully accurately!) assign a class to new, unlabelled examples. There are various points at which we might want to test the performance of the model: initially we might tune parameters or hyperparameters using cross validation, then check the best performing models on the test set, and if putting the model into production we may also want to test it on live data. We might even use different evaluation measures at different stages of this process. This article discusses some frequently used measures for evaluating the performance of classification models.
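
As a rough sketch of the kind of measures the full article covers, the following R snippet computes accuracy, precision, recall and the F1 score from a confusion matrix. The label vectors are made-up illustrative values, not output from a real model.

# Sketch: common evaluation measures computed from a confusion matrix.
# The label vectors below are made up purely for illustration.
actual    <- factor(c(1, 0, 1, 1, 0, 1, 0, 0, 1, 1))
predicted <- factor(c(1, 0, 0, 1, 0, 1, 1, 0, 1, 0))

cm <- table(Predicted = predicted, Actual = actual)

tp <- cm["1", "1"]; tn <- cm["0", "0"]
fp <- cm["1", "0"]; fn <- cm["0", "1"]

accuracy  <- (tp + tn) / sum(cm)
precision <- tp / (tp + fp)
recall    <- tp / (tp + fn)    # also known as sensitivity
f1        <- 2 * precision * recall / (precision + recall)

round(c(accuracy = accuracy, precision = precision,
        recall = recall, f1 = f1), 3)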

The Bias-Variance Tradeoff

There are several sources of error that can affect the accuracy of machine learning models, including bias and variance. A fundamental machine learning concept is what’s known as the bias-variance tradeoff. This article discusses what is meant by bias and variance, and how trading them off against one another affects model accuracy.
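
To make the tradeoff concrete, here is a minimal R simulation (my own illustration, not code from the full article). A straight-line fit underfits a sine curve, giving high bias, while a degree-10 polynomial chases the noise in each resampled training set, giving high variance.

# Sketch: estimate bias^2 and variance of two models at a single test point
# by repeatedly refitting them on fresh noisy samples of a known function.
set.seed(42)
true_f <- function(x) sin(2 * pi * x)
x_test <- 0.25    # point at which we study the error

preds <- replicate(200, {
  x <- runif(30)
  y <- true_f(x) + rnorm(30, sd = 0.3)    # fresh noisy training sample
  sapply(c(1, 10), function(d) {
    fit <- lm(y ~ poly(x, d))
    predict(fit, newdata = data.frame(x = x_test))
  })
})

data.frame(degree   = c(1, 10),
           bias_sq  = (rowMeans(preds) - true_f(x_test))^2,
           variance = apply(preds, 1, var))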

A Quick Introduction to Clustering

Cluster analysis, more usually referred to as clustering, is a common data mining task. In clustering the goal is to divide the data set into groups so that objects in the same group are similar to one another, while objects in different groups are dissimilar. In other words, the goal is to minimize the intra-cluster distance while maximizing the inter-cluster distance.
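
As a quick illustration, the sketch below runs k-means (one widely used clustering algorithm, chosen here purely as an example) on R’s built-in iris measurements; k-means works by minimizing the total within-cluster sum of squared distances.

# Sketch: k-means clustering on the iris measurements.
data(iris)
features <- scale(iris[, 1:4])    # standardize so no feature dominates

set.seed(1)
km <- kmeans(features, centers = 3, nstart = 25)

km$withinss                         # intra-cluster dispersion per cluster
table(Cluster = km$cluster, Species = iris$Species)

Note that the species labels play no part in the fit; the final table simply shows how well the recovered clusters line up with the known classes.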

Why “computer says no” may no longer be an option

The General Data Protection Regulation (GDPR) is a new data protection regulation that will be effective across the EU from 25th May 2018. The GDPR applies to all companies that process the data of EU citizens, regardless of where those companies are based. It replaces Directive 95/46/EC, normally referred to as the Data Protection Directive, which dates back to the 1990s.

A Quick Introduction to k-NN (Part 2)

Part 2 of this introduction to k-NN demonstrates an implementation of the algorithm in R; Part 1 discussed the algorithm itself. I have chosen a data set from the UCI Machine Learning Repository to work with: the Banknote Authentication data set. It consists of measurements extracted from 400 x 400 pixel pictures of forged and genuine bank notes. The pictures were greyscale with a resolution of about 660 dpi, and a wavelet transform tool was used to extract features from them. The features are the variance, skewness and kurtosis of the wavelet transformed image, together with the entropy of the image. The class label indicates whether the bank note is genuine or not (0 = no, 1 = yes). k-NN should be a reasonable choice of algorithm for this data set, as the features are numerical and there are not too many of them relative to the number of instances, though obviously other factors (e.g. the amount of noise in the data set) matter too.
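
As a taste of what Part 2 works through, here is a minimal sketch of fitting k-NN to this data set with the class package. The UCI download URL, the 70/30 split and k = 5 are my assumptions for illustration, not necessarily the choices made in the full article.

# Sketch: k-NN on the Banknote Authentication data using class::knn.
# The UCI URL below is assumed; adjust if the repository layout changes.
library(class)

url <- "https://archive.ics.uci.edu/ml/machine-learning-databases/00267/data_banknote_authentication.txt"
banknote <- read.csv(url, header = FALSE,
                     col.names = c("variance", "skewness", "kurtosis",
                                   "entropy", "class"))

set.seed(7)
train_idx <- sample(nrow(banknote), round(0.7 * nrow(banknote)))

# Scale the features using the training set's parameters only,
# so no information leaks from the test set into the model.
train_x <- scale(banknote[train_idx, 1:4])
test_x  <- scale(banknote[-train_idx, 1:4],
                 center = attr(train_x, "scaled:center"),
                 scale  = attr(train_x, "scaled:scale"))

pred <- knn(train_x, test_x, cl = factor(banknote$class[train_idx]), k = 5)
mean(pred == banknote$class[-train_idx])    # test-set accuracy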

A Quick Introduction to k-NN (Part 1)

One of the oldest and most popular classification algorithms is the nearest neighbors algorithm. It is also one of the easiest algorithms to understand, which makes it a good place to start when learning about data mining algorithms. Part one of this article provides a brief introduction to, and overview of, k-NN. Part two will demonstrate an implementation of it in R.

Essentially, the nearest neighbors algorithm is based on the premise that the more features objects have in common, the more likely they are to belong to the same class. Nearest neighbors is a non-parametric method, so it does not rely on assumptions about the underlying distribution of the data set. It is called a lazy learning method because, unlike most classification algorithms, it does not attempt to build a model of the data set; instead, test cases are compared to the stored training cases to determine their class.
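
To show what lazy learning looks like in practice, below is a bare-bones k-NN classifier in R. It is a toy sketch of the general technique, not the implementation from Part 2: no model is built, each test case is simply compared to every stored case and the k nearest cases vote on its class.

# Sketch: a bare-bones k-NN classifier -- no model is built; the test
# point is compared to every stored case and its k nearest neighbours vote.
knn_predict <- function(train_x, train_y, test_point, k = 3) {
  # Euclidean distance from the test point to every stored case
  dists <- sqrt(rowSums(sweep(train_x, 2, test_point)^2))
  # Majority vote among the k nearest neighbours
  nearest <- train_y[order(dists)[1:k]]
  names(which.max(table(nearest)))
}

# Toy usage on iris: predict the species of one held-out flower.
data(iris)
x <- as.matrix(iris[, 1:4])
knn_predict(x[-1, ], iris$Species[-1], x[1, ], k = 5)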