Use K-nearest Neighbors To Find Similar Datapoints with Python and Scikit-learn

Instructor: Hannah Davis

We'll continue with the iris dataset to implement k-nearest neighbors (KNN), which makes predictions about new data points based on their similarity to existing data instances. We'll visualize how the KNN algorithm works by watching it predict a point's label from the labels of its nearest neighbors. We'll also examine the confusion matrix a bit further.
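A minimal sketch of that workflow with scikit-learn is shown below. The 80/20 split, `random_state`, and `n_neighbors=5` are assumptions for illustration, not necessarily the values used in the lesson:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

# Load the iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split into training and test sets (80/20 split is an assumption)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a KNN classifier: each prediction is the majority label among
# the 5 nearest training points (n_neighbors=5 is an assumption)
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)

predictions = model.predict(X_test)

# The confusion matrix shows how predicted labels line up with true labels
print(confusion_matrix(y_test, predictions))
print(accuracy_score(y_test, predictions))
```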

KNN can be used for both classification and regression problems.
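As a quick illustration of the regression side (the toy data here is an assumption, not part of the lesson), scikit-learn's `KNeighborsRegressor` averages the target values of the nearest neighbors instead of taking a majority vote:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy regression data (assumed for illustration): y is roughly 2x plus noise
X = np.arange(0, 10, 0.5).reshape(-1, 1)
y = 2 * X.ravel() + np.random.default_rng(0).normal(0, 0.5, X.shape[0])

# Each prediction is the mean of the 3 nearest neighbors' target values
reg = KNeighborsRegressor(n_neighbors=3)
reg.fit(X, y)

print(reg.predict([[4.2]]))  # interpolates from nearby training points
```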

KNN works well for low-dimensional data (data without too many input variables). It is not well suited to imbalanced datasets, and it can be computationally expensive, since every prediction requires comparing the new point against the training data.