When approaching a machine learning task, have you ever felt overwhelmed by the large number of features?

Most data scientists face this challenge every day. While adding features enriches the data, it often slows down training and makes hidden patterns harder to detect, leading to the infamous curse of dimensionality.

Furthermore, in high-dimensional space, a surprising phenomenon occurs. To describe this concept with an analogy, think of the novel Flatland, where characters living in a flat (2D) world are overwhelmed when they encounter a 3D creature. Similarly, we struggle to grasp that, in high-dimensional space, almost every point behaves like an outlier, and the distances between points are much larger than our low-dimensional intuition suggests. All of these phenomena, if not handled properly, can have disastrous implications for our machine learning models.
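You can see this distance phenomenon with a quick experiment. The sketch below (an illustration I am adding here, not part of the original post) samples random points in a unit hypercube and measures the average pairwise Euclidean distance as the number of dimensions grows:

```python
import numpy as np

rng = np.random.default_rng(42)

def avg_pairwise_distance(n_points, n_dims, rng):
    """Average Euclidean distance between random points in the unit hypercube."""
    points = rng.random((n_points, n_dims))
    # Squared distances via the Gram-matrix identity: ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = (points ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * points @ points.T
    d2 = np.maximum(d2, 0.0)  # guard against tiny negatives from rounding
    dists = np.sqrt(d2)
    # Average over each unordered pair exactly once (strict upper triangle).
    iu = np.triu_indices(n_points, k=1)
    return dists[iu].mean()

for d in (2, 10, 100, 1000):
    # The average distance grows roughly like sqrt(d / 6):
    # points that were "close" in 2D end up far apart in 1000D.
    print(d, round(avg_pairwise_distance(200, d, rng), 2))
```

In a 2D unit square the average distance is about 0.52, while in 1,000 dimensions it exceeds 12, even though every coordinate still lies between 0 and 1. This is exactly why distance-based intuitions (and distance-based models) degrade as dimensionality grows.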

In this post, I will explain some advanced dimensionality reduction techniques used to mitigate this issue.

In my previous post, I introduced why dimensionality reduction matters in machine learning and how it helps tame the curse of dimensionality. There, I also explained both the theory behind principal component analysis (PCA) and its implementation in Scikit-learn.

Following this, I will delve into additional dimensionality reduction algorithms, such as Kernel PCA (KPCA) and Locally Linear Embedding (LLE), which overcome some of the limitations of plain PCA.
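As a quick preview, here is a minimal sketch (my own illustration, assuming scikit-learn is installed; the Swiss-roll dataset and the hyperparameter values are choices I am making for demonstration, not settings from the original post) showing how both algorithms are invoked:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import KernelPCA
from sklearn.manifold import LocallyLinearEmbedding

# A classic nonlinear toy dataset: a 2D sheet rolled up in 3D space.
X, _ = make_swiss_roll(n_samples=1000, random_state=42)

# Kernel PCA: performs PCA in an implicit feature space via the kernel trick,
# so it can capture nonlinear structure that linear PCA misses.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.04)
X_kpca = kpca.fit_transform(X)

# LLE: preserves each point's local neighborhood relations while
# "unrolling" the manifold into a lower-dimensional embedding.
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
X_lle = lle.fit_transform(X)

print(X_kpca.shape, X_lle.shape)  # (1000, 2) (1000, 2)
```

Both estimators follow the familiar `fit_transform` API, so swapping them in for PCA in an existing pipeline is straightforward.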

Don't worry if you haven't read my Introduction to Dimensionality Reduction. This post is a standalone guide, and I will explain each concept in detail using simple terms. However, if you want to learn more about PCA, my previous post covers it in depth.