Clustering in the Real World
A summary of the lecture "Cluster Analysis in Python" on DataCamp
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib.image as img

# Read the batman image and print its dimensions
batman_image = img.imread('./dataset/batman.jpg')
print(batman_image.shape)

# Store the RGB values of all pixels in lists r, g, and b
r = []
g = []
b = []
for row in batman_image:
    for temp_r, temp_g, temp_b in row:
        r.append(temp_r)
        g.append(temp_g)
        b.append(temp_b)
 Preprocess
from scipy.cluster.vq import whiten
batman_df = pd.DataFrame({'red':r, 'blue':b, 'green':g})
batman_df['scaled_red'] = whiten(batman_df['red'])
batman_df['scaled_blue'] = whiten(batman_df['blue'])
batman_df['scaled_green'] = whiten(batman_df['green'])
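As a side note, `whiten()` does nothing more than divide each feature by its standard deviation, so each scaled column has unit variance. A minimal sketch on toy data (not the image channels) makes this concrete:

```python
import numpy as np
from scipy.cluster.vq import whiten

# whiten() divides each feature by its (population, ddof=0) standard deviation
data = np.array([3.0, 6.0, 9.0, 12.0])
scaled = whiten(data)
print(np.allclose(scaled, data / data.std()))  # True
print(scaled.std())                            # 1.0 -- unit variance after scaling
```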
from scipy.cluster.vq import kmeans
distortions = []
num_clusters = range(1, 7)
# Create a list of distortions from the kmeans function
for i in num_clusters:
    cluster_centers, distortion = kmeans(batman_df[['scaled_red', 'scaled_blue', 'scaled_green']], i)
    distortions.append(distortion)
# Create a data frame with two lists, num_clusters and distortions
elbow_plot = pd.DataFrame({'num_clusters': num_clusters, 'distortions': distortions})
# Create a line plot of num_clusters and distortions
sns.lineplot(x='num_clusters', y='distortions', data=elbow_plot);
plt.xticks(num_clusters);
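Reading the elbow plot amounts to spotting where the marginal drop in distortion collapses. A small sketch with hypothetical distortion values (not computed from the batman image) shows the pattern:

```python
# Hypothetical distortion values for k = 1..6 (illustrative only)
distortions = [18.0, 9.5, 5.1, 4.6, 4.3, 4.1]

# Successive drops shrink sharply after k = 3, so 3 clusters is the elbow here
drops = [a - b for a, b in zip(distortions, distortions[1:])]
print([round(d, 1) for d in drops])  # [8.5, 4.4, 0.5, 0.3, 0.2]
```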
colors = []
# Get standard deviations of each color
r_std, g_std, b_std = batman_df[['red', 'green', 'blue']].std()
for cluster_center in cluster_centers:
    # Centers follow the kmeans input column order: red, blue, green
    scaled_r, scaled_b, scaled_g = cluster_center
    # Convert each standardized value back to an RGB value in [0, 1]
    colors.append((
        scaled_r * r_std / 255.0,
        scaled_g * g_std / 255.0,
        scaled_b * b_std / 255.0
    ))
# Display colors of cluster centers
plt.imshow([colors])
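The cluster centers give the dominant colors; to actually segment an image you would also assign every pixel to its nearest center with `vq()`. A sketch with synthetic three-channel "pixels" standing in for the image data:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

# Two well-separated synthetic colour blobs of 50 "pixels" each
rng = np.random.default_rng(0)
pixels = np.vstack([
    rng.normal(0.2, 0.05, (50, 3)),
    rng.normal(0.8, 0.05, (50, 3)),
])
np.random.seed(1)  # kmeans initialization uses the global NumPy RNG
centers, _ = kmeans(pixels, 2)

# vq() returns, for each observation, the index of its nearest center
labels, dists = vq(pixels, centers)
print(np.bincount(labels))  # pixels per cluster
```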
Document clustering

 Document clustering: concepts
 - Clean the data before processing
 - Determine the importance of the terms in a document (as a TF-IDF matrix)
 - Cluster the TF-IDF matrix
 - Find the top terms and documents in each cluster

 TFIDF (Term Frequency - Inverse Document Frequency)
 - A weighted measure that evaluates how important a word is to a document in a collection

 Top terms per cluster
 - Cluster centers are lists with a size equal to the number of terms
 - Each value in a cluster center is the importance of a term

 More considerations
 - Handle hyperlinks, emoticons, etc.
 - Normalize words (e.g. run, ran, running -> run)
 - .todense() may not work with large datasets
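To see what the TF-IDF weight actually computes, here is a hand-rolled sketch of the textbook form, tfidf(t, d) = tf(t, d) * log(N / df(t)). Note that scikit-learn's TfidfVectorizer uses a smoothed variant, so its numbers will differ:

```python
import math

# Three toy "documents", pre-tokenized
docs = [
    "the batman fights crime".split(),
    "the joker commits crime".split(),
    "batman chases the joker".split(),
]
N = len(docs)

def tfidf(term, doc):
    tf = doc.count(term) / len(doc)          # term frequency within the document
    df = sum(term in d for d in docs)        # number of documents containing the term
    return tf * math.log(N / df)

print(round(tfidf('the', docs[0]), 3))     # 0.0 -- appears in every document
print(round(tfidf('fights', docs[0]), 3))  # 0.275 -- distinctive term
```

Terms that appear in every document get a weight of zero, which is exactly why stop words carry no signal for clustering.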
TFIDF of movie plots
Let us use the plots of randomly selected movies to perform document clustering. Before clustering, the documents need to be cleaned of unwanted noise (such as special characters and stop words) and converted into a sparse matrix through TF-IDF.
Use the TfidfVectorizer class to compute the TF-IDF of the movie plots stored in the list plots. The remove_noise() function is available to use as a tokenizer in the TfidfVectorizer class. The .fit_transform() method fits the data to the TfidfVectorizer object and then generates the TF-IDF sparse matrix.
Note: the .fit_transform() method takes a few seconds to run.
 Preprocess
movie = pd.read_csv('./dataset/movies_plot.csv')
movie.head()
plots = movie['Plot'].values
from nltk.tokenize import word_tokenize
import re
import nltk
nltk.download('punkt')
def remove_noise(text, stop_words=[]):
    tokens = word_tokenize(text)
    cleaned_tokens = []
    for token in tokens:
        # Strip any non-alphanumeric characters
        token = re.sub('[^A-Za-z0-9]+', '', token)
        if len(token) > 1 and token.lower() not in stop_words:
            # Keep the lowercase form
            cleaned_tokens.append(token.lower())
    return cleaned_tokens
from sklearn.feature_extraction.text import TfidfVectorizer
# Initialize TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(min_df=0.1, max_df=0.75, max_features=50, tokenizer=remove_noise)
# Use the .fit_transform() on the list plots
tfidf_matrix = tfidf_vectorizer.fit_transform(plots)
Top terms in movie clusters
Now that you have created a sparse matrix, generate cluster centers and print the top three terms in each cluster. Use the .todense() method to convert the sparse matrix tfidf_matrix into a dense matrix for the kmeans() function to process. Then use the .get_feature_names_out() method to get the list of terms in the tfidf_vectorizer object. The zip() function joins the two lists.
With a higher number of data points, the clusters formed would be more clearly defined. However, that requires more computational power than is practical in an exercise here.
num_clusters = 2
# Generate cluster centers through the kmeans function
cluster_centers, distortion = kmeans(tfidf_matrix.todense(), num_clusters)
# Generate terms from the tfidf_vectorizer object
terms = tfidf_vectorizer.get_feature_names_out()
for i in range(num_clusters):
    # Sort the terms and print the top 3 terms
    center_terms = dict(zip(terms, list(cluster_centers[i])))
    sorted_terms = sorted(center_terms, key=center_terms.get, reverse=True)
    print(sorted_terms[:3])
Basic checks on clusters
In the FIFA 18 dataset, we have concentrated on defenders in previous exercises. Let us now focus on the attacking attributes of a player. Pace (pac), Dribbling (dri), and Shooting (sho) are features present in attack-minded players. In this exercise, k-means clustering has already been applied to the data using the scaled values of these three attributes. Try some basic checks on the clusters so formed.
 Preprocess
fifa = pd.read_csv('./dataset/fifa_18_sample_data.csv')
fifa.head()
fifa['scaled_pac'] = whiten(fifa['pac'])
fifa['scaled_dri'] = whiten(fifa['dri'])
fifa['scaled_sho'] = whiten(fifa['sho'])
from scipy.cluster.vq import vq
cluster_centers, _ = kmeans(fifa[['scaled_pac', 'scaled_dri', 'scaled_sho']], 3)
fifa['cluster_labels'], _ = vq(fifa[['scaled_pac', 'scaled_dri', 'scaled_sho']], cluster_centers)
print(fifa.groupby('cluster_labels')['ID'].count())
# Print the mean value of wages in each cluster
print(fifa.groupby('cluster_labels')['eur_wage'].mean())
fifa['scaled_def'] = whiten(fifa['def'])
fifa['scaled_phy'] = whiten(fifa['phy'])
scaled_features = ['scaled_pac', 'scaled_sho', 'scaled_dri', 'scaled_def', 'scaled_phy']
cluster_centers, _ = kmeans(fifa[scaled_features], 2)
# Assign cluster labels and print cluster centers
fifa['cluster_labels'], _ = vq(fifa[scaled_features], cluster_centers)
print(fifa.groupby('cluster_labels')[scaled_features].mean())
# Plot cluster centers to visualize clusters
fifa.groupby('cluster_labels')[scaled_features].mean().plot(legend=True, kind='bar')
# Get the name column of first 5 players in each cluster
for cluster in fifa['cluster_labels'].unique():
    print(cluster, fifa[fifa['cluster_labels'] == cluster]['name'].values[:5])