Required Packages

import sys
import nltk
import sklearn
import pandas as pd
import numpy as np
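
NLTK also needs a couple of data packages for steps later in this post; they can be fetched with a one-time download:

nltk.download('stopwords')  # stop word list, used during preprocessing
nltk.download('punkt')      # tokenizer models, used by word_tokenize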

Version Check

print('Python: {}'.format(sys.version))
print('NLTK: {}'.format(nltk.__version__))
print('Scikit-learn: {}'.format(sklearn.__version__))
print('Pandas: {}'.format(pd.__version__))
print('NumPy: {}'.format(np.__version__))
Python: 3.7.6 (default, Jan  8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)]
NLTK: 3.4.5
Scikit-learn: 0.22.1
Pandas: 1.0.1
NumPy: 1.18.1

Load the dataset

Now that we have confirmed our libraries are installed correctly, let's load the dataset as a Pandas DataFrame and extract some useful information, such as the column details and the class distribution.

The dataset we will be using comes from the UCI Machine Learning Repository. It contains over 5,000 labeled SMS messages collected for mobile phone spam research.
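
If you do not have the file locally, the dataset can be fetched directly from UCI. A minimal sketch, assuming the archive layout at the time of writing (a zip file containing a single SMSSpamCollection file):

import io, os, zipfile, urllib.request

url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00228/smsspamcollection.zip'
os.makedirs('./dataset', exist_ok=True)
with urllib.request.urlopen(url) as resp:
    # Extract the raw tab-separated file into ./dataset
    zipfile.ZipFile(io.BytesIO(resp.read())).extract('SMSSpamCollection', './dataset')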

sms = pd.read_table('./dataset/SMSSpamCollection', header=None, encoding='utf-8')
sms.head()
      0                                                  1
0   ham  Go until jurong point, crazy.. Available only ...
1   ham                      Ok lar... Joking wif u oni...
2  spam  Free entry in 2 a wkly comp to win FA Cup fina...
3   ham  U dun say so early hor... U c already then say...
4   ham  Nah I don't think he goes to usf, he lives aro...
sms.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5572 entries, 0 to 5571
Data columns (total 2 columns):
 #   Column  Non-Null Count  Dtype 
---  ------  --------------  ----- 
 0   0       5572 non-null   object
 1   1       5572 non-null   object
dtypes: object(2)
memory usage: 87.2+ KB
sms[0].value_counts()
ham     4825
spam     747
Name: 0, dtype: int64

From the data summary, we can see that spam messages are labeled spam and legitimate messages are labeled ham, and that 747 of the 5,572 messages in the dataset are spam.

Data Preprocessing

The labels are stored as strings. To use them in a model, we need to convert them to binary values. With only two classes, this is simple label encoding (pd.get_dummies performs the closely related one-hot encoding). There are a couple of ways to do this (the pd.get_dummies route is sketched just after the list):

  • use pd.get_dummies
  • use LabelEncoder in sklearn.preprocessing
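
For reference, a minimal sketch of the pd.get_dummies route; keeping only the spam indicator column yields the same 0/1 labels:

# One-hot encode the label column; get_dummies returns one indicator column per class
dummies = pd.get_dummies(sms[0])
label_alt = dummies['spam'].values  # 1 for spam, 0 for ham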

Here, we use LabelEncoder:

from sklearn.preprocessing import LabelEncoder

enc = LabelEncoder()
label = enc.fit_transform(sms[0])
print(label[:10])
print(sms[0][:10])
[0 0 1 0 0 1 0 0 1 1]
0     ham
1     ham
2    spam
3     ham
4     ham
5    spam
6     ham
7     ham
8    spam
9    spam
Name: 0, dtype: object
text = sms[1]
text[:10]
0    Go until jurong point, crazy.. Available only ...
1                        Ok lar... Joking wif u oni...
2    Free entry in 2 a wkly comp to win FA Cup fina...
3    U dun say so early hor... U c already then say...
4    Nah I don't think he goes to usf, he lives aro...
5    FreeMsg Hey there darling it's been 3 week's n...
6    Even my brother is not like to speak with me. ...
7    As per your request 'Melle Melle (Oru Minnamin...
8    WINNER!! As a valued network customer you have...
9    Had your mobile 11 months or more? U R entitle...
Name: 1, dtype: object

Now it is time for text preprocessing. In the previous post, we covered several text preprocessing techniques. But before applying them, we need to normalize the text by removing or replacing special tokens such as email addresses, URLs, and phone numbers. To do this, we can use regular expressions (regex for short) for pattern matching. Here are some common regex constructs, as described on Wikipedia.

  • ^ Matches the starting position within the string. In line-based tools, it matches the starting position of any line.

  • . Matches any single character (many applications exclude newlines, and exactly which characters are considered newlines is flavor-, character-encoding-, and platform-specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches "abc", etc., but [a.c] matches only "a", ".", or "c".

  • [ ] A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches "a", "b", or "c". [a-z] specifies a range which matches any lowercase letter from "a" to "z". These forms can be mixed: [abcx-z] matches "a", "b", "c", "x", "y", or "z", as does [a-cx-z]. The - character is treated as a literal character if it is the last or the first (after the ^, if present) character within the brackets: [abc-], [-abc]. Note that backslash escapes are not allowed. The ] character can be included in a bracket expression if it is the first (after the ^) character: []abc].

  • [^ ] Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than "a", "b", or "c". [^a-z] matches any single character that is not a lowercase letter from "a" to "z". Likewise, literal characters and ranges can be mixed.

  • $ Matches the ending position of the string or the position just before a string-ending newline. In line-based tools, it matches the ending position of any line.

  • ( ) Defines a marked subexpression. The string matched within the parentheses can be recalled later (see the next entry, \n). A marked subexpression is also called a block or capturing group. BRE mode requires \( \).

  • \n Matches what the nth marked subexpression matched, where n is a digit from 1 to 9. This construct is vaguely defined in the POSIX.2 standard. Some tools allow referencing more than nine capturing groups.

  • * Matches the preceding element zero or more times. For example, ab*c matches "ac", "abc", "abbbc", etc. [xyz]* matches "", "x", "y", "z", "zx", "zyx", "xyzzy", and so on. (ab)* matches "", "ab", "abab", "ababab", and so on.

  • {m,n} Matches the preceding element at least m and not more than n times. For example, a{3,5} matches only "aaa", "aaaa", and "aaaaa". This is not found in a few older instances of regexes. BRE mode requires \{m,n\}.

If you want to test a regex pattern, an online regex tester is a convenient place to experiment.
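
As a quick illustration, here is how a few of these constructs behave in Python's re module (a toy example, separate from the spam pipeline):

import re

print(re.findall(r'[a-z]{2,}', 'ab c def'))           # ['ab', 'def'] -- {m,n} repetition
print(bool(re.match(r'a.c', 'abc')))                  # True -- . matches any single character
print(re.fullmatch(r'(ab)*', 'ababab') is not None)   # True -- * repeats the capturing group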

# Replace email addresses with 'emailaddress'
processed = text.str.replace(r'^.+@[^\.].*\.[a-z]{2,}$', 'emailaddress', regex=True)

# Replace URLs with 'webaddress'
processed = processed.str.replace(r'^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(/\S*)?$', 'webaddress', regex=True)

# Replace money symbols with 'moneysymb' (£ can be typed with ALT key + 156)
processed = processed.str.replace(r'£|\$', 'moneysymb', regex=True)

# Replace 10-digit phone numbers (with or without parentheses, spaces, or dashes) with 'phonenumbr'
processed = processed.str.replace(r'^\(?[\d]{3}\)?[\s-]?[\d]{3}[\s-]?[\d]{4}$', 'phonenumbr', regex=True)

# Replace numbers with 'numbr'
processed = processed.str.replace(r'\d+(\.\d+)?', 'numbr', regex=True)

We also need to strip out the remaining useless characters, such as punctuation and extra whitespace.

# Remove punctuation and other special characters
processed = processed.str.replace(r'[^\w\d\s]', ' ', regex=True)

# Replace whitespace between terms with a single space
processed = processed.str.replace(r'\s+', ' ', regex=True)

# Remove leading and trailing whitespace
processed = processed.str.replace(r'^\s+|\s+?$', '', regex=True)

After that, we convert everything to lower case.

processed = processed.str.lower()
processed
0       go until jurong point crazy available only in ...
1                                 ok lar joking wif u oni
2       free entry in numbr a wkly comp to win fa cup ...
3             u dun say so early hor u c already then say
4       nah i don t think he goes to usf he lives arou...
                              ...                        
5567    this is the numbrnd time we have tried numbr c...
5568                  will ü b going to esplanade fr home
5569    pity was in mood for that so any other suggest...
5570    the guy did some bitching but i acted like i d...
5571                            rofl its true to its name
Name: 1, Length: 5572, dtype: object

Next, as covered in the previous post, we can remove stop words.

from nltk.corpus import stopwords

stop_words = set(stopwords.words('english'))

processed = processed.apply(lambda x: ' '.join(term for term in x.split() if term not in stop_words))

Also, using PorterStemmer, we can reduce each word to its stem.

ps = nltk.PorterStemmer()

processed = processed.apply(lambda x: ' '.join(ps.stem(term) for term in x.split()))
processed
0       go jurong point crazi avail bugi n great world...
1                                   ok lar joke wif u oni
2       free entri numbr wkli comp win fa cup final tk...
3                     u dun say earli hor u c alreadi say
4                    nah think goe usf live around though
                              ...                        
5567    numbrnd time tri numbr contact u u moneysymbnu...
5568                              ü b go esplanad fr home
5569                                    piti mood suggest
5570    guy bitch act like interest buy someth els nex...
5571                                       rofl true name
Name: 1, Length: 5572, dtype: object

You can see that the processed messages are now quite different from the originals, after the regex substitutions, stop word removal, and stemming have been applied.

Feature extraction

After preprocessing, we need to extract features from the text messages. To do this, we first tokenize each message into words. We will then use the 1,500 most common words as features.

from nltk.tokenize import word_tokenize

all_words = []

for message in processed:
    words = word_tokenize(message)
    for w in words:
        all_words.append(w)
        
all_words = nltk.FreqDist(all_words)

# Print the result
print('Number of words: {}'.format(len(all_words)))
print('Most common words: {}'.format(all_words.most_common(15)))
Number of words: 6579
Most common words: [('numbr', 2648), ('u', 1207), ('call', 674), ('go', 456), ('get', 451), ('ur', 391), ('gt', 318), ('lt', 316), ('come', 304), ('moneysymbnumbr', 303), ('ok', 293), ('free', 284), ('day', 276), ('know', 275), ('love', 266)]
word_features = [x[0] for x in all_words.most_common(1500)]

Now that we have the feature list, we need to determine which of these features appear in each message.

def find_features(message):
    words = word_tokenize(message)
    features = {}
    for word in word_features:
        features[word] = (word in words)

    return features
features = find_features(processed[0])
for key, value in features.items():
    if value:
        print(key)
go
got
n
great
wat
e
world
point
avail
crazi
bugi
la
cine
list(features.items())[:10]
[('numbr', False),
 ('u', False),
 ('call', False),
 ('go', True),
 ('get', False),
 ('ur', False),
 ('gt', False),
 ('lt', False),
 ('come', False),
 ('moneysymbnumbr', False)]
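
Each message is thus represented as a dictionary of boolean features. For reference, this is essentially what the SklearnClassifier wrapper used below does internally: it converts such dictionaries into a sparse matrix with scikit-learn's DictVectorizer. A minimal sketch of that step:

from sklearn.feature_extraction import DictVectorizer

# Vectorize the first five messages into a sparse matrix of 0/1 indicators
vec = DictVectorizer()
X = vec.fit_transform([find_features(m) for m in processed[:5]])
print(X.shape)  # (5, 1500): one row per message, one column per word feature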

We have now turned a single message into a feature set we can train on, and the same approach applies to every message in the dataset. Next, we need to split the data into a training set and a test set.

messages = list(zip(processed, label))

np.random.seed(1)
np.random.shuffle(messages)

# Call find_features function for each SMS message
feature_set = [(find_features(text), label) for (text, label) in messages]
from sklearn.model_selection import train_test_split

training, test = train_test_split(feature_set, test_size=0.25, random_state=1)
print(len(training))
print(len(test))
4179
1393
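
Since ham heavily outnumbers spam, one optional refinement (not used in this post) is a stratified split, which preserves the roughly 13% spam ratio in both sets:

# Optional alternative: stratify on the labels to keep the class ratio in both sets
training, test = train_test_split(
    feature_set, test_size=0.25, random_state=1,
    stratify=[lbl for (_, lbl) in feature_set])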

Scikit-learn Classifier with NLTK

Now that we have built the training and test sets, we can train machine learning models with scikit-learn. We will use the following algorithms and compare their performance:

  • K Nearest Neighbors
  • Decision Tree
  • Random Forest
  • Logistic Regression
  • SGD Classifier
  • Naive Bayes
  • Support Vector Machine

from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix

names = ['K Nearest Neighbors', 'Decision Tree', 'Random Forest', 'Logistic Regression', 'SGD Classifier',
         'Naive Bayes', 'Support Vector Classifier']

classifiers = [
    KNeighborsClassifier(),
    DecisionTreeClassifier(),
    RandomForestClassifier(),
    LogisticRegression(),
    SGDClassifier(max_iter=100),
    MultinomialNB(),
    SVC(kernel='linear')
]

models = zip(names, classifiers)

for name, model in models:
    nltk_model = SklearnClassifier(model)
    nltk_model.train(training)
    accuracy = nltk.classify.accuracy(nltk_model, test)
    print("{} model Accuracy: {}".format(name, accuracy))
K Nearest Neighbors model Accuracy: 0.9454414931801867
Decision Tree model Accuracy: 0.95908111988514
Random Forest model Accuracy: 0.9813352476669059
Logistic Regression model Accuracy: 0.9834888729361091
SGD Classifier model Accuracy: 0.9806173725771715
Naive Bayes model Accuracy: 0.9856424982053122
Support Vector Classifier model Accuracy: 0.9820531227566404

From these results, most models reach roughly 95–98% accuracy. We can also try to improve on this by having the models vote on each prediction, one of the ensemble approaches. To do this, we use VotingClassifier from sklearn.ensemble; you can find the details of VotingClassifier in the scikit-learn documentation.

from sklearn.ensemble import VotingClassifier

# VotingClassifier expects a list of (name, estimator) tuples; the zip
# iterator above was consumed by the training loop, so rebuild it as a list
models = list(zip(names, classifiers))

nltk_ensemble = SklearnClassifier(VotingClassifier(estimators=models, voting='hard', n_jobs=-1))
nltk_ensemble.train(training)
accuracy = nltk.classify.accuracy(nltk_ensemble, test)
print("Voting Classifier model Accuracy: {}".format(accuracy))
Voting Classifier model Accuracy: 0.9842067480258435

We are done. Let's generate a classification report and a confusion matrix, two standard ways to check classification performance.

text_features, labels = zip(*test)
prediction = nltk_ensemble.classify_many(text_features)
print(classification_report(labels, prediction))
              precision    recall  f1-score   support

           0       0.99      1.00      0.99      1199
           1       0.98      0.91      0.94       194

    accuracy                           0.98      1393
   macro avg       0.98      0.95      0.97      1393
weighted avg       0.98      0.98      0.98      1393

We can also display the confusion matrix as a DataFrame (a bit fancier, I guess):

pd.DataFrame( confusion_matrix(labels, prediction),
             index=[['actual', 'actual'], ['ham', 'spam']],
             columns = [['predicted', 'predicted'], ['ham', 'spam']])
           predicted     
                 ham spam
actual ham      1195    4
       spam       18  176
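
As a final sanity check, here is a minimal sketch of how the trained filter could classify a new raw message. It reuses ps, stop_words, find_features, and nltk_ensemble from above, mirrors only a simplified subset of the preprocessing, and uses a made-up message:

import re

def classify_sms(raw_message):
    # Mirror the training-time preprocessing (simplified: numbers,
    # punctuation, case, stop words, stemming)
    msg = re.sub(r'\d+(\.\d+)?', 'numbr', raw_message)
    msg = re.sub(r'[^\w\d\s]', ' ', msg).lower()
    msg = ' '.join(ps.stem(t) for t in msg.split() if t not in stop_words)
    # LabelEncoder mapped ham to 0 and spam to 1
    return 'spam' if nltk_ensemble.classify(find_features(msg)) == 1 else 'ham'

print(classify_sms('WINNER!! Claim your free prize now!'))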

Summary

In this post, we built an SMS spam filter from the given SMS dataset. To do this, we preprocessed the text (using techniques from the previous post, such as tokenization, stemming, and stop word removal) and extracted features to build a dataset. NLTK is a great tool for this, and its SklearnClassifier wrapper makes it easy to train scikit-learn models on NLTK feature sets. In the end, our SMS spam filter built with the voting method (one of the ensemble approaches) reaches almost 98% accuracy.