Who's Tweeting? Trump or Trudeau?
Tweets are notoriously difficult to classify, as they are shorter than most texts and usually contain hard-to-parse content like hashtags, mentions, links and emoji. Despite the difficulties, tweets are fun content, so in this notebook we'll take a look at classifying tweets from two prominent North American politicians. Can we determine whether it is Donald Trump or Justin Trudeau based on just a tweet? This notebook is the result of the DataCamp project "Who's Tweeting? Trump or Trudeau?".
- 1. Tweet classification: Trump vs. Trudeau
- 2. Transforming our collected data
- 3. Vectorize the tweets
- 4. Training a multinomial naive Bayes model
- 5. Evaluating our model using a confusion matrix
- 6. Trying out another classifier: Linear SVC
- 7. Introspecting our top model
- 8. Bonus: can you write a Trump or Trudeau tweet?
1. Tweet classification: Trump vs. Trudeau
So you think you can classify text? How about tweets? In this notebook, we'll take a dive into the world of social media text classification by investigating how to properly classify tweets from two prominent North American politicians: Donald Trump and Justin Trudeau.
Tweets pose specific problems to NLP, including the fact they are shorter texts. There are also plenty of platform-specific conventions to give you hassles: mentions, #hashtags, emoji, links and short-hand phrases (ikr?). Can we overcome those challenges and build a useful classifier for these two tweeters? Yes! Let's get started.
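As a taste of the sort of preprocessing that can tame those conventions, here is a minimal sketch of a regex-based tweet cleaner. The `clean_tweet` helper is hypothetical and is not part of the original project, which feeds the raw tweet text straight to the vectorizers.

import re

def clean_tweet(text):
    """Hypothetical helper: strip platform-specific noise from a tweet."""
    text = re.sub(r'https?://\S+', '', text)  # drop links
    text = re.sub(r'@\w+', '', text)          # drop @mentions
    text = text.replace('#', '')              # keep hashtag words, drop the '#'
    return re.sub(r'\s+', ' ', text).strip()  # normalize whitespace

print(clean_tweet("ikr? check this out https://t.co/abc @someone #politics"))
# -> 'ikr? check this out politics'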
To begin, we will import all the tools we need from scikit-learn. We will need to properly vectorize our data (`CountVectorizer` and `TfidfVectorizer`). We will also want to import some models: `MultinomialNB` from the `naive_bayes` module, `LinearSVC` from the `svm` module and `PassiveAggressiveClassifier` from the `linear_model` module. Finally, we'll need `sklearn.metrics`, plus `train_test_split` and `GridSearchCV` from the `model_selection` module, to evaluate and optimize our model.
# Set the seed for reproducibility
import random; random.seed(53)
# Import all we need from sklearn
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn import metrics
2. Transforming our collected data
To begin, let's start with a corpus of tweets which were collected in November 2017. They are available in CSV format. We'll use a pandas DataFrame to import the data and pass it to scikit-learn for further processing.
Since the data was collected via the Twitter API and has not been split into test and training sets, we'll need to do that ourselves. Let's use `train_test_split()` with `random_state=53` and a test size of 0.33, just as we did in the DataCamp course. This ensures we have enough test data and that we get the same results no matter where or when we run the code.
import pandas as pd
# Load data
tweet_df = pd.read_csv('./dataset/tweets_trump_trudeau.csv')
# Create target
y = tweet_df['author']
# Split training and testing data
X_train, X_test, y_train, y_test = train_test_split(tweet_df['status'], y, test_size=0.33, random_state=53)
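As a quick sanity check (not part of the original project), it's worth confirming the two authors are reasonably balanced, since a lopsided class distribution would make raw accuracy misleading:

# Check how many tweets each author contributes
print(y.value_counts())
# And the class proportions within the training split
print(y_train.value_counts(normalize=True))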
3. Vectorize the tweets
We have the training and testing data all set up, but we need to create vectorized representations of the tweets in order to apply machine learning. To do so, we will use the `CountVectorizer` and `TfidfVectorizer` classes, which we first need to fit to the data. Once this is complete, we can start modeling with the vectorized tweets!
# Initialize count vectorizer
count_vectorizer = CountVectorizer(stop_words='english', min_df=0.05, max_df=0.9)
# Create count train and test variables
count_train = count_vectorizer.fit_transform(X_train)
count_test = count_vectorizer.transform(X_test)
# Initialize tfidf vectorizer
tfidf_vectorizer = TfidfVectorizer(stop_words='english', min_df=0.05, max_df=0.9)
# Create tfidf train and test variables
tfidf_train = tfidf_vectorizer.fit_transform(X_train)
tfidf_test = tfidf_vectorizer.transform(X_test)
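If you want to peek at what the vectorizers learned, both expose their vocabulary. A small sketch (note that `get_feature_names_out` requires scikit-learn >= 1.0; older releases call it `get_feature_names`):

# The document-term matrix is (n_tweets, n_tokens) and sparse
print(count_train.shape)
# A sample of the tokens that survived the min_df/max_df thresholds
print(count_vectorizer.get_feature_names_out()[:10])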
4. Training a multinomial naive Bayes model
Now that we have the data in vectorized form, we can train our first model. Let's investigate using the multinomial naive Bayes model with both the `CountVectorizer` and `TfidfVectorizer` data. Which will perform better, and why?
To compare them, we will print the test set accuracy scores for both models.
# Create a MultinomialNB model
tfidf_nb = MultinomialNB()
# Train the model on the TF-IDF vectors
tfidf_nb.fit(tfidf_train, y_train)
# Run predict on your TF-IDF test data to get your predictions
tfidf_nb_pred = tfidf_nb.predict(tfidf_test)
# Calculate the accuracy of your predictions
tfidf_nb_score = metrics.accuracy_score(y_test, tfidf_nb_pred)
# Create a MultinomialNB model
count_nb = MultinomialNB()
# Train the model on the count vectors
count_nb.fit(count_train, y_train)
# Run predict on your count test data to get your predictions
count_nb_pred = count_nb.predict(count_test)
# Calculate the accuracy of your predictions
count_nb_score = metrics.accuracy_score(y_test, count_nb_pred)
print('NaiveBayes Tfidf Score: ', tfidf_nb_score)
print('NaiveBayes Count Score: ', count_nb_score)
5. Evaluating our model using a confusion matrix
We see that the TF-IDF model performs better than the count-based approach. Based on what we know from the NLP fundamentals course, why might that be? TF-IDF gives rarer, more distinctive tokens greater weight; perhaps these tweeters use specific, identifying words! Let's continue the investigation.
For classification tasks, an accuracy score doesn't tell the whole picture. A better evaluation comes from the confusion matrix, which shows the number of correct and incorrect classifications for each class. We can use the counts of true positives, false positives, false negatives, and true negatives to determine how well the model performed on a given class. How many times was Trump misclassified as Trudeau?
%matplotlib inline
from utils.helper_functions import plot_confusion_matrix
# Calculate the confusion matrices for the tfidf_nb and count_nb models
# (pass labels explicitly so the matrix rows match the plotted class names)
tfidf_nb_cm = metrics.confusion_matrix(y_test, tfidf_nb_pred, labels=['Donald J. Trump', 'Justin Trudeau'])
count_nb_cm = metrics.confusion_matrix(y_test, count_nb_pred, labels=['Donald J. Trump', 'Justin Trudeau'])
# Plot the tfidf_nb_cm confusion matrix
plot_confusion_matrix(tfidf_nb_cm, classes=['Donald J. Trump', 'Justin Trudeau'], title="TF-IDF NB Confusion Matrix")
# Plot the count_nb_cm confusion matrix without overwriting the first plot
plot_confusion_matrix(count_nb_cm, classes=['Donald J. Trump', 'Justin Trudeau'], title="CountVectorizer NB Confusion Matrix", figure=1)
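Beyond the plots, scikit-learn can summarize per-class precision and recall in one call, which makes the "accuracy isn't the whole picture" point concrete:

# Per-class precision, recall and F1 for the TF-IDF naive Bayes model
print(metrics.classification_report(y_test, tfidf_nb_pred))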
6. Trying out another classifier: Linear SVC
So the naive Bayes model differs by only one prediction between the TF-IDF and count vectorizers -- fairly impressive! Interestingly, there is some confusion when the predicted label is Trump but the actual tweeter is Trudeau. If we were going to use this model, we would want to investigate which tokens are causing the confusion in order to improve it.
Now that we've seen what the Bayesian model can do, how about trying a different approach? LinearSVC is another popular choice for text classification. Let's see if using it with the TF-IDF vectors improves the accuracy of the classifier!
# Create a LinearSVC model
tfidf_svc = LinearSVC()
# Train the model on the TF-IDF vectors
tfidf_svc.fit(tfidf_train, y_train)
# Run predict on your tfidf test data to get your predictions
tfidf_svc_pred = tfidf_svc.predict(tfidf_test)
# Calculate your accuracy using the metrics module
tfidf_svc_score = metrics.accuracy_score(y_test, tfidf_svc_pred)
print("LinearSVC Score: %0.3f" % tfidf_svc_score)
# Calculate the confusion matrix for the tfidf_svc model
# (again passing labels explicitly to fix the row order)
svc_cm = metrics.confusion_matrix(y_test, tfidf_svc_pred, labels=['Donald J. Trump', 'Justin Trudeau'])
# Plot the confusion matrix using the plot_confusion_matrix function
plot_confusion_matrix(svc_cm, classes=['Donald J. Trump', 'Justin Trudeau'], title="TF-IDF LinearSVC Confusion Matrix")
7. Introspecting our top model
Wow, the LinearSVC model is even better than the multinomial naive Bayes one. Nice work! Via the confusion matrix we can see that, although there is still some confusion where Trudeau's tweets are classified as Trump's, the false positive rate is better than that of the previous model. So, we have a performant model, right?
We might be able to continue tweaking and improving all of the previous models by learning more about parameter optimization or applying some better preprocessing of the tweets.
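As a sketch of what that parameter optimization could look like, here is a minimal `GridSearchCV` run over the `LinearSVC` regularization strength `C` (the grid values are illustrative, not tuned; `GridSearchCV` was imported at the top of the notebook):

# Search a small, illustrative grid of C values with 5-fold cross-validation
svc_grid = GridSearchCV(LinearSVC(), param_grid={'C': [0.01, 0.1, 1, 10]}, cv=5)
svc_grid.fit(tfidf_train, y_train)
print(svc_grid.best_params_, svc_grid.best_score_)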
Now let's see what the model has learned. Because the LinearSVC classifier has just two classes (Trump and Trudeau), we can sort the features (tokens) by their weights and see the most important tokens for each. What are the most Trump-like or Trudeau-like words? Did the model learn something useful for distinguishing between these two men?
from utils.helper_functions import plot_and_return_top_features
# Import pprint from pprint
from pprint import pprint
# Get the top features using the plot_and_return_top_features function and your top model and tfidf vectorizer
top_features = plot_and_return_top_features(tfidf_svc, tfidf_vectorizer)
# pprint the top features
pprint(top_features)
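If you don't have the helper function handy, the same introspection can be done directly. For a binary `LinearSVC`, `coef_[0]` holds one weight per vocabulary token, and the sign of each weight indicates which class the token pushes a prediction toward (check `classes_` for the sign convention). A minimal sketch:

# Pair each token with its learned weight and sort
feature_names = tfidf_vectorizer.get_feature_names_out()
weights = sorted(zip(tfidf_svc.coef_[0], feature_names))
print(tfidf_svc.classes_)  # negative weights pull toward classes_[0], positive toward classes_[1]
pprint(weights[:10])       # most characteristic tokens for the first class
pprint(weights[-10:])      # most characteristic tokens for the second class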
8. Bonus: can you write a Trump or Trudeau tweet?
So, what did our model learn? It seems like it learned that Trudeau tweets in French!
I challenge you to write your own tweet using the knowledge gained to trick the model! Use the printed list or plot above to make some inferences about what words will classify your text as Trump or Trudeau. Can you fool the model into thinking you are Trump or Trudeau?
If you can write French, feel free to make your Trudeau-impersonation tweet in French! As you may have noticed, the French words the model picked up are common words, or "stop words". You could remove both English and French stop words from the tweets as a preprocessing step (see the sketch below), but that might decrease the accuracy of the model, because Trudeau is the only French speaker in the group. If you had a dataset with more than one French speaker, this would be a useful preprocessing step.
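scikit-learn only ships an English stop word list, but NLTK provides a French one. A sketch of the bilingual stop word idea, assuming NLTK is installed:

import nltk
nltk.download('stopwords')  # one-time download of NLTK's stop word lists
from nltk.corpus import stopwords

# Combine the English and French lists into one custom stop word list
bilingual_stops = stopwords.words('english') + stopwords.words('french')
tfidf_bilingual = TfidfVectorizer(stop_words=bilingual_stops, min_df=0.05, max_df=0.9)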
Future work on this dataset could involve:
- Add extra preprocessing (such as removing URLs or French stop words) and see the effects
- Use GridSearchCV to improve both your Bayesian and LinearSVC models by finding the optimal parameters
- Introspect your Bayesian model to determine which words are more Trump-like or Trudeau-like
- Add more recent tweets to your dataset using tweepy and retrain
Good luck writing your impersonation tweets -- feel free to share them on Twitter!
trump_tweet = "MAKE AMERICA GREAT AGAIN!"
trudeau_tweet = "Canada les"
# Vectorize each tweet using the TF-IDF vectorizer's transform method
# Note: `transform` needs the string in a list object (i.e. [trump_tweet])
trump_tweet_vectorized = tfidf_vectorizer.transform([trump_tweet])
trudeau_tweet_vectorized = tfidf_vectorizer.transform([trudeau_tweet])
# Call the predict method on your vectorized tweets
trump_tweet_pred = tfidf_svc.predict(trump_tweet_vectorized)
trudeau_tweet_pred = tfidf_svc.predict(trudeau_tweet_vectorized)
print("Predicted Trump tweet", trump_tweet_pred)
print("Predicted Trudeau tweet", trudeau_tweet_pred)