Basic features and readability scores
Learn to compute basic features such as the number of words, number of characters, average word length and number of special characters (such as Twitter hashtags and mentions). You will also learn to compute readability scores and determine the amount of education required to comprehend a piece of text. This is a summary of the lecture "Feature Engineering for NLP in Python", via DataCamp.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (8, 8)
One-hot encoding
As a quick warm-up, the categorical column 'feature 5' of a toy dataframe df1 is converted into dummy variables with pd.get_dummies().
df1 = pd.read_csv('./dataset/FE_df1.csv')
print(df1.columns)
# Perform one-hot encoding of the categorical column 'feature 5'
df1 = pd.get_dummies(df1, columns=['feature 5'])
# Print the new features of df1
print(df1.columns)
# Print first five rows of df1
print(df1.head())
Character count of Russian tweets
In this exercise, you have been given a dataframe tweets which contains some tweets associated with Russia's Internet Research Agency, compiled by FiveThirtyEight. Your task is to create a new feature char_count in tweets which computes the number of characters for each tweet. Also, compute the average length of each tweet. The tweets are available in the content feature of tweets.
Be aware that this is real data from Twitter and as such there is always a risk that it may contain profanity or other offensive content (in this exercise, and any following exercises that also use real Twitter data).
tweets = pd.read_csv('./dataset/russian_tweets.csv')
tweets.head()
# Create a feature char_count that stores the length of each tweet
tweets['char_count'] = tweets['content'].apply(len)
# Print the average character count
print(tweets['char_count'].mean())
Notice that the average character count of these tweets is approximately 104, which is much higher than the overall average tweet length of around 40 characters. Depending on what you're working on, this may be something worth investigating. For your information, there is research indicating that fake news articles tend to have longer titles! Therefore, even extremely basic features such as character counts can prove to be very useful in certain applications.
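The course intro also lists average word length as a basic feature, which isn't computed in the exercises above. Here is a minimal sketch of how it could be derived from the same content column; the helper name avg_word_length is ours, and the sketch assumes every tweet contains at least one word.
def avg_word_length(string):
    # Split the string into words
    words = string.split()
    # Average the character counts of the individual words
    return sum(len(word) for word in words) / len(words)

# Create a feature avg_word_length and print its mean (assumes non-empty tweets)
tweets['avg_word_length'] = tweets['content'].apply(avg_word_length)
print(tweets['avg_word_length'].mean())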
Word count of TED talks
ted is a dataframe that contains the transcripts of 500 TED talks. Your job is to compute a new feature word_count which contains the approximate number of words for each talk. Consequently, you also need to compute the average word count of the talks. The transcripts are available as the transcript feature in ted.

In order to complete this task, you will need to define a function count_words that takes in a string as an argument and returns the number of words in the string. You will then need to apply this function to the transcript feature of ted to create the new feature word_count and compute its mean.
ted = pd.read_csv('./dataset/ted.csv')
ted.head()
def count_words(string):
    # Split the string into words
    words = string.split()
    # Return the number of words
    return len(words)
# Create a new feature word_count
ted['word_count'] = ted['transcript'].apply(count_words)
# Print the average word count of the talks
print(ted['word_count'].mean())
You now know how to compute the number of words in a given piece of text. Also, notice that the average length of a talk is close to 2000 words. You can use the word_count feature to compute its correlation with other variables such as the number of views, number of comments, etc. and derive extremely interesting insights about TED.
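The ted.csv used here only ships the transcripts, so treat the snippet below as a sketch: it assumes a richer version of the dataset with a hypothetical views column.
# Hypothetical: correlate word count with view counts (assumes a 'views' column exists)
print(ted['word_count'].corr(ted['views']))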
Hashtags and mentions in Russian tweets
Let's revisit the tweets dataframe containing the Russian tweets. In this exercise, you will compute the number of hashtags and mentions in each tweet by defining two functions count_hashtags() and count_mentions() respectively and applying them to the content feature of tweets.
def count_hashtags(string):
    # Split the string into words
    words = string.split()
    # Create a list of words that are hashtags
    hashtags = [word for word in words if word.startswith('#')]
    # Return number of hashtags
    return len(hashtags)

# Create a feature hashtag_count and display its distribution
tweets['hashtag_count'] = tweets['content'].apply(count_hashtags)
tweets['hashtag_count'].hist();
plt.title('Hashtag count distribution');
def count_mentions(string):
    # Split the string into words
    words = string.split()
    # Create a list of words that are mentions
    mentions = [word for word in words if word.startswith('@')]
    # Return number of mentions
    return len(mentions)
# Create a feature mention_count and display distribution
tweets['mention_count'] = tweets['content'].apply(count_mentions)
tweets['mention_count'].hist();
plt.title('Mention count distribution');
You now have a good grasp of how to compute various types of summary features. In the next lesson, we will learn about more advanced features that are capable of capturing more nuanced information beyond simple word and character counts.
Readability tests
- Readability test
  - Determines the readability of an English passage
  - Scale ranging from primary school up to college graduate level
  - A mathematical formula utilizing word, syllable and sentence count
  - Used in fake news and opinion spam detection
- Examples
  - Flesch reading ease
  - Gunning fog index
  - Simple Measure of Gobbledygook (SMOG)
  - Dale-Chall score
- Flesch reading ease (formula sketched after this list)
  - One of the oldest and most widely used tests
  - Dependent on two factors
    - The greater the average sentence length, the harder the text is to read
    - The greater the average number of syllables in a word, the harder the text is to read
  - The higher the score, the greater the readability
- Gunning fog index (formula sketched after this list)
  - Developed in 1952
  - Also dependent on average sentence length
  - The greater the percentage of complex words, the harder the text is to read
  - The higher the index, the lower the readability
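For reference, both tests boil down to simple arithmetic. The sketch below writes out the standard published formulas; Textatistic (used next) handles the word, sentence and syllable counting for you, so this is purely illustrative.
def flesch_reading_ease(n_words, n_sentences, n_syllables):
    # Standard Flesch formula: longer sentences and more syllables per word lower the score
    return 206.835 - 1.015 * (n_words / n_sentences) - 84.6 * (n_syllables / n_words)

def gunning_fog(n_words, n_sentences, n_complex_words):
    # Standard Gunning fog formula: the result approximates the years of
    # formal education needed to understand the text on a first reading
    return 0.4 * ((n_words / n_sentences) + 100 * (n_complex_words / n_words))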
with open('./dataset/sisyphus_essay.txt', 'r') as f:
sisyphus_essay = f.read()
sisyphus_essay[:100]
from textatistic import Textatistic
# Compute the readability scores
readability_scores = Textatistic(sisyphus_essay).scores
# Print the flesch reading ease score
flesch = readability_scores['flesch_score']
print('The Flesch Reading Ease is %.2f' % (flesch))
You now know how to compute the Flesch reading ease score for a given body of text. Notice that the score for this essay is approximately 81.67. This indicates that the essay is at the readability level of a 6th grade American student.
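As a rough guide, the commonly cited interpretation bands of the Flesch scale can be wrapped in a small helper (a sketch; exact grade labels vary slightly between sources).
def flesch_band(score):
    # Commonly cited Flesch reading ease bands, easiest to hardest
    if score >= 90: return 'very easy (~5th grade)'
    if score >= 80: return 'easy (~6th grade)'
    if score >= 70: return 'fairly easy (~7th grade)'
    if score >= 60: return 'plain English (8th-9th grade)'
    if score >= 50: return 'fairly difficult (10th-12th grade)'
    if score >= 30: return 'difficult (college)'
    return 'very difficult (college graduate)'

print(flesch_band(flesch))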
Readability of various publications
In this exercise, you have been given excerpts of articles from four publications. Your task is to compute the readability of these excerpts using the Gunning fog index and consequently, determine the relative difficulty of reading these publications.
The excerpts are available as the following strings:
- forbes - An excerpt from an article from Forbes magazine on the Chinese social credit score system.
- harvard_law - An excerpt from a book review published in Harvard Law Review.
- r_digest - An excerpt from a Reader's Digest article on flight turbulence.
- time_kids - An excerpt from an article on the ill effects of salt consumption published in TIME for Kids.
import glob

# Collect the paths of the excerpt files
text_list = glob.glob('./dataset/*.txt')
text_list

file_list = ['time_kids', 'forbes', 'r_digest', 'harvard_law']

# Read the excerpts in file_list order so the unpacking below is guaranteed to match
texts = []
for name in file_list:
    for path in text_list:
        if name in path:
            with open(path, 'r') as f:
                texts.append(f.read())

time_kids, forbes, r_digest, harvard_law = texts
excerpts = [forbes, harvard_law, r_digest, time_kids]
# Loop through excerpts and compute gunning fog index
gunning_fog_scores = []
for excerpt in excerpts:
    readability_scores = Textatistic(excerpt).scores
    gunning_fog = readability_scores['gunningfog_score']
    gunning_fog_scores.append(gunning_fog)
# Print the gunning fog indices
print(gunning_fog_scores)
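Because the scores come back in the same order as the excerpts list, a small sketch like the one below pairs each publication with its index and sorts them from hardest to easiest (the names list simply mirrors excerpts).
# Pair each publication with its Gunning fog index and sort, hardest first
names = ['forbes', 'harvard_law', 'r_digest', 'time_kids']
for name, score in sorted(zip(names, gunning_fog_scores), key=lambda x: x[1], reverse=True):
    print('%s: %.2f' % (name, score))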
Notice that the Harvard Law Review excerpt has the highest Gunning fog index, indicating that it can be comprehended only by readers who have graduated from college. On the other hand, the TIME for Kids article, intended for children, has a much lower fog index and can be comprehended by 5th grade students.