Linear Regression Model
In this post, we will cover the use of the linear regression model with StatsModels and scikit-learn.
- Resources & Credits
- Packages
- Credit - Load the dataset and EDA
- Credit - Data Preprocessing
- Credit - Ordinary Least Squares (OLS)
- Advertising - Load the dataset and EDA
- Advertising - Interaction term
- Advertising - Ordinary Least Squares (OLS)
- KNN - Data Preparation
- KNN - Training
- KNN - Evaluation
Resources & Credits
The datasets we use are from the book Introduction to Statistical Learning
by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. You can find the details here.
Packages
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt
Credit - Load the dataset and EDA
credit = pd.read_csv('./dataset/Credit-isl.csv', index_col=0)
credit = credit[['Gender', 'Balance']]
credit.shape
credit.head()
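Before preprocessing, it can help to glance at the class balance and the response distribution. A couple of quick, optional pandas calls:
credit['Gender'].value_counts()
credit['Balance'].describe()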
Credit - Data Preprocessing
Currently, the Gender
variable is categorical, and it is stored as text. In order to use it as a feature, we need to convert it from text to an integer (or binary) value.
We could use a label encoder, which converts categorical data to numerical values, but this time we'll use a lambda expression to do the conversion.
X = credit['Gender'].to_numpy()
X[:6]
X_temp = list(map(lambda x: 1 if x == 'Female' else -1, X))
np.array(X_temp[:6])
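As an aside, the label-encoder route mentioned above could look like the sketch below. Note that sklearn's LabelEncoder assigns the codes 0, 1, ... to the sorted unique labels, which is a different coding than the -1/1 used here.
from sklearn.preprocessing import LabelEncoder
# LabelEncoder assigns integer codes 0..n-1 to the sorted unique labels
le = LabelEncoder()
X_encoded = le.fit_transform(credit['Gender'])
X_encoded[:6]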
y = credit['Balance'].to_numpy()
y[:6]
To feed the data into statsmodels, we also need to put it in a DataFrame.
df = pd.DataFrame({'y': y, 'X_temp': X_temp},
columns=['y', 'X_temp'])
df.head()
Credit - Ordinary Least Squares (OLS)
OLS is a type of linear least squares method for estimating the unknown parameters of a linear regression model. It chooses the parameters of a linear function of a set of explanatory variables by the principle of least squares. With the formula API, you can specify the model as $$ y \sim x_0 + x_1 + \dots $$
which is similar to R's formula syntax.
categorical_reg = smf.ols(formula='y ~ X_temp', data=df)
result = categorical_reg.fit()
result.summary()
You can find the coefficient of each variable, as well as the p-value used to judge whether that variable is statistically significant.
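As a side note, the formula interface can also handle the text-valued Gender column directly via patsy's C() marker, so the manual conversion is not strictly necessary. A minimal sketch on the credit frame loaded above (the dummy coding is 0/1 rather than -1/1, so the coefficients differ, but the fitted values are equivalent):
cat_reg = smf.ols(formula='Balance ~ C(Gender)', data=credit)
cat_reg.fit().summary()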
Advertising - Load the dataset and EDA
advertising = pd.read_csv('./dataset/Advertising.csv', index_col=0)
advertising.head()
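A quick, optional correlation check gives a rough sense of how each channel relates to sales:
advertising.corr()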
We will use two variables (TV, radio) to predict the sales variable.
X = advertising[['TV', 'radio']].to_numpy()
X[:6]
Advertising - Interaction term
This time, we will add an interaction term, which is a combination of existing variables. For example, if we have the variables $x_0$ and $x_1$, a linear model with an interaction term takes this form:
$$ y \sim x_0 + x_1 + x_0 \cdot x_1 + \dots $$
In this case, we add only the two-way interaction, since the new interaction term is made from two independent variables.
inter_X = X[:, 0] * X[:, 1]
inter_X[:6]
y = advertising['sales'].to_numpy()
y[:6]
df = pd.DataFrame({'sales': y, 'TV':X[:, 0], 'radio':X[:, 1], 'inter':inter_X},
columns=['sales', 'TV', 'radio', 'inter'])
df.head()
Advertising - Ordinary Least Squares (OLS)
inter_reg = smf.ols(formula='sales ~ TV + radio + inter', data=df)
result = inter_reg.fit()
result.summary()
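Alternatively, the formula syntax can build the interaction term for us: TV:radio denotes the product term, and TV*radio expands to TV + radio + TV:radio. A minimal sketch fitting the same model directly on the advertising frame:
inter_reg_formula = smf.ols(formula='sales ~ TV * radio', data=advertising)
inter_reg_formula.fit().summary()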
KNN - Data Preparation
To compare linear regression with K-Nearest Neighbors (KNN) regression, we generate a synthetic dataset.
np.random.seed(1)
X_train = np.sort(np.random.uniform(-1, 1, 200))[:, np.newaxis]
X_train[:6]
Here, we generate the true response as follows:
$$ y = 10x - 10x^3 + \epsilon $$
y_train = 10 * X_train - 10 * np.power(X_train, 3) + np.random.normal(0, 1, 200)[:, np.newaxis]
y_train[:6]
X_test = np.random.uniform(-1, 1, 100)[:, np.newaxis]
X_test[:6]
y_test = 10 * X_test - 10 * np.power(X_test, 3) + np.random.normal(0, 1, 100)[:, np.newaxis]
y_test[:6]
The training data looks like this:
plt.plot(X_train, y_train, marker='o', markersize=3, linestyle='none')
KNN - Training
from sklearn.neighbors import KNeighborsRegressor
knn = KNeighborsRegressor(n_neighbors=3)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
y_pred[:6]
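For a quick, optional visual check, we can overlay the KNN predictions on the test points; a minimal matplotlib sketch:
order = np.argsort(X_test[:, 0])
plt.plot(X_test, y_test, marker='o', markersize=3, linestyle='none', label='test data')
plt.plot(X_test[order, 0], y_pred[order, 0], label='KNN (k=3)')
plt.legend()
plt.show()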
KNN - Evaluation
You can use mean_squared_error
from sklearn.metrics
, but we can also compute it manually:
$$ \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 $$
mse = (((y_pred - y_test) ** 2).sum() / len(y_pred))
mse
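As a sanity check, the same value should come out of the sklearn helper:
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test, y_pred)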
As a baseline, we also fit a simple linear regression on the training data and compute its test MSE.
simple_reg = sm.OLS(y_train, X_train).fit()
y_pred_simple = simple_reg.predict(X_test)
baseline_mse = ((y_pred_simple - np.squeeze(y_test)) ** 2).sum() / len(y_pred_simple)
baseline_mse
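Note that sm.OLS fits exactly the design matrix it is given, so this baseline has no intercept. That is fine here because the true function passes through the origin; in general you would add a constant column, roughly like this sketch:
simple_reg_const = sm.OLS(y_train, sm.add_constant(X_train)).fit()
y_pred_const = simple_reg_const.predict(sm.add_constant(X_test))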
It is important to choose the optimal value of n_neighbors
.
mse = []
for k in range(1, 100):
    knn = KNeighborsRegressor(n_neighbors=k)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    mse.append(((y_pred - y_test) ** 2).sum() / len(y_test))
plt.hlines(baseline_mse, 0, 1, linestyle='--')
plt.plot(1 / np.arange(1, 100), mse)
plt.xlabel('1/k')
plt.ylabel('MSE')
plt.show()
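Rather than picking k from the test-set curve above, you could also tune n_neighbors by cross-validation on the training data; a minimal sketch with GridSearchCV (the grid range and cv=5 are arbitrary choices here):
from sklearn.model_selection import GridSearchCV
param_grid = {'n_neighbors': list(range(1, 100))}
grid = GridSearchCV(KNeighborsRegressor(), param_grid,
                    scoring='neg_mean_squared_error', cv=5)
grid.fit(X_train, y_train.ravel())
grid.best_params_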