import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

plt.rcParams['figure.figsize'] = (10, 5)
plt.style.use('fivethirtyeight')


## Intro to time series and stationarity

### Exploration

In this exercise you will kick off your journey to become an ARIMA master by loading and plotting a time series. You probably do this all the time, but this is just a refresher.

You will be exploring a dataset of monthly US candy production between 1972 and 2018.

Specifically, you are plotting the industrial production index IPG3113N. This is the total amount of sugar and confectionery products produced in the USA per month, expressed as a percentage of the January 2012 production. So a value of 120 means 120% of the January 2012 industrial production.

Check out how this quantity has changed over time and how it changes throughout the year.

candy = pd.read_csv('./dataset/candy_production.csv', index_col='date', parse_dates=True)

# Plot and show the time series on axis ax
fig, ax = plt.subplots();
candy.plot(ax=ax);


### Train-test splits

In this exercise you are going to take the candy production dataset and split it into a train and a test set. As you saw in the video exercise, the reason to do this is so that you can test the quality of your model fit once training is done.

candy_train = candy.loc[:'2006']
candy_test = candy.loc['2007':]

# Create an axis
fig, ax = plt.subplots();

# Plot the train and test sets on the axis ax
candy_train.plot(ax=ax);
candy_test.plot(ax=ax);
plt.savefig('../images/train_test.png')


### Is it stationary?

Identifying whether a time series is stationary or non-stationary is very important. If it is stationary, you can use ARMA models to predict its next values. If it is non-stationary, you cannot use ARMA models directly; however, as you will see in the next lesson, you can often transform a non-stationary time series into a stationary one.
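A common first check, before any formal test, is to eyeball the rolling mean and standard deviation: for a stationary series these stay roughly constant over time. A minimal sketch on a synthetic trending series (the data here is made up for illustration, not the candy dataset):

```python
import numpy as np
import pandas as pd

# Synthetic monthly series with an upward trend (illustrative data only)
rng = np.random.default_rng(0)
idx = pd.date_range('1972-01-01', periods=120, freq='MS')
series = pd.Series(np.linspace(50, 120, 120) + rng.normal(0, 3, 120), index=idx)

# For a stationary series the 12-month rolling mean would stay roughly flat;
# here it drifts upward, which hints at non-stationarity
rolling_mean = series.rolling(window=12).mean().dropna()
print(rolling_mean.iloc[0], rolling_mean.iloc[-1])
```

If the rolling statistics drift like this, a formal test such as the augmented Dickey-Fuller test (next section) is the natural follow-up.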

## Making time series stationary

• The augmented Dickey-Fuller test
• Tests for non-stationarity
• Null hypothesis: the time series is non-stationary

### Augmented Dickey-Fuller

In this exercise you will run the augmented Dickey-Fuller test on the earthquakes time series to test for stationarity. You plotted this time series in the last exercise. It looked like it could be stationary, but earthquakes are very damaging, so if you want to make predictions about them you had better be sure.

Remember that if it were not stationary, this would mean that the number of earthquakes per year has a trend and is changing. This would be terrible news if it were trending upwards, as it would mean more damage. It would also be terrible news if it were trending downwards; it might suggest the core of our planet is changing, and this could have lots of knock-on effects for us!

earthquake = pd.read_csv('./dataset/earthquakes.csv', index_col='date', parse_dates=True)
earthquake.drop(['Year'], axis=1, inplace=True)

earthquake

            earthquakes_per_year
date
1900-01-01                  13.0
1901-01-01                  14.0
1902-01-01                   8.0
1903-01-01                  10.0
1904-01-01                  16.0
...                          ...
1994-01-01                  15.0
1995-01-01                  25.0
1996-01-01                  22.0
1997-01-01                  20.0
1998-01-01                  16.0

[99 rows x 1 columns]

from statsmodels.tsa.stattools import adfuller

# Run test
result = adfuller(earthquake['earthquakes_per_year'])

# Print test statistic
print(result[0])

# Print p-value
print(result[1])

# Print critical values
print(result[4])

-3.1831922511917816
0.020978425256003668
{'1%': -3.5003788874873405, '5%': -2.8921519665075235, '10%': -2.5830997960069446}


### Taking the difference

In this exercise, you will prepare a time series of the population of a city for modeling. If you could predict the growth rate of a city, it would be possible to plan and build the infrastructure the city will need later, thus future-proofing public spending. In this case the time series is fictitious, but it's perfect to practice on.

You will test for stationarity by eye and with the augmented Dickey-Fuller test, then take the difference to make the dataset stationary.

city = pd.read_csv('./dataset/city.csv', parse_dates=True, index_col='date')

result = adfuller(city['city_population'])

# Plot the time series
fig, ax = plt.subplots();
city.plot(ax=ax);

# Print the test statistic and the p-value
print('ADF Statistic:', result[0])
print('p-value:', result[1])

ADF Statistic: 5.297698878151179
p-value: 1.0

city_stationary = city.diff().dropna()

# Run ADF test on the differenced time series
result = adfuller(city_stationary['city_population'])

# Plot the differenced time series
fig, ax = plt.subplots();
city_stationary.plot(ax=ax);

# Print the test statistic and the p-value
print('ADF Statistic:', result[0])
print('p-value:', result[1])

ADF Statistic: -0.8146211646181923
p-value: 0.8147894381484837

city_stationary = city.diff().diff().dropna()

# Run ADF test on the twice-differenced time series
result = adfuller(city_stationary['city_population'])

# Plot the differenced time series
fig, ax = plt.subplots();
city_stationary.plot(ax=ax);

# Print the test statistic and the p-value
print('ADF Statistic:', result[0])
print('p-value:', result[1])

ADF Statistic: -6.433646032918725
p-value: 1.673449851040202e-08


### Other transforms

Differencing should be the first transform you try to make a time series stationary. But sometimes it isn't the best option.

A classic way of transforming stock time series is the log-return of the series. This is calculated as follows:

$$\text{log_return} (y_t) = \log \big(\frac{y_t}{y_{t-1}}\big)$$

You can calculate the log-return of this DataFrame by substituting:

$y_t$ → amazon

$y_{t-1}$ → amazon.shift(1)

$\log()$ → np.log()

In this exercise you will compare the log-return transform and the first order difference of the Amazon stock time series to find which is better for making the time series stationary.

amazon = pd.read_csv('./dataset/amazon_close.csv', index_col='date', parse_dates=True)

amazon.head()

              close
date
2019-02-08  1588.22
2019-02-07  1614.37
2019-02-06  1640.26
2019-02-05  1658.81
2019-02-04  1633.31

amazon_diff = amazon.diff()
amazon_diff = amazon_diff.dropna()

# Run test and print
result_diff = adfuller(amazon_diff['close'])
print(result_diff)

(-7.2035794888112035, 2.3312717254877214e-10, 23, 1234, {'1%': -3.435660336370594, '5%': -2.863885022214541, '10%': -2.568018522153254}, 10764.626718933836)

amazon_log = np.log(amazon.div(amazon.shift(1)))
amazon_log = amazon_log.dropna()

# Run test and print
result_log = adfuller(amazon_log['close'])
print(result_log)

(-34.915748536059674, 0.0, 0, 1257, {'1%': -3.4355629707955395, '5%': -2.863842063387667, '10%': -2.567995644141416}, -6245.723147672197)


## Intro to AR, MA and ARMA models

• AR models
• Autoregressive (AR) model
• AR(1) model: $$y_t = a_1 y_{t-1} + \epsilon_t$$
• AR(2) model: $$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \epsilon_t$$
• AR(p) model: $$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_p y_{t-p} + \epsilon_t$$
• MA models
• Moving Average (MA) model
• MA(1) model: $$y_t = m_1 \epsilon_{t-1} + \epsilon_t$$
• MA(2) model: $$y_t = m_1 \epsilon_{t-1} + m_2 \epsilon_{t-2} + \epsilon_t$$
• MA(q) model: $$y_t = m_1 \epsilon_{t-1} + m_2 \epsilon_{t-2} + \cdots + m_q \epsilon_{t-q} + \epsilon_t$$
• ARMA models
• Autoregressive moving-average (ARMA) model
• ARMA = AR + MA
• ARMA(1, 1) model: $$y_t = a_1 y_{t-1} + m_1 \epsilon_{t-1} + \epsilon_t$$
• ARMA(p, q) model:
• p is the order of the AR part
• q is the order of the MA part
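The recursions above can be simulated directly to get a feel for what ARMA data looks like. A minimal sketch of the ARMA(1, 1) equation, where a1 = 0.5 and m1 = 0.2 are illustrative choices:

```python
import numpy as np

# ARMA(1,1) recursion: y_t = a1*y_{t-1} + m1*eps_{t-1} + eps_t
rng = np.random.default_rng(42)
a1, m1 = 0.5, 0.2
n = 200

eps = rng.normal(0, 1, n)
y = np.zeros(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = a1 * y[t - 1] + m1 * eps[t - 1] + eps[t]

# Successive values are correlated through both y_{t-1} and eps_{t-1}
print(np.corrcoef(y[:-1], y[1:])[0, 1])
```

In practice you would use a library routine such as statsmodels' `arma_generate_sample` (used below) rather than writing the loop yourself, but the loop makes the role of each term in the equation explicit.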

### Model order

When fitting and working with AR, MA and ARMA models, it is very important to understand the model order. You will need to pick the model order when fitting, and picking it correctly will give you a better-fitting model that makes better predictions. So in this section you will practice working with model order.

ar_coefs = [1, 0.4, -0.1]
ma_coefs = [1, 0.2]

from statsmodels.tsa.arima_process import arma_generate_sample

y = arma_generate_sample(ar_coefs, ma_coefs, nsample=100, scale=0.5)

plt.plot(y);


### Generating ARMA data

In this exercise you will generate 100 days' worth of AR/MA/ARMA data. Remember that in real-world applications, this data could be changes in Google stock prices, the energy requirements of New York City, or the number of cases of flu.

You can use the arma_generate_sample() function available in your workspace to generate time series using different AR and MA coefficients.

Remember for any model ARMA(p,q):

• The list ar_coefs has the form $[1, -a_1, -a_2, ..., -a_p]$.
• The list ma_coefs has the form $[1, m_1, m_2, ..., m_q]$,

where $a_i$ are the lag-i AR coefficients and $m_j$ are the lag-j MA coefficients.
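The negated AR coefficients are easy to get wrong, so it can be worth sanity-checking the convention. A sketch, where a1 = 0.5 is an illustrative choice: passing `[1, -a1]` should produce data whose lag-1 regression slope lands near +a1:

```python
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample

# AR(1) with a1 = 0.5: note the *negated* coefficient in the list
a1 = 0.5
np.random.seed(0)
y = arma_generate_sample([1, -a1], [1], nsample=5000)

# Regress y_t on y_{t-1}; the slope should be close to +0.5
slope = np.polyfit(y[:-1], y[1:], 1)[0]
print(round(slope, 2))
```

If you accidentally pass `[1, a1]`, the generated process has AR coefficient -a1 and the slope comes out negative instead.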

np.random.seed(1)


#### MA(1) model with MA lag coefficient of -0.7

ar_coefs = [1]
ma_coefs = [1, -0.7]

# Generate data
y = arma_generate_sample(ar_coefs, ma_coefs, nsample=100, scale=0.5)

plt.plot(y);
plt.ylabel(r'$y_t$');
plt.xlabel(r'$t$');


#### AR(2) model with AR lag-1 and lag-2 coefficients of 0.3 and 0.2

np.random.seed(2)

# Set coefficients
ar_coefs = [1, -0.3, -0.2]
ma_coefs = [1]

# Generate data
y = arma_generate_sample(ar_coefs, ma_coefs, nsample=100, scale=0.5)

plt.plot(y);
plt.ylabel(r'$y_t$');
plt.xlabel(r'$t$');


#### ARMA model with $y_t = -0.2 y_{t-1} + 0.3 \epsilon_{t-1} + 0.4 \epsilon_{t-2} + \epsilon_t$

np.random.seed(3)

# Set coefficients
ar_coefs = [1, 0.2]
ma_coefs = [1, 0.3, 0.4]

# Generate data
y = arma_generate_sample(ar_coefs, ma_coefs, nsample=100, scale=0.5)

plt.plot(y);
plt.ylabel(r'$y_t$');
plt.xlabel(r'$t$');


### Fitting Prelude

Great, you understand model order! Understanding the order is important when it comes to fitting models. You will always need to select the order of model you fit to your data, no matter what that data is.

In this exercise you will do some basic fitting. Fitting models is the next key step towards making predictions. We'll go into this more in the next chapter but let's get a head start.

Some example ARMA(1,1) data have been created and are available in your environment as y. This data could represent the amount of traffic congestion. You could use forecasts of it to suggest efficient routes for drivers.

from statsmodels.tsa.arima.model import ARIMA

# Instantiate the model (the older statsmodels ARMA class has been removed;
# an ARMA(1, 1) model is ARIMA with order=(1, 0, 1))
model = ARIMA(y, order=(1, 0, 1))

# Fit the model
results = model.fit()