The Keras Functional API
In this chapter, you'll become familiar with the basics of the Keras functional API. You'll build a simple functional network using functional building blocks, fit it to data, and make predictions. This is a summary of the lecture "Advanced Deep Learning with Keras" on DataCamp.
- Keras input and dense layers
- Build and compile a model
- Fit and evaluate a model
```python
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

plt.rcParams['figure.figsize'] = (8, 8)
```
The first step in creating a neural network model is to define the Input layer. This layer takes in raw data, usually in the form of numpy arrays. The shape of the Input layer defines how many variables your neural network will use. For example, if the input data has 10 columns, you define an Input layer with a shape of `(10,)`.
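For instance, the 10-column case mentioned above could be sketched like this (a minimal illustration; the exercises in this chapter use a single-column input):

```python
from tensorflow.keras.layers import Input

# An input layer for data with 10 columns
wide_input = Input(shape=(10,))

# The first dimension (None) is the batch size, left unspecified
print(wide_input.shape)
```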
In this case, you are only using one input in your network.
```python
from tensorflow.keras.layers import Input

# Create an input layer of shape 1
input_tensor = Input(shape=(1,))
```
Once you have an Input layer, the next step is to add a Dense layer.
Dense layers learn a weight matrix, where the first dimension of the matrix is the dimension of the input data, and the second dimension is the dimension of the output data. Recall that your Input layer has a shape of 1. In this case, your output layer will also have a shape of 1. This means that the Dense layer will learn a 1x1 weight matrix.
In this exercise, you will add a dense layer to your model, after the input layer.
```python
from tensorflow.keras.layers import Dense

# Input layer
input_tensor = Input(shape=(1,))

# Dense layer
output_layer = Dense(1)

# Connect the dense layer to the input_tensor
output_tensor = output_layer(input_tensor)
```
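You can verify the 1x1 weight matrix described above by inspecting the layer's weights (a quick sketch; once a Dense layer is called on an input, Keras builds its kernel and bias):

```python
from tensorflow.keras.layers import Input, Dense

input_tensor = Input(shape=(1,))
output_layer = Dense(1)

# Calling the layer on the input tensor creates its weights
output_tensor = output_layer(input_tensor)

kernel, bias = output_layer.get_weights()
print(kernel.shape)  # the 1x1 weight matrix
print(bias.shape)    # a single bias term
```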
Output layers are simply Dense layers! Output layers are used to reduce the dimension of the inputs to the dimension of the outputs. You'll learn more about output dimensions in chapter 4, but for now, you'll always use a single output in your neural networks, which is equivalent to `Dense(1)`: a dense layer with a single unit.
```python
input_tensor = Input(shape=(1,))

# Create a dense layer and connect it to the input_tensor in one step
# Note that we did this in 2 steps in the previous exercise, but are doing it in one step now
output_tensor = Dense(1)(input_tensor)
```
```python
from tensorflow.keras.models import Model

input_tensor = Input(shape=(1,))
output_tensor = Dense(1)(input_tensor)

# Build the model
model = Model(input_tensor, output_tensor)
```
Compile a model
The final step in creating a model is compiling it. Now that you've created a model, you have to compile it before you can fit it to data. This finalizes your model, freezes all its settings, and prepares it to meet some data!
During compilation, you specify the optimizer to use for fitting the model to the data, and a loss function.
`'adam'` is a good default optimizer to use, and will generally work well. The loss function depends on the problem at hand. Mean squared error is a common loss function and will optimize for predicting the mean, as is done in least squares regression. Mean absolute error optimizes for the median and is used in quantile regression. For this dataset, `'mean_absolute_error'` works pretty well, so use it as your loss function.
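The compilation step itself is a single call; for example, on the one-input model built above:

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_tensor = Input(shape=(1,))
output_tensor = Dense(1)(input_tensor)
model = Model(input_tensor, output_tensor)

# Compile the model with the Adam optimizer and mean absolute error loss
model.compile(optimizer='adam', loss='mean_absolute_error')
```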
To use `plot_model`, you need to install pydot, pydotplus, and graphviz. After installing them, restart the kernel.
```bash
sudo apt install graphviz
pip install pydot pydotplus graphviz
```
```python
from tensorflow.keras.utils import plot_model

# Summarize the model
model.summary()

# Plot the model
plot_model(model, to_file='../images/plot_model.png')

# Display the image
data = plt.imread('../images/plot_model.png')
plt.imshow(data);
```
```
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_5 (InputLayer)         [(None, 1)]               0
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
```
Fit the model to the tournament basketball data
Now that the model is compiled, you are ready to fit it to some data!
In this exercise, you'll use a dataset of scores from US College Basketball tournament games. Each row of the dataset has the team ids `team_1` and `team_2` as integers. It also has the seed difference between the teams (seeds are assigned by the tournament committee and represent a ranking of how strong the teams are) and the score difference of the game (e.g. if `team_1` wins by 5 points, the score difference is 5).
To fit the model, you provide a matrix of X variables (in this case one column: the seed difference) and a matrix of Y variables (in this case one column: the score difference).
```python
games_tourney = pd.read_csv('./dataset/games_tourney.csv')
games_tourney.head()
```
```python
from sklearn.model_selection import train_test_split

games_tourney_train, games_tourney_test = train_test_split(games_tourney, test_size=0.3)
```
```python
input_tensor = Input(shape=(1,))
output_tensor = Dense(1)(input_tensor)
model = Model(input_tensor, output_tensor)
model.compile(optimizer='adam', loss='mean_absolute_error')
```
```python
model.fit(games_tourney_train['seed_diff'], games_tourney_train['score_diff'],
          epochs=1,
          batch_size=128,
          validation_split=0.1,
          verbose=True);
```
```
21/21 [==============================] - 0s 8ms/step - loss: 9.5143 - val_loss: 9.5148
```
Evaluate the model on a test set
After fitting the model, you can evaluate it on new data. You will give the model a new `X` matrix (also called test data), allow it to make predictions, and then compare them to the known `y` variable (also called target data).
In this case, you'll use data from the post-season tournament to evaluate your model. The tournament games happen after the regular season games you used to train your model, and are therefore a good evaluation of how well your model performs out-of-sample.
```python
# Load the X variable from the test data
X_test = games_tourney_test['seed_diff']

# Load the y variable from the test data
y_test = games_tourney_test['score_diff']

# Evaluate the model on the test data
print(model.evaluate(X_test, y_test, verbose=False))
```
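The intro also promised predictions, which is just a call to `model.predict`. A minimal self-contained sketch, using a freshly built (unfitted) model and a few hypothetical seed differences in place of the fitted model and test set above:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Rebuild the one-input model (in the notebook you would reuse the fitted model)
input_tensor = Input(shape=(1,))
output_tensor = Dense(1)(input_tensor)
model = Model(input_tensor, output_tensor)
model.compile(optimizer='adam', loss='mean_absolute_error')

# Predict score differences for a few hypothetical seed differences
seed_diffs = np.array([[-4.0], [0.0], [4.0]])
preds = model.predict(seed_diffs)
print(preds.shape)  # one predicted score difference per row
```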