Training a Hello World Model for Microcontrollers
In this post, we'll walk through a basic tutorial for training a simple regression model with TensorFlow Lite for Microcontrollers (TFLM). This post is a summary of the YouTube video "TinyML Book Screencast - Training the Hello World model", presented by Pete Warden.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['figure.figsize'] = (16, 10)
plt.rcParams['text.usetex'] = True
plt.rc('font', size=15)
TensorFlow Lite for Microcontrollers
TensorFlow Lite for Microcontrollers (TFLM for short) is designed to run machine learning models on microcontrollers and other devices with only a few kilobytes of memory. This practice of deploying machine learning models on embedded devices is also known as TinyML. TensorFlow is one of the most commonly used deep learning frameworks, and from version 2.x it offers machine learning features for embedded systems through TensorFlow Lite. Unlike high-performance mobile processors (such as the Cortex-A series), microcontrollers (the Cortex-M series or the ESP32) have very low power consumption and can be embedded in all kinds of consumer products, such as refrigerators, washing machines, and so on.
Google lists several supported boards for testing:
- Arduino Nano 33 BLE Sense (using Arduino IDE)
- SparkFun Edge (building directly from source)
- STM32F746 Discovery kit (using Mbed)
- Adafruit EdgeBadge (using Arduino IDE)
- Adafruit TensorFlow Lite for Microcontrollers Kit (using Arduino IDE)
- Adafruit Circuit Playground Bluefruit (using Arduino IDE)
- Espressif ESP32-DevKitC (using ESP IDF)
- Espressif ESP-EYE (using ESP IDF)
Here, we'll implement the simple regression model on the SparkFun Edge. Most of this content is covered in Pete Warden's screencast; more details can be found in his book.
Hello world
Actually, "Hello world" may be the first program we faced, since it can show simple interaction between human and computer. TinyML also has simple example of "Hello world". Instead of Printing, we will build a model to generate the sine wave. So maybe our hypothesis will be like this,
$$ \tilde{H}(x) = \sin(x) $$
First, we load the required packages:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import math
plt.rcParams['figure.figsize'] = (16, 10)
plt.rc('font', size=15)
# define random seed for reproducibility
np.random.seed(1) # numpy seed
tf.random.set_seed(1) # tensorflow global random seed
print('Numpy: {}'.format(np.__version__))
print('Tensorflow: {}'.format(tf.__version__))
To train the model, we need data, namely training data. In our case, we will sample random inputs with NumPy and generate the corresponding outputs from a known model (the sine function).
# 0 to 2π, which covers a complete sine wave oscillation
X = np.random.uniform(
    low=0, high=2*math.pi, size=10000).astype(np.float32)
# Shuffle the values to guarantee they're not in order
np.random.shuffle(X)
# Calculate the corresponding sine values
y = np.sin(X).astype(np.float32)
# Plot our data. The 'b.' argument tells the library to print blue dots.
plt.plot(X, y, 'b.')
plt.grid()
plt.show()
But this data doesn't reflect real-world data, because there is no variation in its distribution, i.e. no noise. We can add a small random offset to each sample so the data looks more like realistic, noisy measurements.
y += 0.1 * np.random.randn(*y.shape)
# Plot our data
plt.plot(X, y, 'b.')
plt.grid()
plt.show()
Even with this noise, we expect a well-trained model to approximate the underlying sinusoid.
Before training, we preprocess the data by splitting it into the following proportions:
- Train data: 60%
- Validation data: 20%
- Test data: 20%
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=1)
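# Quick sanity check (not part of the original screencast): the split above
# should leave roughly 60% / 20% / 20% of the 10,000 samples
print(len(X_train), len(X_val), len(X_test))  # expected: 6000 2000 2000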
# Plot the data in each partition in different colors:
plt.plot(X_train, y_train, 'b.', label="Train")
plt.plot(X_val, y_val, 'y.', label="Validate")
plt.plot(X_test, y_test, 'r.', label="Test")
plt.legend()
plt.grid()
plt.show()
To build the model, we use the Keras Sequential API and add two Dense layers. We then compile it with the Adam optimizer and mean squared error as the loss.
model = tf.keras.Sequential(name='sine')
model.add(tf.keras.layers.Dense(16, activation='relu', input_shape=(1, )))
model.add(tf.keras.layers.Dense(1))
model.summary()
model.compile(optimizer='adam', loss='mse', metrics=['accuracy', 'mae'])
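Because memory is the scarce resource on a microcontroller, it is worth checking how small this network is; the parameter count reported by model.summary() can be worked out by hand:
$$ \underbrace{1 \times 16 + 16}_{\text{Dense}(16)} + \underbrace{16 \times 1 + 1}_{\text{Dense}(1)} = 32 + 17 = 49 $$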
Now, it's time to train the model.
history = model.fit(X_train, y_train, epochs=1000, batch_size=15, validation_data=(X_val, y_val), verbose=False)
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'g.', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
# Exclude the first few epochs so the graph is easier to read
SKIP = 50
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
After training, we evaluate the model on the test data and compare its predictions against the actual values.
loss = model.evaluate(X_test, y_test)
# Make predictions based on our test dataset
predictions = model.predict(X_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(X_test, y_test, 'b.', label='Actual')
plt.plot(X_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
The predictions only roughly follow the sine wave: with a single small hidden layer, the first model doesn't have enough capacity to learn the non-linearity well. To improve it, we build a second model with one more Dense layer of 16 neurons.
model_2 = tf.keras.Sequential(name='sine2')
# First layer takes a scalar input and feeds it through 16 "neurons". The
# neurons decide whether to activate based on the 'relu' activation function.
model_2.add(tf.keras.layers.Dense(16, activation='relu', input_shape=(1,)))
# The new second layer may help the network learn more complex representations
model_2.add(tf.keras.layers.Dense(16, activation='relu'))
# Final layer is a single neuron, since we want to output a single value
model_2.add(tf.keras.layers.Dense(1))
# Compile the model using a standard optimizer and loss function for regression
model_2.compile(optimizer='adam', loss='mse', metrics=['accuracy', 'mae'])
model_2.summary()
history = model_2.fit(X_train, y_train, epochs=500, batch_size=64,
                      validation_data=(X_val, y_val), verbose=False)
# Draw a graph of the loss, which is the distance between
# the predicted and actual values during training and validation.
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
# Exclude the first few epochs so the graph is easier to read
SKIP = 100
plt.figure()
plt.subplot(1, 2, 1)
plt.plot(epochs[SKIP:], loss[SKIP:], 'g.', label='Training loss')
plt.plot(epochs[SKIP:], val_loss[SKIP:], 'b.', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.subplot(1, 2, 2)
# Draw a graph of mean absolute error, which is another way of
# measuring the amount of error in the prediction.
mae = history.history['mae']
val_mae = history.history['val_mae']
plt.plot(epochs[SKIP:], mae[SKIP:], 'g.', label='Training MAE')
plt.plot(epochs[SKIP:], val_mae[SKIP:], 'b.', label='Validation MAE')
plt.title('Training and validation mean absolute error')
plt.xlabel('Epochs')
plt.ylabel('MAE')
plt.legend()
plt.tight_layout()
We evaluate the improved model on the test data in the same way.
loss = model_2.evaluate(X_test, y_test)
# Make predictions based on our test dataset
predictions = model_2.predict(X_test)
# Graph the predictions against the actual values
plt.clf()
plt.title('Comparison of predictions and actual values')
plt.plot(X_test, y_test, 'b.', label='Actual')
plt.plot(X_test, predictions, 'r.', label='Predicted')
plt.legend()
plt.show()
Now that we have a trained model, the next step is to convert it to the TensorFlow Lite format so it can run on a microcontroller. We'll save both an unquantized and a quantized version so we can compare them.
import os
MODEL_DIR = './models/'
if not os.path.exists(MODEL_DIR):
    os.mkdir(MODEL_DIR)
# Convert the model to the TensorFlow Lite format without quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model_2)
model_no_quant_tflite = converter.convert()
# Save the model to disk
open(MODEL_DIR + 'model_no_quant.tflite', "wb").write(model_no_quant_tflite)
# Convert the model to the TensorFlow Lite format with quantization
def representative_dataset():
    for i in range(500):
        yield([X_train[i].reshape(1, 1)])
# Set the optimization flag.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce integer-only quantization of weights and activations
# (with this setup the input and output tensors stay float)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Provide a representative dataset to ensure we quantize correctly.
converter.representative_dataset = representative_dataset
model_tflite = converter.convert()
# Save the model to disk
open(MODEL_DIR + 'model.tflite', "wb").write(model_tflite)
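If the deployment target expects integer input and output tensors as well (some TFLM examples do), the converter can be asked to quantize them too. This is an optional variant, not part of the original screencast; it assumes TensorFlow 2.3 or later, and the file name model_int8_io.tflite is an arbitrary choice:
# Optional: a converter variant that also quantizes the input/output tensors
converter_int8 = tf.lite.TFLiteConverter.from_keras_model(model_2)
converter_int8.optimizations = [tf.lite.Optimize.DEFAULT]
converter_int8.representative_dataset = representative_dataset
converter_int8.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter_int8.inference_input_type = tf.int8   # int8 input tensor
converter_int8.inference_output_type = tf.int8  # int8 output tensor
model_int8_tflite = converter_int8.convert()
open(MODEL_DIR + 'model_int8_io.tflite', 'wb').write(model_int8_tflite)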
model_no_quant_size = os.path.getsize(MODEL_DIR + 'model_no_quant.tflite')
print("Model is %d bytes" % model_no_quant_size)
model_size = os.path.getsize(MODEL_DIR + 'model.tflite')
print("Quantized model is %d bytes" % model_size)
difference = model_no_quant_size - model_size
print("Difference is %d bytes" % difference)
To verify that the converted models still behave like the original Keras model, we load both TensorFlow Lite files into interpreters and run them over the test set.
model_no_quant = tf.lite.Interpreter(MODEL_DIR + 'model_no_quant.tflite')
model = tf.lite.Interpreter(MODEL_DIR + 'model.tflite')
# Allocate memory for each model
model_no_quant.allocate_tensors()
model.allocate_tensors()
# Get the input and output tensors so we can feed in values and get the results
model_no_quant_input = model_no_quant.tensor(model_no_quant.get_input_details()[0]["index"])
model_no_quant_output = model_no_quant.tensor(model_no_quant.get_output_details()[0]["index"])
model_input = model.tensor(model.get_input_details()[0]["index"])
model_output = model.tensor(model.get_output_details()[0]["index"])
# Create arrays to store the results
model_no_quant_predictions = np.empty(X_test.size)
model_predictions = np.empty(X_test.size)
# Run each model's interpreter for each value and store the results in arrays
for i in range(X_test.size):
    model_no_quant_input().fill(X_test[i])
    model_no_quant.invoke()
    model_no_quant_predictions[i] = model_no_quant_output()[0]
    model_input().fill(X_test[i])
    model.invoke()
    model_predictions[i] = model_output()[0]
# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(X_test, y_test, 'bo', label='Actual values', alpha=0.4)
plt.plot(X_test, predictions, 'ro', label='Original predictions')
plt.plot(X_test, model_no_quant_predictions, 'bx', label='Lite predictions')
plt.plot(X_test, model_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
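Beyond the visual comparison, it is easy to quantify how much accuracy each conversion step costs. A quick check, not in the original screencast, using mean absolute error on the test set:
# Mean absolute error of each model against the noisy test labels
print('Keras MAE:           ', np.mean(np.abs(predictions.flatten() - y_test)))
print('TFLite MAE:          ', np.mean(np.abs(model_no_quant_predictions - y_test)))
print('TFLite quantized MAE:', np.mean(np.abs(model_predictions - y_test)))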
Finally, we convert the quantized model into a C source file with xxd, so it can be compiled directly into the microcontroller firmware.
!xxd -i {MODEL_DIR + 'model.tflite'} > {MODEL_DIR + 'model.cc'}
!cat {MODEL_DIR + 'model.cc'}
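If xxd is not installed, a few lines of Python can produce an equivalent C array. This is a sketch, not part of the original screencast; the array name g_model_data and the output file name are arbitrary choices:
# Write the quantized model bytes out as a C array, similar to `xxd -i`
model_bytes = open(MODEL_DIR + 'model.tflite', 'rb').read()
with open(MODEL_DIR + 'model_data.cc', 'w') as f:
    f.write('const unsigned char g_model_data[] = {\n')
    for i in range(0, len(model_bytes), 12):
        chunk = ', '.join('0x%02x' % b for b in model_bytes[i:i + 12])
        f.write('  ' + chunk + ',\n')
    f.write('};\n')
    f.write('const unsigned int g_model_data_len = %d;\n' % len(model_bytes))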