
How to Check-Point Deep Learning Models in Keras

Last Updated on October 3, 2019

Deep learning models can take hours, days or even weeks to train.

If the run is stopped unexpectedly, you can lose a lot of work.

In this post you will discover how you can check-point your deep learning models during training in Python using the Keras library.

Discover how to develop deep learning models for a range of predictive modeling problems with just a few lines of code in my new book, with 18 step-by-step tutorials and 9 projects.

Let’s get started.

  • Update Mar/2017: Updated for Keras 2.0.2, TensorFlow 1.0.1 and Theano 0.9.0.
  • Update March/2018: Added alternate link to download the dataset.
  • Update Sep/2019: Updated for Keras 2.2.5 API.
  • Update Oct/2019: Updated for Keras 2.3.0 API.

Photo by saragoldsmith, some rights reserved.

Checkpointing Neural Network Models

Application checkpointing is a fault tolerance technique for long-running processes.

It is an approach where a snapshot of the state of the system is taken in case of system failure. If there is a problem, not all is lost. The checkpoint may be used directly, or used as the starting point for a new run, picking up where it left off.

When training deep learning models, the checkpoint is the weights of the model. These weights can be used to make predictions as is, or used as the basis for ongoing training.
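For example, assuming the network has been recreated in code and compiled, a minimal sketch of resuming training from checkpointed weights might look like this (the filename is illustrative):

# Sketch: resume training from a saved checkpoint
# (assumes model, X and Y are already defined and the model is compiled)
model.load_weights("weights.best.hdf5")
model.fit(X, Y, validation_split=0.33, epochs=50, batch_size=10)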

The Keras library provides a checkpointing capability via a callback API.

The ModelCheckpoint callback class allows you to define where to checkpoint the model weights, how the file should be named, and under what circumstances to make a checkpoint of the model.

The API allows you to specify which metric to monitor, such as loss or accuracy on the training or validation dataset. You can specify whether an improvement means maximizing or minimizing the monitored score. Finally, the filename used to store the weights can include variables such as the epoch number or the metric value.

The ModelCheckpoint instance can then be passed to the training process via the callbacks argument when calling the fit() function on the model.
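For example, here is a minimal sketch of the pattern (assuming a compiled model and training arrays X and Y are already defined; the filename and monitored metric are illustrative):

# Sketch: pass a ModelCheckpoint to fit() via the callbacks argument
from keras.callbacks import ModelCheckpoint
checkpoint = ModelCheckpoint("weights.hdf5", monitor='val_loss', mode='min', save_best_only=True, verbose=1)
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, callbacks=[checkpoint])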

Note, you may need to install the h5py library to output network weights in HDF5 format.
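If it is not already available, it can usually be installed with pip:

pip install h5py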


Checkpoint Neural Network Model Improvements

A good use of checkpointing is to output the model weights each time an improvement is observed during training.

The example below creates a small neural network for the Pima Indians onset of diabetes binary classification problem. The example assumes that the pima-indians-diabetes.csv file is in your working directory.

You can download the dataset from here:

The example uses 33% of the data for validation.

Checkpointing is set up to save the network weights only when there is an improvement in classification accuracy on the validation dataset (monitor='val_accuracy' and mode='max'). The weights are stored in a file that includes the epoch number and the score in the filename (weights-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5).

# Checkpoint the weights when validation accuracy improves
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# checkpoint: save weights each time validation accuracy improves
filepath = "weights-improvement-{epoch:02d}-{val_accuracy:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, callbacks=callbacks_list, verbose=0)

Running the example produces the following output (truncated for brevity):


Epoch 00134: val_accuracy did not improve
Epoch 00135: val_accuracy did not improve
Epoch 00136: val_accuracy did not improve
Epoch 00137: val_accuracy did not improve
Epoch 00138: val_accuracy did not improve
Epoch 00139: val_accuracy did not improve
Epoch 00140: val_accuracy improved from 0.83465 to 0.83858, saving model to weights-improvement-140-0.84.hdf5
Epoch 00141: val_accuracy did not improve
Epoch 00142: val_accuracy did not improve
Epoch 00143: val_accuracy did not improve
Epoch 00144: val_accuracy did not improve
Epoch 00145: val_accuracy did not improve
Epoch 00146: val_accuracy improved from 0.83858 to 0.84252, saving model to weights-improvement-146-0.84.hdf5
Epoch 00147: val_accuracy did not improve
Epoch 00148: val_accuracy improved from 0.84252 to 0.84252, saving model to weights-improvement-148-0.84.hdf5
Epoch 00149: val_accuracy did not improve

You will see a number of files in your working directory containing the network weights in HDF5 format. For example:


weights-improvement-53-0.76.hdf5
weights-improvement-71-0.76.hdf5
weights-improvement-77-0.78.hdf5
weights-improvement-99-0.78.hdf5

This is a very simple checkpointing strategy. It may create a lot of unnecessary checkpoint files if the validation accuracy moves up and down over training epochs. Nevertheless, it will ensure that you have a snapshot of the best model discovered during your run.

Checkpoint Best Neural Network Model Only

A simpler checkpoint strategy is to save the model weights to the same file if and only if the validation accuracy improves.

This can be done using the same code as above, changing the output filename to be fixed (i.e., not including score or epoch information).

In this case, model weights are written to the file “weights.best.hdf5” only if the classification accuracy of the model on the validation dataset improves over the best seen so far.

# Checkpoint the weights for best model on validation accuracy
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
import numpy
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# checkpoint: overwrite a single file with the best weights seen so far
filepath = "weights.best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10, callbacks=callbacks_list, verbose=0)

Running this example provides the following output (truncated for brevity):


Epoch 00139: val_accuracy improved from 0.79134 to 0.79134, saving model to weights.best.hdf5
Epoch 00140: val_accuracy did not improve
Epoch 00141: val_accuracy did not improve
Epoch 00142: val_accuracy did not improve
Epoch 00143: val_accuracy did not improve
Epoch 00144: val_accuracy improved from 0.79134 to 0.79528, saving model to weights.best.hdf5
Epoch 00145: val_accuracy improved from 0.79528 to 0.79528, saving model to weights.best.hdf5
Epoch 00146: val_accuracy did not improve
Epoch 00147: val_accuracy did not improve
Epoch 00148: val_accuracy did not improve
Epoch 00149: val_accuracy did not improve

You should see the weight file in your local directory.

This is a handy checkpoint strategy to always use during your experiments. It ensures that the best model from the run is saved for you to use later if you wish, and it avoids the need to write your own code to track and serialize the best model during training.

Loading a Check-Pointed Neural Network Model

Now that you have seen how to checkpoint your deep learning models during training, you need to review how to load and use a checkpointed model.

The checkpoint only includes the model weights. It assumes that you know the network structure, which can itself be serialized to file in JSON or YAML format.
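For example, here is a minimal sketch of serializing the network structure with the JSON format and later rebuilding it (the YAML variant works the same way via to_yaml() and model_from_yaml()); it assumes a model has already been defined, and the filenames are illustrative:

# Sketch: save the network structure to JSON, then rebuild it and load checkpointed weights
from keras.models import model_from_json
with open("model.json", "w") as f:
    f.write(model.to_json())
# later, in a new session: rebuild the model from the JSON file
with open("model.json") as f:
    model = model_from_json(f.read())
model.load_weights("weights.best.hdf5")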

In the example below, the model structure is known and the best weights are loaded from the previous experiment, stored in the working directory in the weights.best.hdf5 file.

The model is then used to make predictions on the entire dataset.

# How to load and use weights from a checkpoint
from keras.models import Sequential
from keras.layers import Dense
import numpy
# create model (the structure must match the checkpointed network)
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# load weights
model.load_weights("weights.best.hdf5")
# Compile model (required to make predictions)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
print("Created model and loaded weights from file")
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# estimate accuracy on whole dataset using loaded weights
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Running the example produces the following output:

Created model and loaded weights from file
acc: 77.73%

Summary

In this post you have discovered the importance of checkpointing deep learning models for long training runs.

You learned two checkpointing strategies that you can use on your next deep learning project:

  1. Checkpoint Model Improvements.
  2. Checkpoint Best Model Only.

You also learned how to load a checkpointed model and make predictions.

Do you have any questions about checkpointing deep learning models or about this post? Ask your questions in the comments and I will do my best to answer.
