
How to Save and Load Your Keras Deep Learning Model

Last Updated on September 13, 2019

Keras is a simple and powerful Python library for deep learning.

Given that deep learning models can take hours, days and even weeks to train, it is important to know how to save and load them from disk.

In this post, you will discover how you can save your Keras models to file and load them up again to make predictions.

After reading this tutorial you will know:

  • How to save model weights and model architecture in separate files.
  • How to save model architecture in both YAML and JSON format.
  • How to save model weights and architecture into a single file for later use.

Discover how to develop deep learning models for a range of predictive modeling problems with just a few lines of code in my new book, with 18 step-by-step tutorials and 9 projects.

Let’s get started.

  • Update Mar/2017: Added instructions to install h5py first.
  • Update Mar/2017: Updated examples for changes to the Keras API.
  • Update Mar/2018: Added alternate link to download the dataset.
  • Update May/2019: Added section on saving and loading the model to a single file.
  • Update Sep/2019: Added note about using PyYAML version 5.

How to Save and Load Your Keras Deep Learning Models
Photo by art_inthecity, some rights reserved.

Tutorial Overview

If you are new to Keras or deep learning, see this step-by-step Keras tutorial.

Keras separates the concerns of saving your model architecture and saving your model weights.

Model weights are saved to HDF5 format. This is a grid format that is ideal for storing multi-dimensional arrays of numbers.

The model structure can be described and saved using two different formats: JSON and YAML.

In this post, we are going to look at two examples of saving and loading your model to file:

  • Save Model to JSON.
  • Save Model to YAML.

Each example will also demonstrate saving and loading your model weights to HDF5 formatted files.

The examples will use the same simple network trained on the Pima Indians onset of diabetes binary classification dataset. This is a small dataset that contains all numerical data and is easy to work with. You can download this dataset and place it in your working directory with the filename “pima-indians-diabetes.csv” (update: download from here).

Confirm that you have the latest version of Keras installed (e.g. v2.2.4 as of May 2019).

Note: Saving models requires that you have the h5py library installed. You can install it easily as follows:
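pip install h5py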


Save Your Neural Network Model to JSON

JSON is a simple file format for describing data hierarchically.

Keras provides the ability to describe any model using JSON format with a to_json() function. This can be saved to file and later loaded via the model_from_json() function that will create a new model from the JSON specification.

The weights are saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function.

The example below trains and evaluates a simple model on the Pima Indians dataset. The model is then converted to JSON format and written to model.json in the local directory. The network weights are written to model.h5 in the local directory.

The model and weight data are loaded from the saved files, and a new model is created. It is important to compile the loaded model before it is used. This is so that predictions made using the model can use the appropriate efficient computation from the Keras backend.

The model is evaluated in the same way, printing the same evaluation score.

# MLP for Pima Indians Dataset Serialize to JSON and HDF5
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_json
import numpy
# fix random seed for reproducibility
numpy.random.seed(7)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# serialize model to JSON
model_json = model.to_json()
with open("model.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# later...

# load json and create model
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")

# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))

Running this example provides the output below.

acc: 78.78%
Saved model to disk
Loaded model from disk
acc: 78.78%

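If you are curious about what was actually written to the weights file, a quick optional check (not required for the tutorial) is to list its contents with the h5py library. This is a minimal sketch, assuming model.h5 was saved as above:

import h5py
# open the saved weights file and print the name of every group and dataset
with h5py.File("model.h5", "r") as f:
    f.visit(print)

You should see one group per layer, containing the kernel and bias arrays.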

The JSON format of the model looks like the following:

{
   "class_name":"Sequential",
   "config":{
      "name":"sequential_1",
      "layers":[
         {
            "class_name":"Dense",
            "config":{
               "name":"dense_1",
               "trainable":true,
               "batch_input_shape":[
                  null,
                  8
               ],
               "dtype":"float32",
               "units":12,
               "activation":"relu",
               "use_bias":true,
               "kernel_initializer":{
                  "class_name":"VarianceScaling",
                  "config":{
                     "scale":1.0,
                     "mode":"fan_avg",
                     "distribution":"uniform",
                     "seed":null
                  }
               },
               "bias_initializer":{
                  "class_name":"Zeros",
                  "config":{}
               },
               "kernel_regularizer":null,
               "bias_regularizer":null,
               "activity_regularizer":null,
               "kernel_constraint":null,
               "bias_constraint":null
            }
         },
         {
            "class_name":"Dense",
            "config":{
               "name":"dense_2",
               "trainable":true,
               "dtype":"float32",
               "units":8,
               "activation":"relu",
               "use_bias":true,
               "kernel_initializer":{
                  "class_name":"VarianceScaling",
                  "config":{
                     "scale":1.0,
                     "mode":"fan_avg",
                     "distribution":"uniform",
                     "seed":null
                  }
               },
               "bias_initializer":{
                  "class_name":"Zeros",
                  "config":{}
               },
               "kernel_regularizer":null,
               "bias_regularizer":null,
               "activity_regularizer":null,
               "kernel_constraint":null,
               "bias_constraint":null
            }
         },
         {
            "class_name":"Dense",
            "config":{
               "name":"dense_3",
               "trainable":true,
               "dtype":"float32",
               "units":1,
               "activation":"sigmoid",
               "use_bias":true,
               "kernel_initializer":{
                  "class_name":"VarianceScaling",
                  "config":{
                     "scale":1.0,
                     "mode":"fan_avg",
                     "distribution":"uniform",
                     "seed":null
                  }
               },
               "bias_initializer":{
                  "class_name":"Zeros",
                  "config":{}
               },
               "kernel_regularizer":null,
               "bias_regularizer":null,
               "activity_regularizer":null,
               "kernel_constraint":null,
               "bias_constraint":null
            }
         }
      ]
   },
   "keras_version":"2.2.5",
   "backend":"tensorflow"
}

Save Your Neural Network Model to YAML

This example is much the same as the above JSON example, except the YAML format is used for the model specification.

Note: this example assumes that you have PyYAML 5 installed. You can install it as follows:
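pip install PyYAML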

In this example, the model is described using YAML, saved to file model.yaml and later loaded into a new model via the model_from_yaml() function.

Weights are handled in the same way as above in HDF5 format as model.h5.

# MLP for Pima Indians Dataset serialize to YAML and HDF5
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_yaml
import numpy
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load pima indians dataset
dataset = numpy.loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# create model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

# serialize model to YAML
model_yaml = model.to_yaml()
with open("model.yaml", "w") as yaml_file:
    yaml_file.write(model_yaml)
# serialize weights to HDF5
model.save_weights("model.h5")
print("Saved model to disk")

# later...

# load YAML and create model
yaml_file = open('model.yaml', 'r')
loaded_model_yaml = yaml_file.read()
yaml_file.close()
loaded_model = model_from_yaml(loaded_model_yaml)
# load weights into new model
loaded_model.load_weights("model.h5")
print("Loaded model from disk")

# evaluate loaded model on test data
loaded_model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
score = loaded_model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (loaded_model.metrics_names[1], score[1]*100))


Running the example displays the following output:

acc: 78.78%
Saved model to disk
Loaded model from disk
acc: 78.78%


The model described in YAML format looks like the following:


backend: tensorflow
class_name: Sequential
config:
  layers:
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      batch_input_shape: !!python/tuple
      - null
      - 8
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config:
          distribution: uniform
          mode: fan_avg
          scale: 1.0
          seed: null
      kernel_regularizer: null
      name: dense_1
      trainable: true
      units: 12
      use_bias: true
  - class_name: Dense
    config:
      activation: relu
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config:
          distribution: uniform
          mode: fan_avg
          scale: 1.0
          seed: null
      kernel_regularizer: null
      name: dense_2
      trainable: true
      units: 8
      use_bias: true
  - class_name: Dense
    config:
      activation: sigmoid
      activity_regularizer: null
      bias_constraint: null
      bias_initializer:
        class_name: Zeros
        config: {}
      bias_regularizer: null
      dtype: float32
      kernel_constraint: null
      kernel_initializer:
        class_name: VarianceScaling
        config:
          distribution: uniform
          mode: fan_avg
          scale: 1.0
          seed: null
      kernel_regularizer: null
      name: dense_3
      trainable: true
      units: 1
      use_bias: true
  name: sequential_1
keras_version: 2.2.5

Save Model Weights and Architecture Together

Keras also supports a simpler interface to save both the model weights and model architecture together into a single H5 file.

Saving the model in this way includes everything we need to know about the model, including:

  • Model weights.
  • Model architecture.
  • Model compilation details (loss and metrics).
  • Model optimizer state.

This means that we can load and use the model directly, without having to re-compile it as we did in the examples above.

Note: this is the preferred way to save and load your Keras model.

How to Save a Keras Model

You can save your model by calling the save() function on the model and specifying the filename.

The example below demonstrates this by first fitting a model, evaluating it and saving it to the file model.h5.

# MLP for Pima Indians Dataset saved to single file
from numpy import loadtxt
from keras.models import Sequential
from keras.layers import Dense
# load pima indians dataset
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# define model
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(X, Y, epochs=150, batch_size=10, verbose=0)
# evaluate the model
scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
# save model and architecture to single file
model.save("model.h5")
print("Saved model to disk")


Running the example fits the model, summarizes the model's performance on the training dataset, and saves the model to file.

acc: 77.73%
Saved model to disk


We can later load this model from file and use it.

How to Load a Keras Model

Your saved model can then be loaded later by calling the load_model() function and passing the filename. The function returns the model with the same architecture and weights.

In this case, we load the model, summarize the architecture and evaluate it on the same dataset to confirm the weights and architecture are the same.

# load and evaluate a saved model
from numpy import loadtxt
from keras.models import load_model

# load model
model = load_model('model.h5')
# summarize model
model.summary()
# load dataset
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
# evaluate the model
score = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], score[1]*100))


Running the example first loads the model, prints a summary of the model architecture, then evaluates the loaded model on the same dataset.

The model achieves the same accuracy score, which in this case is approximately 77%.


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 12)                108
_________________________________________________________________
dense_2 (Dense)              (None, 8)                 104
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 9
=================================================================
Total params: 221
Trainable params: 221
Non-trainable params: 0
_________________________________________________________________

acc: 77.73%
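Once loaded, the model can be used to make predictions on new data straight away, with no re-compiling required. The snippet below is a minimal sketch, assuming the model.h5 file saved above and the same dataset in the working directory, that predicts the probability of diabetes onset for the first five rows:

# make probability predictions with a loaded model
from numpy import loadtxt
from keras.models import load_model
# load the model saved earlier
model = load_model('model.h5')
# load the dataset and take the input columns
dataset = loadtxt("pima-indians-diabetes.csv", delimiter=",")
X = dataset[:, 0:8]
# predict the probability of onset for the first five rows
probabilities = model.predict(X[:5])
print(probabilities)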


Summary

In this post, you discovered how to serialize your Keras deep learning models.

You learned how you can save your trained models to files and later load them up and use them to make predictions.

You also learned that model weights are easily stored using the HDF5 format, and that the network structure can be saved in either JSON or YAML format.

Do you have any questions about saving your deep learning models or about this post?
Ask your questions in the comments and I will do my best to answer them.
