Last Updated on October 3, 2019
Batch normalization is a technique designed to automatically standardize the inputs to a layer in a deep learning neural network.
Once implemented, batch normalization has the effect of dramatically accelerating the training process of a neural network, and in some cases improves the performance of the model via a modest regularization effect.
In this tutorial, you will discover how to use batch normalization to accelerate the training of deep learning neural networks in Python with Keras.
After completing this tutorial, you will know:
- How to create and configure a BatchNormalization layer using the Keras API.
- How to add the BatchNormalization layer to deep learning neural network models.
- How to update an MLP model to use batch normalization to accelerate training on a binary classification problem.
Let’s get started.
- Updated Oct/2019: Updated for Keras 2.3 and TensorFlow 2.0.
Tutorial Overview
This tutorial is divided into three parts; they are:
- BatchNormalization in Keras
- BatchNormalization in Models
- BatchNormalization Case Study
BatchNormalization in Keras
Keras provides support for batch normalization via the BatchNormalization layer.
For example:
bn = BatchNormalization()
The layer will transform inputs so that they are standardized, meaning that they will have a mean of zero and a standard deviation of one.
During training, the layer will keep track of statistics for each input variable and use them to standardize the data.
Further, the standardized output can be scaled using the learned parameters of Beta and Gamma that define the new mean and standard deviation for the output of the transform. The layer can be configured to control whether these additional parameters will be used or not via the “center” and “scale” attributes respectively. By default, they are enabled.
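To make the transform concrete, the sketch below reproduces it in plain NumPy: standardize each input variable over the mini-batch, then rescale with the learned gamma (scale) and shift with the learned beta (center). This is an illustrative approximation, not the Keras internals; the `batch_norm_transform` helper and the `eps` value are assumptions for the example.

```python
import numpy as np

def batch_norm_transform(x, gamma, beta, eps=1e-3):
    # per-variable statistics of the mini-batch
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    # standardize to zero mean and (approximately) unit standard deviation
    x_hat = (x - mu) / np.sqrt(var + eps)
    # rescale: beta becomes the new mean, gamma the new standard deviation
    return gamma * x_hat + beta

# three samples of two input variables on very different scales
x = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
out = batch_norm_transform(x, gamma=np.ones(2), beta=np.zeros(2))
```

With gamma of one and beta of zero (the initial values), the output of each variable has a mean of zero and a standard deviation of approximately one.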
The statistics used to perform the standardization, e.g. the mean and standard deviation of each variable, are updated for each mini batch and a running average is maintained.
A “momentum” argument allows you to control how much of the statistics from the previous mini batch to include when the update is calculated. By default, this is kept high with a value of 0.99. This can be set to 0.0 to only use statistics from the current mini-batch, as described in the original paper.
bn = BatchNormalization(momentum=0.0)
At the end of training, the mean and standard deviation statistics in the layer at that time will be used to standardize inputs when the model is used to make a prediction.
The default configuration, estimating the mean and standard deviation across all mini-batches, is probably sensible.
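The bookkeeping for these running statistics can be sketched as follows (an assumed illustration, not the Keras source): each mini-batch's statistics are blended into the running averages using the momentum, and the running averages are what the layer uses at prediction time.

```python
import numpy as np

momentum = 0.99  # the Keras default; 0.0 would use only the current batch
running_mean, running_var = np.zeros(2), np.ones(2)

# statistics computed from one mini-batch of two input variables
batch_mean = np.array([2.0, 20.0])
batch_var = np.array([0.5, 50.0])

# blend the batch statistics into the running averages
running_mean = momentum * running_mean + (1.0 - momentum) * batch_mean
running_var = momentum * running_var + (1.0 - momentum) * batch_var
```

The high default momentum means each mini-batch nudges the running statistics only slightly, so they change smoothly over the course of training.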
BatchNormalization in Models
Batch normalization can be used at most points in a model and with most types of deep learning neural networks.
Input and Hidden Layer Inputs
The BatchNormalization layer can be added to your model to standardize raw input variables or the outputs of a hidden layer.
Batch normalization is not recommended as an alternative to proper data preparation for your model.
Nevertheless, when used to standardize the raw input variables, the layer must specify the input_shape argument; for example:
…
model = Sequential()
model.add(BatchNormalization(input_shape=(2,)))
…
When used to standardize the outputs of a hidden layer, the layer can be added to the model just like any other layer.
…
model = Sequential()
…
model.add(BatchNormalization())
…
Use Before or After the Activation Function
The BatchNormalization layer can be used to standardize inputs before or after the activation function of the previous layer.
The original paper that introduced the method suggests adding batch normalization before the activation function of the previous layer, for example:
…
model = Sequential()
model.add(Dense(32))
model.add(BatchNormalization())
model.add(Activation('relu'))
…
Some reported experiments suggest better performance when adding the batch normalization layer after the activation function of the previous layer; for example:
…
model = Sequential()
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
…
If time and resources permit, it may be worth testing both approaches on your model and using the approach that results in the best performance.
Let’s take a look at how batch normalization can be used with some common network types.
MLP Batch Normalization
The example below adds batch normalization after the activation function between Dense hidden layers.
# example of batch normalization for an mlp
from keras.layers import Dense
from keras.layers import BatchNormalization
…
model.add(Dense(32, activation='relu'))
model.add(BatchNormalization())
model.add(Dense(1))
…
CNN Batch Normalization
The example below adds batch normalization after the activation function between a convolutional layer and a max pooling layer.
# example of batch normalization for a cnn
from keras.layers import Dense
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import BatchNormalization
…
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D())
model.add(Dense(1))
…
RNN Batch Normalization
The example below adds batch normalization after the activation function between an LSTM layer and a Dense hidden layer.
# example of batch normalization for an lstm
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import BatchNormalization
…
model.add(LSTM(32))
model.add(BatchNormalization())
model.add(Dense(1))
…
BatchNormalization Case Study
In this section, we will demonstrate how to use batch normalization to accelerate the training of an MLP on a simple binary classification problem.
This example provides a template for applying batch normalization to your own neural network for classification and regression problems.
Binary Classification Problem
We will use a standard binary classification problem that defines two two-dimensional concentric circles of observations, one circle for each class.
Each observation has two input variables with the same scale and a class output value of either 0 or 1. This dataset is called the “circles” dataset because of the shape of the observations in each class when plotted.
We can use the make_circles() function to generate observations from this problem. We will add noise to the data and seed the random number generator so that the same samples are generated each time the code is run.
# generate 2d classification dataset
X, y = make_circles(n_samples=1000, noise=0.1, random_state=1)
We can plot the dataset where the two variables are taken as x and y coordinates on a graph and the class value is taken as the color of the observation.
The complete example of generating the dataset and plotting it is listed below.
# scatter plot of the circles dataset with points colored by class
from sklearn.datasets import make_circles
from numpy import where
from matplotlib import pyplot
# generate circles
X, y = make_circles(n_samples=1000, noise=0.1, random_state=1)
# select indices of points with each class label
for i in range(2):
    samples_ix = where(y == i)
    pyplot.scatter(X[samples_ix, 0], X[samples_ix, 1], label=str(i))
pyplot.legend()
pyplot.show()
Running the example creates a scatter plot showing the concentric circles shape of the observations in each class.
We can see the noise in the dispersal of the points making the circles less obvious.
This is a good test problem because the classes cannot be separated by a line, i.e. they are not linearly separable, requiring a nonlinear method such as a neural network to address.
Multilayer Perceptron Model
We can develop a Multilayer Perceptron model, or MLP, as a baseline for this problem.
First, we will split the 1,000 generated samples into a train and test dataset, with 500 examples in each. This will provide a sufficiently large sample for the model to learn from and an equally sized (fair) evaluation of its performance.
# split into train and test
n_train = 500
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
We will define a simple MLP model. The network must have two inputs in the visible layer for the two variables in the dataset.
The model will have a single hidden layer with 50 nodes, chosen arbitrarily, and use the rectified linear activation function (ReLU) and the He random weight initialization method. The output layer will be a single node with the sigmoid activation function, capable of predicting a 0 for the outer circle and a 1 for the inner circle of the problem.
The model will be trained using stochastic gradient descent with a modest learning rate of 0.01 and a large momentum of 0.9, and the optimization will be directed using the binary cross entropy loss function.
# define model
model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='sigmoid'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
Once defined, the model can be fit on the training dataset.
We will use the holdout test dataset as a validation dataset and evaluate its performance at the end of each training epoch. The model will be fit for 100 epochs, chosen after a little trial and error.
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=100, verbose=0)
At the end of the run, the model is evaluated on the train and test dataset and the accuracy is reported.
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
Finally, line plots are created showing model accuracy on the train and test sets at the end of each training epoch, providing learning curves.
This plot of learning curves is useful as it gives an idea of how quickly and how well the model has learned the problem.
# plot history
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
Tying these elements together, the complete example is listed below.
# mlp for the two circles problem
from sklearn.datasets import make_circles
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from matplotlib import pyplot
# generate 2d classification dataset
X, y = make_circles(n_samples=1000, noise=0.1, random_state=1)
# split into train and test
n_train = 500
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
# define model
model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='sigmoid'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=100, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot history
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
Running the example fits the model and evaluates it on the train and test sets.
Your specific results may vary given the stochastic nature of the learning algorithm. Consider re-running the example a number of times.
In this case, we can see that the model achieved an accuracy of about 84% on the holdout dataset and achieved comparable performance on both the train and test sets, given the same size and similar composition of both datasets.
Train: 0.838, Test: 0.846
A graph is created showing line plots of the classification accuracy on the train (blue) and test (orange) datasets.
The plot shows comparable performance of the model on both datasets during the training process. We can see that performance leaps up over the first 30-to-40 epochs to above 80% accuracy, then is slowly refined.
This result, and specifically the dynamics of the model during training, provide a baseline that can be compared to the same model with the addition of batch normalization.
MLP With Batch Normalization
The model introduced in the previous section can be updated to add batch normalization.
The expectation is that the addition of batch normalization would accelerate the training process, offering similar or better classification accuracy of the model in fewer training epochs. Batch normalization is also reported as providing a modest form of regularization, meaning that it may also offer a small reduction in generalization error demonstrated by a small increase in classification accuracy on the holdout test dataset.
A new BatchNormalization layer can be added to the model after the hidden layer and before the output layer; specifically, after the activation function of the hidden layer.
# define model
model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
The complete example with this modification is listed below.
# mlp for the two circles problem with batchnorm after activation function
from sklearn.datasets import make_circles
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import BatchNormalization
from keras.optimizers import SGD
from matplotlib import pyplot
# generate 2d classification dataset
X, y = make_circles(n_samples=1000, noise=0.1, random_state=1)
# split into train and test
n_train = 500
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
# define model
model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(Dense(1, activation='sigmoid'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=100, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot history
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
Running the example first prints the classification accuracy of the model on the train and test dataset.
Your specific results may vary given the stochastic nature of the learning algorithm. Consider re-running the example a number of times.
In this case, we can see comparable performance of the model on both the train and test set of about 84% accuracy, very similar to what we saw in the previous section, if not a little bit better.
Train: 0.846, Test: 0.848
A graph of the learning curves is also created showing classification accuracy on both the train and test sets for each training epoch.
In this case, we can see that the model has learned the problem faster than the model in the previous section without batch normalization. Specifically, we can see that classification accuracy on the train and test datasets leaps above 80% within the first 20 epochs, as opposed to 30-to-40 epochs in the model without batch normalization.
The plot also shows the effect of batch normalization during training. We can see lower performance on the training dataset than on the test dataset: scores during training are lower than the performance of the final model at the end of the run. This is likely an effect of the statistics collected and updated over each mini-batch.
We can also try a variation of the model where batch normalization is applied prior to the activation function of the hidden layer, instead of after the activation function.
# define model
model = Sequential()
model.add(Dense(50, input_dim=2, kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(1, activation='sigmoid'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
The complete code listing with this change to the model is listed below.
# mlp for the two circles problem with batchnorm before activation function
from sklearn.datasets import make_circles
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Activation
from keras.layers import BatchNormalization
from keras.optimizers import SGD
from matplotlib import pyplot
# generate 2d classification dataset
X, y = make_circles(n_samples=1000, noise=0.1, random_state=1)
# split into train and test
n_train = 500
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
# define model
model = Sequential()
model.add(Dense(50, input_dim=2, kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(1, activation='sigmoid'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=100, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot history
pyplot.plot(history.history['accuracy'], label='train')
pyplot.plot(history.history['val_accuracy'], label='test')
pyplot.legend()
pyplot.show()
Running the example first prints the classification accuracy of the model on the train and test dataset.
Your specific results may vary given the stochastic nature of the learning algorithm. Consider re-running the example a number of times.
In this case, we can see comparable performance of the model on the train and test datasets, but slightly worse than the model without batch normalization.
Train: 0.826, Test: 0.830
The line plot of the learning curves on the train and test sets also tells a different story.
The plot shows the model learning perhaps at the same pace as the model without batch normalization, but the performance of the model on the training dataset is much worse, hovering around 70% to 75% accuracy, again likely an effect of the statistics collected and used over each mini-batch.
At least for this model configuration on this specific dataset, it appears that batch normalization is more effective after the rectified linear activation function.
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Without Beta and Gamma. Update the example to not use the beta and gamma parameters in the batch normalization layer and compare results.
- Without Momentum. Update the example to not use momentum in the batch normalization layer during training and compare results.
- Input Layer. Update the example to use batch normalization after the input to the model and compare results.
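As a starting point, the first two extensions map directly to layer arguments described earlier in the tutorial. The fragment below is a configuration sketch only, to be substituted into the case study model:

```python
from keras.layers import BatchNormalization

# without beta and gamma: disable the learned center and scale parameters
bn = BatchNormalization(center=False, scale=False)

# without momentum: use only the current mini-batch statistics
bn = BatchNormalization(momentum=0.0)
```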
If you explore any of these extensions, I’d love to know.
Summary
In this tutorial, you discovered how to use batch normalization to accelerate the training of deep learning neural networks in Python with Keras.
Specifically, you learned:
- How to create and configure a BatchNormalization layer using the Keras API.
- How to add the BatchNormalization layer to deep learning neural network models.
- How to update an MLP model to use batch normalization to accelerate training on a binary classification problem.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.