Last Updated on August 9, 2019
Calculating the length or magnitude of vectors is often required either directly as a regularization method in machine learning, or as part of broader vector or matrix operations.
In this tutorial, you will discover the different ways to calculate vector lengths or magnitudes, called the vector norm.
After completing this tutorial, you will know:
- The L1 norm that is calculated as the sum of the absolute values of the vector.
- The L2 norm that is calculated as the square root of the sum of the squared vector values.
- The max norm that is calculated as the maximum absolute value of the vector.
Discover vectors, matrices, tensors, matrix types, matrix factorization, PCA, SVD and much more in my new book, with 19 step-by-step tutorials and full source code.
Let’s get started.
- Update Mar/2018: Fixed typo in max norm equation.
- Update Sept/2018: Fixed typo related to the size of the vectors defined.
Tutorial Overview
This tutorial is divided into 4 parts; they are:
- Vector Norm
- Vector L1 Norm
- Vector L2 Norm
- Vector Max Norm
Vector Norm
Calculating the size or length of a vector is often required either directly or as part of a broader vector or vector-matrix operation.
The length of the vector is referred to as the vector norm or the vector’s magnitude.
The length of a vector is a nonnegative number that describes the extent of the vector in space, and is sometimes referred to as the vector’s magnitude or the norm.
— Page 112, No Bullshit Guide To Linear Algebra, 2017
The length of a vector is always a positive number, except for the vector of all zero values, which has a length of zero. It is calculated using some measure that summarizes the distance of the vector from the origin of the vector space. For example, the origin of a vector space for a vector with 3 elements is (0, 0, 0).
Notations are used to represent the vector norm in broader calculations and the type of vector norm calculation almost always has its own unique notation.
We will take a look at a few common vector norm calculations used in machine learning.
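Before looking at the specific norms, a small example makes the "distance from the origin" interpretation concrete: the vector (3, 4) lies 5 units from the origin (0, 0) by the Pythagorean theorem, and NumPy's norm() function (used throughout the examples below) returns exactly that.

```python
# A norm summarizes the distance of a vector from the origin.
# The vector (3, 4) is 5 units from (0, 0): sqrt(3^2 + 4^2) = 5.
from numpy import array
from numpy.linalg import norm

v = array([3, 4])
print(norm(v))  # 5.0
```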
Vector L1 Norm
The length of a vector can be calculated using the L1 norm, where the 1 is a superscript of the L, e.g. L^1.
The notation for the L1 norm of a vector is ||v||1, where 1 is a subscript.
The L1 norm is calculated as the sum of the absolute vector values, where the absolute value of a scalar uses the notation |a1|. In effect, the norm is a calculation of the Manhattan distance from the origin of the vector space, which is why this length is sometimes called the taxicab norm or the Manhattan norm.
||v||1 = |a1| + |a2| + |a3|
The L1 norm of a vector can be calculated in NumPy using the norm() function with a parameter to specify the norm order, in this case 1.
# l1 norm of a vector
from numpy import array
from numpy.linalg import norm
# define a vector
a = array([1, 2, 3])
print(a)
# calculate the L1 norm: |1| + |2| + |3| = 6.0
l1 = norm(a, 1)
print(l1)
First, a 3-element vector is defined, then the L1 norm of the vector is calculated.
Running the example first prints the defined vector and then the vector’s L1 norm.
The L1 norm is often used when fitting machine learning algorithms as a regularization method, e.g. a method to keep the coefficients of the model small, and in turn, the model less complex.
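Since regularization simply adds the norm of the model coefficients to the loss being minimized, the idea can be sketched directly with norm(). This is an illustrative sketch only; the data X and y, the coefficients coef, and the strength lam are made-up values, not part of the tutorial.

```python
# Sketch: L1 (lasso-style) regularization adds the L1 norm of the
# coefficients, scaled by a strength lam, to the training loss.
# All values here (X, y, coef, lam) are illustrative.
from numpy import array
from numpy.linalg import norm

X = array([[1.0, 2.0], [3.0, 4.0]])  # made-up inputs
y = array([1.0, 2.0])                # made-up targets
coef = array([0.5, -0.25])           # candidate model coefficients
lam = 0.1                            # regularization strength

squared_error = ((X.dot(coef) - y) ** 2).sum()
loss = squared_error + lam * norm(coef, 1)
print(loss)
```

A larger lam pushes the fit toward smaller (and, with L1, often exactly zero) coefficients.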
Vector L2 Norm
The length of a vector can be calculated using the L2 norm, where the 2 is a superscript of the L, e.g. L^2.
The notation for the L2 norm of a vector is ||v||2 where 2 is a subscript.
The L2 norm calculates the distance of the vector coordinate from the origin of the vector space. As such, it is also known as the Euclidean norm as it is calculated as the Euclidean distance from the origin. The result is a positive distance value.
The L2 norm is calculated as the square root of the sum of the squared vector values.
||v||2 = sqrt(a1^2 + a2^2 + a3^2)
The L2 norm of a vector can be calculated in NumPy using the norm() function with default parameters.
# l2 norm of a vector
from numpy import array
from numpy.linalg import norm
# define a vector
a = array([1, 2, 3])
print(a)
# calculate the L2 norm: sqrt(1^2 + 2^2 + 3^2) = sqrt(14) ~= 3.742
l2 = norm(a)
print(l2)
First, a 3-element vector is defined, then the L2 norm of the vector is calculated.
Running the example first prints the defined vector and then the vector’s L2 norm.
Like the L1 norm, the L2 norm is often used when fitting machine learning algorithms as a regularization method, e.g. a method to keep the coefficients of the model small and, in turn, the model less complex.
By far, the L2 norm is more commonly used than other vector norms in machine learning.
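As a sketch of L2 regularization (as in ridge regression), the penalty is commonly the squared L2 norm of the coefficients added to the loss. As before, X, y, coef, and lam are illustrative values, not code from the tutorial.

```python
# Sketch: L2 (ridge-style) regularization adds the squared L2 norm
# of the coefficients, scaled by a strength lam, to the training loss.
# All values here (X, y, coef, lam) are illustrative.
from numpy import array
from numpy.linalg import norm

X = array([[1.0, 2.0], [3.0, 4.0]])  # made-up inputs
y = array([1.0, 2.0])                # made-up targets
coef = array([0.5, -0.25])           # candidate model coefficients
lam = 0.1                            # regularization strength

squared_error = ((X.dot(coef) - y) ** 2).sum()
loss = squared_error + lam * norm(coef) ** 2
print(loss)
```

Unlike the L1 penalty, the squared L2 penalty shrinks coefficients toward zero without typically making them exactly zero.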
Vector Max Norm
The length of a vector can be calculated using the maximum norm, also called max norm.
The max norm of a vector is referred to as L^inf, where inf is a superscript and can be represented with the infinity symbol. The notation for the max norm is ||x||inf, where inf is a subscript.
The max norm is calculated as the maximum of the absolute vector values, hence the name.
||v||inf = max(|a1|, |a2|, |a3|)
The max norm of a vector can be calculated in NumPy using the norm() function with the order parameter set to inf.
# max norm of a vector
from numpy import inf
from numpy import array
from numpy.linalg import norm
# define a vector
a = array([1, 2, 3])
print(a)
# calculate the max norm: max(|1|, |2|, |3|) = 3.0
maxnorm = norm(a, inf)
print(maxnorm)
First, a 3-element vector is defined, then the max norm of the vector is calculated.
Running the example first prints the defined vector and then the vector’s max norm.
Max norm is also used as a regularization in machine learning, such as on neural network weights, called max norm regularization.
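The constraint can be sketched as a simple rescaling rule: if a weight vector's norm exceeds a chosen limit c, scale the vector back so its norm equals c. The function name clip_norm, the use of the L2 norm of the weight vector, and the values below are assumptions for illustration, not code from a specific library.

```python
# Sketch: a max-norm weight constraint, as used when training
# neural networks. Weights whose norm exceeds the limit c are
# rescaled so their norm equals c; smaller weights are unchanged.
# clip_norm, c, and w are illustrative names/values.
from numpy import array
from numpy.linalg import norm

def clip_norm(w, c):
    n = norm(w)
    return w if n <= c else w * (c / n)

w = array([3.0, 4.0])      # norm is 5.0
print(clip_norm(w, 2.0))   # rescaled so its norm is 2.0
```

Applied after each weight update, this keeps any single weight vector from growing without bound.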
Extensions
This section lists some ideas for extending the tutorial that you may wish to explore.
- Create five examples of each operation using your own data.
- Implement each norm manually for vectors defined as lists of numbers.
- Search machine learning papers and find one example of each norm being used.
If you explore any of these extensions, I’d love to know.
Summary
In this tutorial, you discovered the different ways to calculate vector lengths or magnitudes, called the vector norm.
Specifically, you learned:
- The L1 norm that is calculated as the sum of the absolute values of the vector.
- The L2 norm that is calculated as the square root of the sum of the squared vector values.
- The max norm that is calculated as the maximum absolute value of the vector.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.