
How to Scale Machine Learning Data From Scratch With Python

Last Updated on December 11, 2019

Many machine learning algorithms expect data to be scaled consistently.

There are two popular methods that you should consider when scaling your data for machine learning.

In this tutorial, you will discover how you can rescale your data for machine learning. After reading this tutorial you will know:

  • How to normalize your data from scratch.
  • How to standardize your data from scratch.
  • When to normalize as opposed to standardize data.

Discover how to code ML algorithms from scratch including kNN, decision trees, neural nets, ensembles and much more in my new book, with full Python code and no fancy libraries.

Let’s get started.

  • Update Feb/2018: Fixed minor typo in min/max code example.
  • Update Mar/2018: Added alternate link to download the dataset as the original appears to have been taken down.
  • Update Aug/2018: Tested and updated to work with Python 3.6.

How To Prepare Machine Learning Data From Scratch With Python
Photo by Ondra Chotovinsky, some rights reserved.

Description

Many machine learning algorithms expect the scale of the input and even the output data to be equivalent.

It can help in methods that weight inputs in order to make a prediction, such as in linear regression and logistic regression.

It is practically required in methods that combine weighted inputs in complex ways such as in artificial neural networks and deep learning.

In this tutorial, we are going to practice rescaling one standard machine learning dataset in CSV format.

Specifically, we will use the Pima Indians diabetes dataset, which contains 768 rows and 9 columns. All of the values in the file are numeric, specifically floating point values. We will learn how to load the file first, then later how to convert the loaded strings to numeric values.

Tutorial

This tutorial is divided into 3 parts:

  1. Normalize Data.
  2. Standardize Data.
  3. When to Normalize and Standardize.

These steps will provide the foundations you need to handle scaling your own data.

1. Normalize Data

Normalization can refer to different techniques depending on context.

Here, we use normalization to refer to rescaling an input variable to the range between 0 and 1.

Normalization requires that you know the minimum and maximum values for each attribute.

This can be estimated from training data or specified directly if you have deep knowledge of the problem domain.

You can easily estimate the minimum and maximum values for each attribute in a dataset by enumerating through the values.

The snippet of code below defines the dataset_minmax() function that calculates the min and max value for each attribute in a dataset, then returns an array of these minimum and maximum values.

# Find the min and max values for each column
def dataset_minmax(dataset):
    minmax = list()
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        value_min = min(col_values)
        value_max = max(col_values)
        minmax.append([value_min, value_max])
    return minmax


We can contrive a small dataset for testing as follows:

x1 x2
50 30
20 90

With this contrived dataset, we can test our function for calculating the min and max for each column.

# Find the min and max values for each column
def dataset_minmax(dataset):
    minmax = list()
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        value_min = min(col_values)
        value_max = max(col_values)
        minmax.append([value_min, value_max])
    return minmax

# Contrive small dataset
dataset = [[50, 30], [20, 90]]
print(dataset)
# Calculate min and max for each column
minmax = dataset_minmax(dataset)
print(minmax)


Running the example produces the following output.

First, the dataset is printed in a list of lists format, then the min and max values for each column are printed as a [min, max] pair per column.

For example:

[[50, 30], [20, 90]]
[[20, 50], [30, 90]]


Once we have estimates of the minimum and maximum values for each column, we can normalize the raw data to the range 0 to 1.

The calculation to normalize a single value for a column is:

scaled_value = (value - min) / (max - min)
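
For example, the first column of the contrived dataset above has a minimum of 20 and a maximum of 50, so the value 50 scales to (50 - 20) / (50 - 20) = 1.0 and the value 20 scales to (20 - 20) / (50 - 20) = 0.0.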

Below is an implementation of this in a function called normalize_dataset() that normalizes values in each column of a provided dataset.

# Rescale dataset columns to the range 0-1
def normalize_dataset(dataset, minmax):
    for row in dataset:
        for i in range(len(row)):
            row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0])


We can tie this function together with the dataset_minmax() function and normalize the contrived dataset.

# Find the min and max values for each column
def dataset_minmax(dataset):
    minmax = list()
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        value_min = min(col_values)
        value_max = max(col_values)
        minmax.append([value_min, value_max])
    return minmax

# Rescale dataset columns to the range 0-1
def normalize_dataset(dataset, minmax):
    for row in dataset:
        for i in range(len(row)):
            row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0])

# Contrive small dataset
dataset = [[50, 30], [20, 90]]
print(dataset)
# Calculate min and max for each column
minmax = dataset_minmax(dataset)
print(minmax)
# Normalize columns
normalize_dataset(dataset, minmax)
print(dataset)


Running this example prints the output below, including the normalized dataset.

[[50, 30], [20, 90]]
[[20, 50], [30, 90]]
[[1, 0], [0, 1]]


We can combine this code with code for loading a CSV dataset to load and normalize the Pima Indians diabetes dataset.

Download the Pima Indians dataset and place it in your current directory with the name pima-indians-diabetes.csv.

Open the file and delete any empty lines at the bottom.
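
If you prefer not to edit the file by hand, one option is a small variation on the load_csv() function used below (not part of the original recipe) that skips empty rows while loading, since csv.reader() yields an empty list for a blank line:

from csv import reader

# Load a CSV file, skipping any blank lines (e.g. empty rows at the end of the file)
def load_csv(filename):
    with open(filename, 'r') as file:
        return [row for row in reader(file) if row]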

The example first loads the dataset and converts the values for each column from string to floating point values. The minimum and maximum values for each column are estimated from the dataset, and finally, the values in the dataset are normalized.

from csv import reader

# Load a CSV file
def load_csv(filename):
    file = open(filename, 'r')
    lines = reader(file)
    dataset = list(lines)
    return dataset

# Convert string column to float
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())

# Find the min and max values for each column
def dataset_minmax(dataset):
    minmax = list()
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        value_min = min(col_values)
        value_max = max(col_values)
        minmax.append([value_min, value_max])
    return minmax

# Rescale dataset columns to the range 0-1
def normalize_dataset(dataset, minmax):
    for row in dataset:
        for i in range(len(row)):
            row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0])

# Load pima-indians-diabetes dataset
filename = 'pima-indians-diabetes.csv'
dataset = load_csv(filename)
print('Loaded data file {0} with {1} rows and {2} columns'.format(filename, len(dataset), len(dataset[0])))
# convert string columns to float
for i in range(len(dataset[0])):
    str_column_to_float(dataset, i)
print(dataset[0])
# Calculate min and max for each column
minmax = dataset_minmax(dataset)
# Normalize columns
normalize_dataset(dataset, minmax)
print(dataset[0])


Running the example produces the output below.

The first record from the dataset is printed before and after normalization, showing the effect of the scaling.

Loaded data file pima-indians-diabetes.csv with 768 rows and 9 columns
[6.0, 148.0, 72.0, 35.0, 0.0, 33.6, 0.627, 50.0, 1.0]
[0.35294117647058826, 0.7437185929648241, 0.5901639344262295, 0.35353535353535354, 0.0, 0.5007451564828614, 0.23441502988898377, 0.48333333333333334, 1.0]


2. Standardize Data

Standardization is a rescaling technique that centers the distribution of the data on the value 0 and scales the standard deviation to the value 1.

Together, the mean and the standard deviation can be used to summarize a normal distribution, also called the Gaussian distribution or bell curve.

It requires that the mean and standard deviation of the values for each column be known prior to scaling. As with normalizing above, we can estimate these values from training data, or use domain knowledge to specify their values.

Let’s start with creating functions to estimate the mean and standard deviation statistics for each column from a dataset.

The mean describes the middle or central tendency for a collection of numbers. The mean for a column is calculated as the sum of all values for a column divided by the total number of values.

mean = sum(values) / total_values

The function below named column_means() calculates the mean values for each column in the dataset.

# calculate column means
def column_means(dataset):
    means = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        means[i] = sum(col_values) / float(len(dataset))
    return means


The standard deviation describes the average spread of values from the mean. It can be calculated as the square root of the sum of the squared differences between each value and the mean, divided by the number of values minus 1.

standard deviation = sqrt( sum( (value_i - mean)^2 ) / (total_values - 1) )

The function below named column_stdevs() calculates the standard deviation of values for each column in the dataset and assumes the means have already been calculated.

from math import sqrt

# calculate column standard deviations
def column_stdevs(dataset, means):
    stdevs = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        variance = [pow(row[i]-means[i], 2) for row in dataset]
        stdevs[i] = sum(variance)
    stdevs = [sqrt(x/(float(len(dataset)-1))) for x in stdevs]
    return stdevs


Again, we can contrive a small dataset to demonstrate the estimate of the mean and standard deviation from a dataset:

x1 x2
50 30
20 90
30 50

Using an Excel spreadsheet, we can estimate the mean and standard deviation for each column as follows:

x1 x2
mean 33.3 56.6
stdev 15.27 30.55
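
As a check, for the first column x1 (values 50, 20 and 30, with a mean of about 33.33) the standard deviation works out as:

standard deviation = sqrt( ((50 - 33.33)^2 + (20 - 33.33)^2 + (30 - 33.33)^2) / (3 - 1) ) ≈ 15.28

which matches the spreadsheet estimate above.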


Using the contrived dataset, we can estimate the summary statistics.

from math import sqrt

# calculate column means
def column_means(dataset):
    means = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        means[i] = sum(col_values) / float(len(dataset))
    return means

# calculate column standard deviations
def column_stdevs(dataset, means):
    stdevs = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        variance = [pow(row[i]-means[i], 2) for row in dataset]
        stdevs[i] = sum(variance)
    stdevs = [sqrt(x/(float(len(dataset)-1))) for x in stdevs]
    return stdevs

# Contrive small dataset
dataset = [[50, 30], [20, 90], [30, 50]]
print(dataset)
# Estimate mean and standard deviation
means = column_means(dataset)
stdevs = column_stdevs(dataset, means)
print(means)
print(stdevs)


Executing the example provides the following output, matching the numbers calculated in the spreadsheet.

[[50, 30], [20, 90], [30, 50]]
[33.333333333333336, 56.666666666666664]
[15.275252316519467, 30.550504633038933]


Once the summary statistics are calculated, we can easily standardize the values in each column.

The calculation to standardize a given value is as follows:

standardized_value = (value - mean) / stdev
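
For example, the first value of column x1 in the contrived dataset is 50; with a mean of about 33.33 and a standard deviation of about 15.28, it standardizes to:

standardized_value = (50 - 33.33) / 15.28 ≈ 1.09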

Below is a function named standardize_dataset() that implements this equation.

# standardize dataset
def standardize_dataset(dataset, means, stdevs):
    for row in dataset:
        for i in range(len(row)):
            row[i] = (row[i] - means[i]) / stdevs[i]


Combining this with the functions to estimate the mean and standard deviation summary statistics, we can standardize our contrived dataset.

from math import sqrt

# calculate column means
def column_means(dataset):
    means = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        means[i] = sum(col_values) / float(len(dataset))
    return means

# calculate column standard deviations
def column_stdevs(dataset, means):
    stdevs = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        variance = [pow(row[i]-means[i], 2) for row in dataset]
        stdevs[i] = sum(variance)
    stdevs = [sqrt(x/(float(len(dataset)-1))) for x in stdevs]
    return stdevs

# standardize dataset
def standardize_dataset(dataset, means, stdevs):
    for row in dataset:
        for i in range(len(row)):
            row[i] = (row[i] - means[i]) / stdevs[i]

# Contrive small dataset
dataset = [[50, 30], [20, 90], [30, 50]]
print(dataset)
# Estimate mean and standard deviation
means = column_means(dataset)
stdevs = column_stdevs(dataset, means)
print(means)
print(stdevs)
# standardize dataset
standardize_dataset(dataset, means, stdevs)
print(dataset)


Executing this example produces the following output, showing standardized values for the contrived dataset.

[[50, 30], [20, 90], [30, 50]]
[33.333333333333336, 56.666666666666664]
[15.275252316519467, 30.550504633038933]
[[1.0910894511799618, -0.8728715609439694], [-0.8728715609439697, 1.091089451179962], [-0.21821789023599253, -0.2182178902359923]]


Again, we can demonstrate the standardization of a machine learning dataset.

The example below demonstrates how to load and standardize the Pima Indians diabetes dataset, assumed to be in the current working directory as in the previous normalization example.

from csv import reader
from math import sqrt

# Load a CSV file
def load_csv(filename):
    file = open(filename, 'r')
    lines = reader(file)
    dataset = list(lines)
    return dataset

# Convert string column to float
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())

# calculate column means
def column_means(dataset):
    means = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        col_values = [row[i] for row in dataset]
        means[i] = sum(col_values) / float(len(dataset))
    return means

# calculate column standard deviations
def column_stdevs(dataset, means):
    stdevs = [0 for i in range(len(dataset[0]))]
    for i in range(len(dataset[0])):
        variance = [pow(row[i]-means[i], 2) for row in dataset]
        stdevs[i] = sum(variance)
    stdevs = [sqrt(x/(float(len(dataset)-1))) for x in stdevs]
    return stdevs

# standardize dataset
def standardize_dataset(dataset, means, stdevs):
    for row in dataset:
        for i in range(len(row)):
            row[i] = (row[i] - means[i]) / stdevs[i]

# Load pima-indians-diabetes dataset
filename = 'pima-indians-diabetes.csv'
dataset = load_csv(filename)
print('Loaded data file {0} with {1} rows and {2} columns'.format(filename, len(dataset), len(dataset[0])))
# convert string columns to float
for i in range(len(dataset[0])):
    str_column_to_float(dataset, i)
print(dataset[0])
# Estimate mean and standard deviation
means = column_means(dataset)
stdevs = column_stdevs(dataset, means)
# standardize dataset
standardize_dataset(dataset, means, stdevs)
print(dataset[0])


Running the example prints the first row of the dataset, first in a raw format as loaded, and then standardized, which allows us to see the difference for comparison.

Loaded data file pima-indians-diabetes.csv with 768 rows and 9 columns
[6.0, 148.0, 72.0, 35.0, 0.0, 33.6, 0.627, 50.0, 1.0]
[0.6395304921176576, 0.8477713205896718, 0.14954329852954296, 0.9066790623472505, -0.692439324724129, 0.2038799072674717, 0.468186870229798, 1.4250667195933604, 1.3650063669598067]


3. When to Normalize and Standardize

Standardization is a scaling technique that assumes your data conforms to a normal distribution.

If a given data attribute is normal or close to normal, this is probably the scaling method to use.

It is good practice to record the summary statistics used in the standardization process so that you can apply them when standardizing new data that you want to use with your model in the future.

Normalization is a scaling technique that does not assume any specific distribution.

If your data is not normally distributed, consider normalizing it prior to applying your machine learning algorithm.

It is good practice to record the minimum and maximum values for each column used in the normalization process, again, in case you need to normalize new data in the future to be used with your model.
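
As a minimal sketch of this practice, reusing the functions defined earlier in this tutorial (train and new_rows below are placeholder lists of numeric rows that share the same column order), you would estimate the statistics on the training data only and then apply them to any new data later:

# Reuses dataset_minmax(), normalize_dataset(), column_means(),
# column_stdevs() and standardize_dataset() from the tutorial above.

# Placeholder training data and new data (same columns, same order)
train = [[50, 30], [20, 90], [30, 50]]
new_rows = [[40, 60]]

# Estimate the scaling statistics from the training data only
minmax = dataset_minmax(train)
means = column_means(train)
stdevs = column_stdevs(train, means)

# Later, apply the same recorded statistics to new data, using whichever
# rescaling the model was trained with
normalize_dataset(new_rows, minmax)
# or: standardize_dataset(new_rows, means, stdevs)
print(new_rows)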

Extensions

There are many other data transforms you could apply.

The idea of data transforms is to best expose the structure of your problem in your data to the learning algorithm.

It may not be clear what transforms are required upfront. A combination of trial and error and exploratory data analysis (plots and stats) can help tease out what may work.

Below are some additional transforms you may want to consider researching and implementing:

  • Normalization that permits a configurable range, such as -1 to 1 and more (a sketch of this appears after the list).
  • Standardization that permits a configurable spread, such as 1, 2 or more standard deviations from the mean.
  • Exponential transforms such as logarithm, square root and exponents.
  • Power transforms, such as Box-Cox, for removing skew and making data more Gaussian.
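
For example, the first extension might be sketched as a small generalization of normalize_dataset() from earlier; the function name and the new_min/new_max parameters below are illustrative, not part of the original tutorial:

# Rescale dataset columns to a configurable range, e.g. -1 to 1
# (uses dataset_minmax() from earlier in the tutorial)
def normalize_dataset_range(dataset, minmax, new_min=-1.0, new_max=1.0):
    for row in dataset:
        for i in range(len(row)):
            # scale to 0-1 first, then stretch and shift to the requested range
            scaled = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0])
            row[i] = scaled * (new_max - new_min) + new_min

# Contrive small dataset
dataset = [[50, 30], [20, 90]]
normalize_dataset_range(dataset, dataset_minmax(dataset))
print(dataset)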

Review

In this tutorial, you discovered how to rescale your data for machine learning from scratch.

Specifically, you learned:

  • How to normalize data from scratch.
  • How to standardize data from scratch.
  • When to use normalization or standardization on your data.

Do you have any questions about scaling your data or about this post?
Ask your question in the comments below and I will do my best to answer.
