
Non-Linear Regression in R with Decision Trees

Last Updated on August 22, 2019

In this post, you will discover 8 recipes for non-linear regression with decision trees in R.

Each example in this post uses the longley dataset provided in the datasets package that comes with R.

The longley dataset describes 7 economic variables observed from 1947 to 1962, used to predict the number of people employed each year.
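
To get a quick feel for the data, you can load and inspect it before modeling. The following is a minimal sketch using only base R functions:

# load the dataset from the built-in datasets package
data(longley)
# display the structure: 16 observations of 7 numeric variables
str(longley)
# display the first few rows
head(longley)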

Discover how to prepare data, fit machine learning models and evaluate their predictions in R with my new book, including 14 step-by-step tutorials, 3 projects, and full source code.

Let’s get started.

Decision Tree
Photo by Katie Walker, some rights reserved

Classification and Regression Trees

Classification and Regression Trees (CART) split the data on attribute values chosen to minimize a loss function, such as the sum of squared errors.

The following recipe demonstrates the recursive partitioning decision tree method on the longley dataset.

# load the package
library(rpart)
# load data
data(longley)
# fit model
fit <- rpart(Employed~., data=longley, control=rpart.control(minsplit=5))
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

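Beyond summary(), the rpart package provides helpers for inspecting the fitted tree. As a minimal sketch, reusing the fit object from the recipe above:

# display the complexity parameter table used when pruning
printcp(fit)
# draw the tree and label the splits and leaves
plot(fit, uniform=TRUE, margin=0.1)
text(fit)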

Learn more about the rpart function and the rpart package.

Conditional Decision Trees

Conditional Decision Trees are created using statistical significance tests to select split points on attributes, rather than by minimizing a loss function.

The following recipe demonstrates the conditional inference tree method on the longley dataset.

# load the package
library(party)
# load data
data(longley)
# fit model
fit <- ctree(Employed~., data=longley, controls=ctree_control(minsplit=2,minbucket=2,testtype="Univariate"))
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

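Objects returned by ctree() can also be printed and plotted directly; plot() draws the tree with the split variables at the inner nodes. A minimal sketch, reusing the fit object from the recipe above:

# print a text representation of the tree
print(fit)
# draw the tree with split variables at the inner nodes
plot(fit)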

Learn more about the ctree function and the party package.


Model Trees

Model Trees create a decision tree and use a linear model at each leaf node to make a prediction, rather than using an average value.

The following recipe demonstrates the M5P Model Tree method on the longley dataset.

# load the package
library(RWeka)
# load data
data(longley)
# fit model
fit <- M5P(Employed~., data=longley)
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

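M5P() also accepts Weka's command-line options via Weka_control(); for example, the M option sets the minimum number of instances allowed at a leaf. A minimal sketch (the option value here is illustrative, not tuned):

# list the options that M5P accepts
WOW("M5P")
# fit a model tree requiring at least 4 instances per leaf
fit <- M5P(Employed~., data=longley, control=Weka_control(M=4))
print(fit)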

Learn more about the M5P function and the RWeka package.

Rule System

Rule Systems can be created by extracting and simplifying the rules from a decision tree.

The following recipe demonstrates the M5Rules Rule System on the longley dataset.

# load the package
library(RWeka)
# load data
data(longley)
# fit model
fit <- M5Rules(Employed~., data=longley)
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

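Note that the recipes in this post compute the MSE on the training data, which underestimates the error on new data. For RWeka models such as M5Rules, a cross-validated estimate is available via evaluate_Weka_classifier(). A minimal sketch, reusing the fit object from the recipe above (with only 16 rows in longley, the folds are very small):

# estimate performance with 10-fold cross-validation
# for regression models this reports correlation, MAE and RMSE
evaluate_Weka_classifier(fit, numFolds=10)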

Learn more about the M5Rules function and the RWeka package.

Bagging CART

Bootstrap Aggregation (Bagging) is an ensemble method that creates multiple models of the same type from different sub-samples of the same dataset. The predictions from the separate models are combined to provide a superior result. This approach has proven particularly effective for high-variance methods such as decision trees.

The following recipe demonstrates bagging applied to the recursive partitioning decision tree.

# load the package
library(ipred)
# load data
data(longley)
# fit model
fit <- bagging(Employed~., data=longley, control=rpart.control(minsplit=5))
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

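The bagging() function also lets you set the number of bootstrap replicates with nbagg and request an out-of-bag error estimate with coob=TRUE, which is less optimistic than the training MSE above. A minimal sketch (the nbagg value is illustrative):

# fit 50 bagged trees and compute the out-of-bag error estimate
fit <- bagging(Employed~., data=longley, nbagg=50, coob=TRUE)
# display the out-of-bag estimate of root mean squared error
print(fit$err)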

Learn more about the bagging function and the ipred package.

Random Forest

Random Forest is a variation on bagging of decision trees that reduces the attributes available for making a split at each decision point to a random sub-sample. This increases the variance of the individual trees, so more trees are required to stabilize the ensemble.

# load the package
library(randomForest)
# load data
data(longley)
# fit model
fit <- randomForest(Employed~., data=longley)
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

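randomForest() exposes the two main tuning parameters directly: ntree, the number of trees, and mtry, the number of attributes sampled as split candidates at each node. It can also estimate variable importance. A minimal sketch (the parameter values are illustrative):

# fit 1000 trees, sampling 2 of the 6 attributes at each split
fit <- randomForest(Employed~., data=longley, ntree=1000, mtry=2, importance=TRUE)
# display and plot the variable importance scores
importance(fit)
varImpPlot(fit)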

Learn more about the randomForest function and the randomForest package.

Gradient Boosted Machine

Boosting is an ensemble method, originally developed for classification, that reduces bias by sequentially adding models that learn from the errors of the existing models. It has been generalized and adapted in the form of Gradient Boosted Machines (GBM) for use with CART decision trees for classification and regression.

# load the package
library(gbm)
# load data
data(longley)
# fit model
fit <- gbm(Employed~., data=longley, distribution="gaussian")
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley, n.trees=100)
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

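In practice, a GBM is usually fit with many small trees and a low learning rate, with the best number of iterations chosen by cross-validation via gbm.perf(). A minimal sketch (the parameter values are illustrative, and n.minobsinnode is lowered because longley has only 16 rows):

# fit 5000 trees with a small learning rate and 5-fold cross-validation
fit <- gbm(Employed~., data=longley, distribution="gaussian",
           n.trees=5000, shrinkage=0.01, cv.folds=5, n.minobsinnode=3)
# select the iteration that minimizes the cross-validated error
best <- gbm.perf(fit, method="cv")
# predict using only the selected number of trees
predictions <- predict(fit, longley, n.trees=best)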

Learn more about the gbm function and the gbm package.

Cubist

Cubist decision trees are another ensemble method. They are constructed like model trees but collapsed into rule-like models, with an optional boosting-like procedure called committees that fits several models in sequence.

# load the package
library(Cubist)
# load data
data(longley)
# fit model
fit <- cubist(longley[,1:6], longley[,7])
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)

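Cubist has two main tuning parameters of its own: the number of committees used when fitting, and the number of nearest training instances used to adjust predictions at prediction time. A minimal sketch (the values are illustrative):

# fit a committee of 5 rule-based models
fit <- cubist(longley[,1:6], longley[,7], committees=5)
# adjust each prediction using the 3 nearest training instances
predictions <- predict(fit, longley[,1:6], neighbors=3)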

Learn more about the cubist function and the Cubist package.

Summary

In this post, you discovered 8 recipes for decision trees for non-linear regression in R. Each recipe is ready for you to copy-and-paste into your own workspace and modify for your needs.

For more information, see Chapter 8 of Applied Predictive Modeling by Kuhn and Johnson, which provides an excellent introduction to non-linear regression with decision trees in R for beginners.

