Last Updated on August 22, 2019

In this post you will discover 4 recipes for non-linear regression in R.

There are many advanced methods you can use for non-linear regression, and these recipes are but a sample of the methods you could use.

Discover how to prepare data, fit machine learning models and evaluate their predictions in R with my new book, including 14 step-by-step tutorials, 3 projects, and full source code.

Let’s get started.

Each example in this post uses the longley dataset provided in the datasets package that comes with R. The longley dataset describes 7 economic variables observed from 1947 to 1962 used to predict the number of people employed yearly.

## Multivariate Adaptive Regression Splines

Multivariate Adaptive Regression Splines (MARS) is a non-parametric regression method that models multiple nonlinearities in data using hinge functions (functions with a kink in them).
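To make the "kink" concrete, here is a minimal sketch of a hinge function in R (the knot value of 5 is an arbitrary choice for illustration; this is not part of the earth package):

```r
# A hinge function is zero on one side of a knot c and linear on the
# other. MARS builds its model from weighted sums of such functions.
hinge <- function(x, c) pmax(0, x - c)

x <- seq(0, 10, by = 1)
# hinge with a knot at 5: zero up to 5, then rising linearly
print(hinge(x, 5))
```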

```r
# load the package
library(earth)
# load data
data(longley)
# fit model
fit <- earth(Employed~., longley)
# summarize the fit
summary(fit)
# summarize the importance of input variables
evimp(fit)
# make predictions
predictions <- predict(fit, longley)
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)
```

Learn more about the **earth** function and the earth package.

## Support Vector Machine

Support Vector Machines (SVM) are a class of methods, developed originally for classification, that find the support vectors defining the boundary that best separates the classes. The adaptation of SVM to regression is called Support Vector Regression (SVR).
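A key idea behind SVR is the epsilon-insensitive loss: errors smaller than a tolerance epsilon cost nothing, so points inside a "tube" around the fit are ignored. A minimal sketch (the epsilon value is illustrative; ksvm handles this internally):

```r
# Epsilon-insensitive loss: zero inside the tube, linear outside it.
eps_loss <- function(y, yhat, epsilon = 0.1) pmax(0, abs(y - yhat) - epsilon)

# errors of 0, 0.05, and 1.0 against a tolerance of 0.1
print(eps_loss(c(1.0, 1.05, 2.0), c(1.0, 1.0, 1.0)))  # 0, 0, 0.9
```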

```r
# load the package
library(kernlab)
# load data
data(longley)
# fit model
fit <- ksvm(Employed~., longley)
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley)
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)
```

Learn more about the **ksvm** function and the kernlab package.

## k-Nearest Neighbor

The k-Nearest Neighbors (kNN) method does not fit a model up front; instead, it makes predictions on demand from the training examples closest to the query point. A similarity measure (such as Euclidean distance) is used to locate those close examples.
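To make the on-demand idea concrete, here is a hand-rolled sketch of a single kNN prediction on longley (the query point and k=3 are arbitrary choices for illustration; caret's knnreg does this for you, and in practice you would usually standardize the features first so one variable's scale does not dominate the distance):

```r
# Predict one point by hand: find the k closest rows by Euclidean
# distance and average their target values.
data(longley)
x <- as.matrix(longley[, 1:6])
y <- longley[, 7]
k <- 3
query <- x[1, ]  # use the first row as a worked example

# Euclidean distance from the query to every training row
d <- sqrt(rowSums(sweep(x, 2, query)^2))
nearest <- order(d)[1:k]
pred <- mean(y[nearest])
print(pred)
```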

```r
# load the package
library(caret)
# load data
data(longley)
# fit model
fit <- knnreg(longley[,1:6], longley[,7], k=3)
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, longley[,1:6])
# summarize accuracy
mse <- mean((longley$Employed - predictions)^2)
print(mse)
```

Learn more about the **knnreg** function and the caret package.

## Neural Network

A Neural Network (NN) is a graph of computational units, each of which receives inputs, transforms them, and passes the result on as output. The units are arranged in layers that connect the features of an input vector to the features of an output vector. Through training, such as with the back-propagation algorithm, a neural network can learn to model the underlying relationship in the data.
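To make "computational unit" concrete, here is a minimal sketch of one unit with a logistic activation, the activation nnet uses in its hidden layer (the weights and bias are arbitrary illustrative values, not anything a trained network produces):

```r
# One unit: a weighted sum of the inputs plus a bias, passed through
# the logistic sigmoid activation.
unit <- function(inputs, weights, bias) {
  z <- sum(inputs * weights) + bias
  1 / (1 + exp(-z))
}

# two inputs, illustrative weights and bias
print(unit(c(1, 2), c(0.5, -0.25), 0.1))
```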

```r
# load the package
library(nnet)
# load data
data(longley)
x <- longley[,1:6]
y <- longley[,7]
# fit model
fit <- nnet(Employed~., longley, size=12, maxit=500, linout=TRUE, decay=0.01)
# summarize the fit
summary(fit)
# make predictions
predictions <- predict(fit, x, type="raw")
# summarize accuracy
mse <- mean((y - predictions)^2)
print(mse)
```

Learn more about the **nnet** function and the nnet package.

## Summary

In this post you discovered 4 non-linear regression methods with recipes that you can copy-and-paste for your own problems.

For more information, see Chapter 7 of Applied Predictive Modeling by Kuhn and Johnson, which provides an excellent introduction to non-linear regression with R for beginners.

#### About Jason Brownlee

Jason Brownlee, PhD is a machine learning specialist who teaches developers how to get results with modern machine learning methods via hands-on tutorials.